Ant Group Unveils China’s First Multimodal AI Assistant with Code-Driven Outputs

HANGZHOU, China, November 18, 2025–(BUSINESS WIRE)–Ant Group today launched LingGuang, a next-generation multimodal AI assistant and the first of its kind in China to interact with users through code-driven outputs. Able to understand and produce language, images, voice, and data, LingGuang delivers precise, structured responses to complex queries through 3D models, audio…

OpenAI o3 Model Revolutionizes Multimodal LLM App Development in 2025

The Rise of Multimodal AI in Application Development. In the rapidly evolving field of artificial intelligence, OpenAI’s latest model, o3, is reshaping how developers build large language model (LLM) applications. Released in April 2025, o3 introduces advanced capabilities for handling multimodal inputs, combining text, images, audio, and more, while delivering structured outputs that ensure reliability and integration…

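To make the pattern described above concrete, here is a minimal sketch of a multimodal request with a structured, JSON-schema-constrained output, using the official OpenAI Python SDK. The model name, image URL, and schema fields are illustrative assumptions rather than details taken from the article.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Combine a text instruction with an image input, and constrain the reply
# to a fixed JSON schema so downstream code can parse it reliably.
response = client.chat.completions.create(
    model="o3",  # assumed model identifier; substitute whichever model you have access to
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart and extract its data points."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},  # placeholder URL
            ],
        }
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "chart_summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "summary": {"type": "string"},
                    "data_points": {"type": "array", "items": {"type": "number"}},
                },
                "required": ["summary", "data_points"],
                "additionalProperties": False,
            },
        },
    },
)

# The reply is a JSON string guaranteed to match the schema above.
print(response.choices[0].message.content)

Because the output is schema-constrained, the calling application can pass the reply straight to json.loads() and use the fields directly instead of parsing free-form text.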