May 13, 2024 · GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs.
GPT-4 is available on ChatGPT Plus and as an API for developers to build applications and services. We’re excited to see how people use GPT-4 as we work towards developing technologies that empower everyone.
May 13, 2024 · GPT-4o is our newest flagship model that provides GPT-4-level intelligence but is much faster and improves on its capabilities across text, voice, and vision. Today, GPT-4o is much better than any existing model at understanding and discussing the images you share.
May 13, 2024 · GPT-4o is OpenAI's new flagship model that can reason across audio, vision, and text in real time. GPT-4o will initially be available in ChatGPT and the API as a text and vision model (ChatGPT will continue to support voice via the pre-existing Voice Mode feature).
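Since GPT-4o is exposed through the API as a text and vision model, a request can combine a text prompt with an image in a single message. A minimal sketch of assembling such a multimodal payload, assuming the message shape used by OpenAI's Chat Completions API (the helper function and the example image URL are ours, for illustration):

```python
def build_vision_messages(prompt: str, image_url: str) -> list:
    """Build a chat message list that pairs a text prompt with an image URL,
    in the content-parts format accepted by the Chat Completions API."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

messages = build_vision_messages(
    "What is shown in this image?",
    "https://example.com/photo.jpg",  # placeholder URL for illustration
)

# With the `openai` SDK installed and OPENAI_API_KEY set, the request
# could then be sent along the lines of:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(model="gpt-4o", messages=messages)
#   print(resp.choices[0].message.content)
```

The actual API call is left as a comment so the sketch runs without credentials; only the payload construction is shown live.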
ChatGPT helps you get answers, find inspiration and be more productive. It is free to use and easy to try. Just ask and ChatGPT can help with writing, learning, brainstorming and more.
May 13, 2024 · OpenAI is launching GPT-4o, an iteration of the GPT-4 model that powers its hallmark product, ChatGPT. The updated model “is much faster” and improves “capabilities across text, vision, and…”
May 13, 2024 · Microsoft is thrilled to announce the launch of GPT-4o, OpenAI’s new flagship model, on Azure AI. This groundbreaking multimodal model integrates text, vision, and audio capabilities, setting a new standard for generative and conversational AI experiences.