GPT-4, the fourth generation of OpenAI’s large language model (LLM), is expected to be released soon. Compared to GPT-3.5, GPT-4 is expected to be even more sophisticated and better at understanding natural language. Microsoft, OpenAI’s major partner and investor, intends to discuss the model at an upcoming AI in Focus – Digital Kickoff event.
According to Windows Central, Andreas Braun, Chief Technology Officer of Microsoft Germany, confirmed the upcoming release of GPT-4 and hinted that the new model would have intriguing new capabilities. One of its most prominent features will be multimodality, which enables a model to process and understand information across several modes, such as audio, images, and text.
“We will introduce GPT-4 next week, where we have multimodal models that offer completely different possibilities – for example, videos.”
GPT-4 is on its way and will most likely be released this year, though it is not yet known when it will be publicly available in ChatGPT. According to The New York Times, it could arrive as early as the first quarter of this year; since it is already March, the launch would be only weeks away.
Microsoft described GPT-4 as a multimodal model that can work with text, pictures, and even video, and indicated it would launch in March.
How Microsoft and OpenAI will use the model is currently unclear. Because it may debut for research purposes first, it could be a while before it appears in ChatGPT and Microsoft’s Bing Chat.