Tuesday, April 16, 2024

OpenAI introduced GPT-4, a more creative, "smarter", and more capable model than ChatGPT

In line with earlier announcements and expectations, a new multimodal AI system was presented to the public for the first time this Tuesday, one that can take both text and images as input.

ChatGPT, the chatbot based on the GPT-3.5 model, as well as its predecessor, GPT-3, has received a successor. Its name, as has been known for some time, is GPT-4. It is a large multimodal AI model that builds on the purely textual understanding of previous models. In addition to a text prompt, GPT-4 can also take an image as input, recognize what is in it, analyze it, and base its answer on that; the answer itself will continue to be exclusively in text form (in almost any language of the world).

Better on tests
OpenAI, the organization behind all these ventures, described GPT-4 as a system that, while still less capable than humans, can compete with them in many scenarios. This is supported by results on various standardized tests, on which the new model achieved scores that would place it in the top 10% of human test takers (for example, on college entrance or bar exams). By comparison, GPT-3.5 scored in the bottom 10% on the same tests.

GPT-4, they say, is more creative and more precise, can solve more complex logical tasks, and gives better results than ChatGPT. It can program in almost all programming languages, write songs as well as technical documentation, compose music, mimic the style of individual authors, and, as a result, give much longer answers of up to 25,000 words.

OpenAI's experts have spent the last few months making GPT-4 safer, steering it away from problematic content, and grounding it more firmly in facts. Experience and data collected through ChatGPT were also used in training, but the model description still states that it is "prone to hallucinations" and can "lean" toward a particular side in matters of belief and the like. Still, they say, it will behave much like the systems we already know, while its advantages will come to the fore in specific tasks.

Only with a subscription
The biggest functional advance is the image recognition mentioned above: in the future you will be able to ask GPT-4 something like "explain this picture to me and why it's funny", and it should be able to do so. External applications, such as those that help blind and visually impaired users, will of course benefit from this the most.

GPT-4 is available now, but only through the developer API and to users who pay for a ChatGPT Plus subscription.
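
For developers, access goes through OpenAI's existing chat API. Below is a minimal sketch of what a call combining a text prompt and an image could look like, using the official openai Python SDK (v1.x); the model name, the image URL, and whether vision input is enabled for your account are assumptions, so check OpenAI's documentation for the models you can actually use.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: a vision-capable GPT-4 model available to your account
    messages=[
        {
            "role": "user",
            # a single user message may mix text and image parts
            "content": [
                {"type": "text",
                 "text": "Explain this picture to me and why it's funny."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/meme.jpg"}},  # placeholder image URL
            ],
        }
    ],
)

# The answer comes back as text, as described above
print(response.choices[0].message.content)
```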
