GPT-4, A New Foundation for ChatGPT

OpenAI, an artificial intelligence (AI) research company, has announced the latest iteration of the natural language model that drives ChatGPT, a chatbot that has attracted significant attention and a fast-growing user base.


In a recent blog post, OpenAI unveiled its latest large language model, which is expected to outperform its predecessor, GPT-3.5. News of GPT-4's release leaked last week, when Andreas Braun, CTO of Microsoft Germany, mentioned that it would launch this week.


According to the company, the forthcoming GPT-4 large language model will differ from its predecessors by incorporating a "multimodal system" capable of analyzing not only text but also images, videos, and audio.


Braun stated that "there we will have multimodal models that will offer completely different possibilities." The remark appears to allude to the new capabilities of OpenAI's GPT-4, including multimodal processing and the ability to handle input in multiple languages beyond English.


Although large language models lack innate intelligence, they can model the relationships between words, and GPT-4's grasp of those relationships and of context is markedly stronger than its predecessor's. For instance, while ChatGPT scored in the 10th percentile on a uniform bar exam, GPT-4 scored in the 90th percentile. GPT-4's vision capabilities likewise lifted it to the 99th percentile on the Biology Olympiad, versus the 31st percentile for ChatGPT.


What about visual input?

OpenAI has stated that GPT-4 will be able to take images as input and produce descriptions, categorizations, and analyses. In other words, ChatGPT and Bing will be able to "see" their surroundings, or at least interpret visual material such as the results of an image search.


This gives GPT-4 practical, real-world applications, as demonstrated by the Be My Eyes app, which assists visually impaired users by describing what their smartphone camera sees. In a video aimed at developers, OpenAI's president and co-founder, Greg Brockman, also showed GPT-4 analyzing a hand-drawn sketch and generating the code for a working website from it.
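For illustration, a request that pairs an image with a text prompt might be assembled like the sketch below. This follows the general shape of a chat-completions message payload; the model name, image URL, and helper function are placeholders for this example, and nothing is actually sent over the network.

```python
# Illustrative sketch only: the shape of a multimodal chat request in
# which an image URL is passed alongside a text prompt. The model name
# and URL are placeholders; no request is sent here.

def build_image_request(prompt: str, image_url: str, model: str = "gpt-4") -> dict:
    """Assemble a chat payload mixing a text prompt and an image URL."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_image_request(
    "Describe this sketch of a website layout.",
    "https://example.com/sketch.png",
)
print(payload["messages"][0]["content"][0]["text"])
```

The point is the structure: the user turn carries a list of content parts, so a caption request and the image travel together in a single message.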


Longer output

With its ability to process more than 25,000 words of text, GPT-4 can be used to generate long-form content, sustain extended conversations, and search and analyze documents. According to OpenAI, ChatGPT's results using GPT-4 will grow not only in length but also in creativity: GPT-4 is more collaborative and inventive than previous versions, able to generate, edit, and iterate with users on creative and technical writing tasks such as composing songs, writing screenplays, or learning a user's writing style.


Is it safer?

OpenAI reports that it incorporated feedback from human users of ChatGPT to improve GPT-4's behavior, and that it enlisted 50 human experts to vet the model for AI safety. Whether these efforts will bear fruit in the long term remains to be seen, but the intelligence of ChatGPT and Bing has undeniably increased in other respects.
