OpenAI announces new AI model ‘GPT-4’
San Francisco: Microsoft-backed OpenAI has announced its new large multimodal model ‘GPT-4’, which accepts image and text input. “We’ve created GPT-4, the latest milestone in OpenAI’s effort to scale deep learning,” the company said in a blog post on Tuesday. It added that it spent six months iteratively aligning GPT-4, resulting in its best results yet.
Compared to GPT-3.5, the new AI model is more reliable, more creative and better able to handle complex instructions. GPT-4 outperforms existing large language models (LLMs), including most state-of-the-art (SOTA) models, which may include benchmark-specific tuning or additional training protocols. The company is also using the new model internally, where it supports functions such as customer support, sales, content moderation and programming.
Unlike text-only models, GPT-4 can accept a prompt containing both text and images, allowing users to specify any vision or language task. The GPT-4 base model, like earlier GPT models, was trained to predict the next word in a document, using both licensed and publicly available data. ChatGPT Plus subscribers will get GPT-4 access at chat.openai.com, while developers can sign up for the GPT-4 API waitlist. “We anticipate that GPT-4 will become an important tool in improving people’s lives by powering many applications,” the company said.
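As an illustration, a mixed text-and-image request of the kind described above might be assembled like this. This is a minimal sketch in Python: the function name and the exact payload fields are assumptions for illustration, not OpenAI’s published API specification.

```python
# Hypothetical sketch: builds a request body combining text and an image,
# mirroring the multimodal prompts the article describes. The field names
# here are assumed, not taken from OpenAI's official documentation.

def build_multimodal_prompt(text: str, image_url: str) -> dict:
    """Assemble an illustrative GPT-4 request mixing text and image input."""
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": text},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Example usage: the payload would then be sent to the API endpoint.
payload = build_multimodal_prompt(
    "What is shown in this picture?",
    "https://example.com/photo.png",
)
print(payload["model"])
```

In a real integration, developers would send a body like this to the GPT-4 API once granted access via the waitlist.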