OpenAI released the latest version of ChatGPT, the artificial intelligence language model making significant waves in the tech industry, on Tuesday.
GPT-4, the latest model, can understand images as input, meaning it can look at a photo and give the user general information about the image.
The language model also draws on a broader knowledge base, allowing it to provide more accurate information and write code in all major programming languages.
GPT-4 can now read, analyze or generate up to 25,000 words of text and appears markedly smarter than its predecessor: it scored in the 90th percentile on the Uniform Bar Exam, while the previous model scored in the 10th, according to OpenAI.
ChatGPT, which was released only a few months ago, is already considered the fastest-growing consumer application in history. The app hit 100 million monthly active users in just two months. TikTok took nine months to reach that many users, and Instagram took nearly three years, according to a UBS study.
"While less capable than humans in many real-world scenarios, [GPT-4] exhibits human-level performance on various professional and academic benchmarks," OpenAI wrote in its press release, adding that the language model scored a 700/800 on the math SAT.
Though impressive, OpenAI acknowledged the program is still "far from perfect."
"It is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it," OpenAI CEO Sam Altman tweeted.
Artificial intelligence models, including ChatGPT, have raised concerns and generated disruptive headlines in recent months. In education, students have been using the systems to complete writing assignments, and educators are torn on whether the technology is disruptive or could be used as a learning tool.
These systems have also been prone to generating inaccurate information – Google's AI chatbot, Bard, notably made a factual error in its first public demo. This is a flaw OpenAI hopes to improve upon – GPT-4 is 40% more likely to produce accurate information than its predecessor, according to OpenAI.
Misinformation and potentially biased information are also subjects of concern. AI language models are trained on large datasets, which can contain biases related to race, gender, religion and more, and the models can reproduce those biases in discriminatory responses.
Many have pointed out ways malicious actors could use models like ChatGPT, such as crafting phishing scams or deliberately spreading misinformation to disrupt important events like elections.
OpenAI says it "spent months making [ChatGPT] safer," adding the company is working with "over 50 experts for early feedback in domains including AI safety and security."
GPT-4 is 82% less likely to provide users with "disallowed content," referring to illegal or morally objectionable content, according to OpenAI.