- Google has released Gemini, a new generative AI model, to compete with AI offerings from rival OpenAI.
Tech giant Google has released Gemini, its new generative AI model, to compete with OpenAI’s GPT models.
Google calls Gemini the most capable and general-purpose AI it’s developed so far.
The tech giant added that it plans to make the most advanced version of this large language model (LLM) more widely available next year. The LLM is multimodal, meaning it can understand different types of information, including text, audio, images, and video.
According to Google, Gemini will be available in three models:
- Gemini Ultra, the largest and most capable, for highly complex tasks.
- Gemini Pro, for a wide range of tasks.
- Gemini Nano, for Android developers who want to build Gemini-powered apps that run on-device. For instance, Gemini Nano now lets people summarize recordings made with the Recorder app on the Pixel 8 Pro phone (in English only).
Is Google Gemini Better Than OpenAI’s GPT Models?
In a press conference, Sissie Hsiao, the Google vice president who oversees its AI chatbot, Bard, said Gemini Pro outperformed GPT-3.5 in six out of eight industry benchmarks. But Google’s most advanced model, Gemini Ultra, beat the newer GPT-4 in just one of the eight benchmarks.
In the wake of OpenAI’s release of ChatGPT about a year ago, tech giants have been scrambling to launch their own chatbots and LLMs to compete with the AI startup.
But Google’s recent evaluations of Gemini suggest that the company still has a way to go to match or surpass OpenAI’s offerings.
Here’s What Else You Need To Know About Google’s Gemini:
Bard Gets An Update
Bard is now upgraded with Gemini Pro, which gives the chatbot more advanced reasoning and understanding, among other capabilities, according to Google.
Gemini Pro–backed Bard is available only in English, in more than 170 countries. Bard will be integrated with Gemini Ultra next year, Google said.
In the coming months, the company will add Gemini across its other products, including Search, Google Ads, and the Chrome browser.
Gemini Runs On Google’s TPUs
The LLM runs on Google-made tensor processing units, or TPUs, specialized chips designed for training and running AI models.
But in the future, Gemini will be trained on both TPUs and graphics processing units (GPUs), said Amin Vahdat, vice president of Google’s Cloud AI, in a briefing. Nvidia makes the H100 GPU, a popular chip for powering generative AI products.
Will People Have To Pay For This?
Hsiao said Google is exploring how to make money from Gemini but had nothing specific to share.
Does Gemini Hallucinate?
“LLMs are still capable of hallucinating,” said Eli Collins, vice president of product at Google DeepMind, at the press conference.