Google's Gemini large language models now include grounding and context caching, features aimed at improving accuracy and reducing compute costs.
The tech giant also released Imagen 3, with improved image generation and SynthID digital watermarking, and made Gemini 1.5 Flash, with its 1 million-token context window, generally available.
Grounding attaches citations to LLM outputs, aiming to reduce hallucinations and differentiate Google from competitors such as OpenAI and Meta.
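The idea behind grounding can be sketched in a few lines: each generated claim is paired with the retrieved source that best supports it. The function and matching heuristic below are purely illustrative assumptions, not the Gemini API.

```python
# Conceptual sketch of grounding: attach a citation to each generated
# sentence from the best-matching retrieved source.
# (Illustrative only -- the real feature works on model internals.)

def ground_answer(answer_sentences, sources):
    """Pair each sentence with the source sharing the most words with it."""
    grounded = []
    for sentence in answer_sentences:
        # Naive relevance score: count of shared lowercase words.
        best = max(
            sources,
            key=lambda s: len(set(sentence.lower().split())
                              & set(s["text"].lower().split())),
        )
        grounded.append({"claim": sentence, "citation": best["url"]})
    return grounded

sources = [
    {"url": "https://example.com/a",
     "text": "The Eiffel Tower is 330 metres tall."},
    {"url": "https://example.com/b",
     "text": "Paris is the capital of France."},
]
result = ground_answer(["The Eiffel Tower is 330 metres tall."], sources)
print(result[0]["citation"])  # → https://example.com/a
```

A production system would score relevance with embeddings rather than word overlap, but the output shape, claims annotated with their supporting sources, is the same idea.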
Context caching, available in Gemini 1.5 models, cuts cost and latency by reusing previously processed context across requests instead of re-sending and re-processing it each time.
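The economics of context caching are easy to see with a toy cache: the expensive step of processing a long shared context runs once, and subsequent requests reuse the stored result. The class and method names below are assumptions for illustration, not Gemini's actual API.

```python
import hashlib

# Toy sketch of context caching: an expensive "process the context" step
# runs once per unique context; later requests reuse the stored result.
# Mirrors the idea behind Gemini 1.5 context caching, not its API.

class ContextCache:
    def __init__(self):
        self._store = {}
        self.misses = 0  # counts how often the expensive path ran

    def _key(self, context: str) -> str:
        # Content hash identifies an already-processed context.
        return hashlib.sha256(context.encode()).hexdigest()

    def get_processed(self, context: str):
        key = self._key(context)
        if key not in self._store:
            self.misses += 1
            # Stand-in for costly tokenization / prefill of the context.
            self._store[key] = context.split()
        return self._store[key]

cache = ContextCache()
doc = "a long shared document " * 1000
cache.get_processed(doc)  # first call pays the processing cost
cache.get_processed(doc)  # second call reuses the cached state
print(cache.misses)  # → 1
```

In the real feature, what gets reused is the model's processed representation of the cached tokens, so callers pay a reduced rate for them on every follow-up request.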
The generative AI market is highly competitive, with AWS, Microsoft, OpenAI, and smaller providers constantly introducing new features and models.
Google's generative AI technology is already in production at organizations such as Moody's, which uses it for credit ratings and large-scale data extraction, demonstrating real-world adoption.