WeSearch
SEARCH · GEMINI MODELS

Results for "gemini models".

9 stories match your query across our 700+ source catalog, ranked by relevance and recency.


ARXIV.ORG

A systematic evaluation of vision-language models for observational astronomical reasoning tasks

Vision-language models (VLMs) are increasingly proposed as general-purpose tools for scientific data interpretation, yet their reliability on real astronomical observations across diverse modalities r…

· 5 views
NEWSWEEK

GM Adds Digital In-Car Personalization to Current, Older Models

Google Gemini adds AI-assisted capabilities to new and older General Motors cars, trucks, and SUVs worldwide…

· 3 views
CLAUDEAI

Talkie: a 13B LLM trained only on pre-1931 text, with Claude Sonnet used to help test the model and judge its output

Researchers Alec Radford (GPT, CLIP, Whisper), Nick Levine, and David Duvenaud just released talkie: a 13 billion parameter language model trained exclusively on text published before 1931. No intern…

· 5 views
ARXIV.ORG

LLMs Corrupt Your Documents When You Delegate

Large Language Models (LLMs) are poised to disrupt knowledge work, with the emergence of delegated work as a new interaction paradigm (e.g., vibe coding). Delegation requires trust - the expectation t…

· 5 views
ARXIV.ORG

Don't Make the LLM Read the Graph: Make the Graph Think

We investigate whether explicit belief graphs improve LLM performance in cooperative multi-agent reasoning. Through 3,000+ controlled trials across four LLM families in the cooperative card game Hanab…

· 3 views
ARXIV.ORG

StoryTR: Narrative-Centric Video Temporal Retrieval with Theory of Mind Reasoning

Current video moment retrieval excels at action-centric tasks but struggles with narrative content. Models can see what is happening but fail to reason why it matters. This semantic …

· 3 views
ARXIV.ORG

Beyond the Attention Stability Boundary: Agentic Self-Synthesizing Reasoning Protocols

As LLM agents transition to autonomous digital coworkers, maintaining deterministic goal-directedness in non-linear multi-turn conversations has emerged as an architectural bottleneck. We identify and for…

· 3 views
REDDIT

Speculative decoding with Gemma-4-31B + Gemma-4-E2B enables 120-200 tok/s output speed for specific tasks

So for my project I was using up until now either Gemini 3 / 2.5 Flash or Flash-lite. All my use cases are not agentic, simply LLM workflows for atomic tasks like extracting references from the law, c…

· 8 views
REDDIT

Decreased Intelligence Density in DeepSeek V4 Pro

In the V3.2 paper, they mentioned: Second, token efficiency remains a challenge; DeepSeek-V3.2 typically requires longer generation trajectories (i.e., more tokens) to match the output quality of mode…

· 11 views