
Best LLM for Function Calling

Five models ranked for function calling tasks, sorted by benchmark quality (ELO) score, with price as a secondary factor.
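Function calling means the model returns a structured request to invoke one of your tools instead of plain text, and your application executes it. As a minimal, provider-agnostic sketch (the `get_weather` tool, its schema, and the simulated model response below are illustrative assumptions, not any vendor's actual API), the round trip is: declare a tool schema, let the model pick a tool and emit JSON arguments, then parse and dispatch to the matching local function:

```python
import json

# A local tool the model is allowed to call (illustrative stub).
def get_weather(city: str, unit: str = "celsius") -> dict:
    # A real app would hit a weather API here; this is hard-coded.
    return {"city": city, "temp": 21, "unit": unit}

TOOLS = {"get_weather": get_weather}

# JSON-Schema-style tool declaration sent alongside the prompt.
# Providers accept variants of this shape in their tools/functions field.
tool_spec = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def dispatch(tool_call: dict) -> dict:
    """Route a model-returned tool call to the matching local function."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])  # arguments arrive as a JSON string
    return fn(**args)

# Simulated model output: the model chose a tool and produced JSON arguments.
model_tool_call = {"name": "get_weather", "arguments": '{"city": "Paris"}'}
result = dispatch(model_tool_call)
print(result)  # this result would be sent back to the model as the tool's reply
```

The benchmark scores below measure how reliably each model picks the right tool and emits well-formed arguments for this kind of loop.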

Best Quality
Gemini 2.0 Flash (Google) · ELO 1330

Cheapest Option
Gemini 2.0 Flash (Google) · $0.10/1M input

All Models for Function Calling

#    Model              Provider   Input / 1M   Output / 1M   ELO
🥇   Gemini 2.0 Flash   Google     $0.10        $0.40         1330
🥈   Claude Sonnet 4.6  Anthropic  $3.00        $15.00        1310
🥉   Claude 3.5 Sonnet  Anthropic  $3.00        $15.00        1295
4    GPT-4o             OpenAI     $2.50        $10.00        1286
5    GPT-4 Turbo        OpenAI     $10.00       $30.00        1260

Why We Picked These Models

Gemini 2.0 Flash
$0.10/1M input · ELO 1330

Latest-gen quality with Flash-tier pricing.

Claude Sonnet 4.6
$3.00/1M input · ELO 1310

The latest Claude model, with improved agentic capabilities.

Claude 3.5 Sonnet
$3.00/1M input · ELO 1295

200K context window, best for long documents.

Compare Top Models

Gemini 2.0 Flash vs Claude Sonnet 4.6
Gemini 2.0 Flash vs Claude 3.5 Sonnet
Gemini 2.0 Flash vs GPT-4o
Gemini 2.0 Flash vs GPT-4 Turbo

Frequently Asked Questions

What is the best LLM for function calling?

Gemini 2.0 Flash by Google is rated the best model for function calling, with an ELO score of 1330, offering latest-gen quality at Flash-tier pricing.

What is the cheapest LLM for function calling?

Gemini 2.0 Flash is the most affordable option for function calling at $0.10 per 1M input tokens.
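At per-1M-token prices, the cost of a request is just token counts multiplied by the rates. A quick sketch using the Gemini 2.0 Flash prices from the table above (the token counts are made-up example values):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Dollar cost of one request given per-1M-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Gemini 2.0 Flash: $0.10 input / $0.40 output per 1M tokens.
cost = request_cost(2_000, 500, 0.10, 0.40)
print(f"${cost:.6f}")  # prints $0.000400
```

Even a request with a few thousand tokens of tool schemas and chat history costs a fraction of a cent at these rates.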

Is there a free LLM for function calling?

No completely free models are listed for function calling, but Gemini 2.0 Flash starts at a very low price.