Groq
llama-4-maverick-17b-128e
Universal · text · long context · extremely fast inference
Data Policy: Unknown
Upstream ToS has not been confirmed by legal; no commitments are made.
View Groq terms of service
Note: This provider's data policy is pending legal review; medical and legal customers should contact sales to confirm a DPA.
Context window: 1M tokens
Maximum output: 4.1K tokens
Knowledge cutoff: —
Overall rating: 8.07 / 10
Capability radar
Code: 8.0
Mathematics: 8.0
Reasoning: 8.0
Creativity: 7.5
Multilingual: 7.5
Long context: 8.0
Speed: 9.5
Pricing
Input: $0.24 / 1M tokens
Output: $0.72 / 1M tokens
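As a worked example of the rates above, the sketch below computes the cost of a single call. The token counts are hypothetical; the per-million prices are the ones listed in this table.

```python
# Listed rates for this model, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.24
OUTPUT_PRICE_PER_M = 0.72

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one call at this model's listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 100K-token prompt with a 4K-token completion:
cost = request_cost(100_000, 4_000)
print(f"${cost:.4f}")  # → $0.0269
```

Note that the 4.1K-token maximum output caps the completion side of any single call, so output cost per request is bounded at roughly $0.003.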
Supported features
No additional features
Call example
Calls go through the Nexevo.ai gateway, which is fully compatible with the OpenAI SDK; just replace the base URL.
curl https://api.nexevo.ai/v1/chat/completions \
-H "Authorization: Bearer $NEXEVO_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "llama-4-maverick-17b-128e",
"messages": [
{ "role": "user", "content": "Hello!" }
]
}'
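The same OpenAI-style request can be built from Python's standard library. This is a minimal sketch assuming only the gateway URL and model ID shown in the curl example above; with the official openai package you would instead pass that URL as base_url when constructing the client.

```python
import json
import os
import urllib.request

# Endpoint and model ID taken from the curl example above.
BASE_URL = "https://api.nexevo.ai/v1"
MODEL = "llama-4-maverick-17b-128e"

def build_chat_request(messages, api_key):
    """Build the same OpenAI-compatible chat completion request the curl example sends."""
    payload = {"model": MODEL, "messages": messages}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    [{"role": "user", "content": "Hello!"}],
    os.environ.get("NEXEVO_API_KEY", "sk-placeholder"),
)
print(req.full_url)  # https://api.nexevo.ai/v1/chat/completions
# To send it: urllib.request.urlopen(req) with a valid key set in NEXEVO_API_KEY.
```

Because the request body and headers match the OpenAI chat completions shape, any OpenAI-compatible client library should work the same way against this endpoint.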