With the rapid pace of change in LLMs, the ability to evaluate and deploy different models quickly has become critical. This is why model selection logic must remain separate from application logic. However, switching between models often requires adjusting parameters like reasoning level, temperature, and top_p as well.
Datawizz already makes model switching seamless—your application calls a Datawizz endpoint, which transparently routes to different models. Starting today, you can also configure default inference parameters for different models directly in Datawizz. We built this capability for three key reasons:
Simplifying Model Switching: When you upgrade to a newer or better model, default configurations often change and new parameters become available.
Optimizing Inference Parameters: Inference parameters significantly impact accuracy, speed, and cost. Datawizz now enables you to run experiments, identify optimal configurations, and apply those changes automatically.
Enabling Rule-Based Defaults: One size doesn't fit all—combined with rule-based routing, Datawizz now allows you to customize parameters for specific use cases.
Making Model Switching Easier
Switching to a different model often requires adjusting inference parameters. This happens when defaults change between model versions or when migrating across model families with fundamentally different behaviors.
For example, GPT-5 defaults to reasoning mode (reasoning_effort = medium), while GPT-5.1 defaults to non-reasoning mode (reasoning_effort = none). Upgrading from GPT-5 to 5.1 may therefore require manually adjusting the reasoning level.
Similarly, switching to Anthropic's Sonnet-4.5, which is also reasoning-capable, brings a different set of requirements: its reasoning is controlled by a thinking token budget, and using its reasoning capabilities requires setting temperature to 1. Making that switch means adjusting those parameters as well.
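To make this concrete, here is a minimal sketch of the two direct provider calls side by side; the model IDs, prompt, and token budget are illustrative, not recommendations:

```python
# Sketch: the same "use reasoning" intent maps to different parameters per provider.
from openai import OpenAI
from anthropic import Anthropic

prompt = "Summarize the key risks in this contract."

# OpenAI GPT-5.x: reasoning is controlled via reasoning_effort.
openai_client = OpenAI()
openai_response = openai_client.chat.completions.create(
    model="gpt-5.1",                      # defaults to non-reasoning (reasoning_effort = none)
    reasoning_effort="medium",            # opt back into reasoning after upgrading
    messages=[{"role": "user", "content": prompt}],
)

# Anthropic Sonnet-4.5: reasoning is controlled via a thinking token budget,
# and extended thinking requires temperature = 1.
anthropic_client = Anthropic()
anthropic_response = anthropic_client.messages.create(
    model="claude-sonnet-4-5",            # illustrative model ID
    max_tokens=2048,
    temperature=1,
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{"role": "user", "content": prompt}],
)
```

With default parameters configured in Datawizz, these provider-specific settings live in the endpoint rather than in your application code.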
Optimizing Inference Parameters
The most effective way to determine optimal inference parameters is through testing different values for your specific use case. Datawizz streamlines this process with built-in evaluation—you can benchmark the same model with various parameters to identify which configuration yields the best results for your needs.

Once you've identified the optimal configuration, you can deploy it instantly through default parameters. Datawizz makes both experimentation and configuration rollout easier than ever.
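For illustration, here is a rough sketch of the kind of parameter sweep that Datawizz's built-in evaluation automates, assuming an OpenAI-compatible client; the prompts, parameter grid, and scoring function are placeholders you would replace with your own:

```python
# Sketch: benchmark the same model under different parameter settings.
from openai import OpenAI

client = OpenAI()  # or an OpenAI-compatible client pointed at your gateway

prompts = ["Classify the sentiment of: 'The delivery was late again.'"]
settings = [
    {"reasoning_effort": "none"},
    {"reasoning_effort": "medium"},
]

def score(output: str) -> float:
    # Placeholder eval: swap in exact match, an LLM judge, latency, or cost.
    return float("negative" in output.lower())

for params in settings:
    total = 0.0
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-5.1",
            messages=[{"role": "user", "content": prompt}],
            **params,
        )
        total += score(resp.choices[0].message.content or "")
    print(params, total / len(prompts))
```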
Rule-Based Inference Parameters
Different requests or workflows within your application may perform best with different parameters. Consider these scenarios:
You might allocate more thinking budget or reasoning effort for complex questions
You might extend max_tokens for requests expecting longer outputs
You might adjust temperature or top_p for creative use cases
Datawizz enables this flexibility through metadata tags. When sending an inference request, you can include a metadata object. Datawizz then routes to different model-parameter combinations based on these tags.
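As a sketch, assuming an OpenAI-compatible Datawizz endpoint, a tagged request might look like this (the base URL, virtual model name, and tag names are illustrative):

```python
# Sketch: attach metadata tags so the endpoint can route and parameterize the request.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.datawizz.example/v1",  # hypothetical endpoint URL
    api_key="YOUR_DATAWIZZ_KEY",
)

response = client.chat.completions.create(
    model="web-app",  # virtual model name defined by the endpoint
    metadata={"task": "summarize", "complexity": "high"},  # routing tags
    messages=[{"role": "user", "content": "Summarize this support ticket: ..."}],
)
```

The application code stays the same no matter which model-parameter combination those tags ultimately route to; the mapping lives in the endpoint configuration.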

Setting Up Endpoints in Datawizz
Endpoints serve as the primary entry point for LLM requests and control model routing. Rather than having your application specify "model": "gpt-5.1", an endpoint exposes a virtual model name (like "model": "web-app"), so the underlying model can change without touching application code.
Each endpoint defines various upstreams—the models it routes requests to. It can load balance between models or route to specific models based on metadata tags in the LLM request (e.g., routing task=summarize to model X, or complexity=high to model Y).

Now, you can also specify default parameters for each upstream; Datawizz automatically attaches them to requests before forwarding them to the underlying model.
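Conceptually, the transformation looks something like this (a sketch, not an actual API; the underlying model ID and default values are illustrative):

```python
# Sketch: what the application sends vs. what the endpoint forwards
# once default parameters are configured on an upstream.
incoming_request = {
    "model": "web-app",  # virtual model name
    "messages": [{"role": "user", "content": "Draft a release note for v2.3"}],
}

# Hypothetical endpoint configuration: route to Sonnet-4.5 with these defaults.
upstream_defaults = {
    "temperature": 1,
    "thinking": {"type": "enabled", "budget_tokens": 2048},
}

# Datawizz attaches the defaults before forwarding to the underlying model.
forwarded_request = {
    **incoming_request,
    "model": "claude-sonnet-4-5",  # illustrative underlying model ID
    **upstream_defaults,
}
```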
An Iterative Workflow
Building AI agents, workflows, and applications isn't a one-time process—it's a continuous journey, and optimizing model selection and inference parameters is integral to this evolution. Prompts evolve, data changes, and new models emerge—your applications must adapt accordingly.

Datawizz is built to power continuous learning: collecting inference logs for observability, running experiments across different models and configurations, and seamlessly updating model routing. That's the power of Datawizz!


