We are adding support for Liquid templates in prompts (in addition to the already supported Mustache) to let developers create more advanced prompts! Read on to learn more about Liquid support, why it's important, and how you can use it to advance your prompt game on Datawizz.
The Growing Complexity of the Prompt Layer
As LLM applications mature, we're seeing a fundamental shift in where application logic lives. What started as simple template substitutions has evolved into complex conditional flows, data transformations, and dynamic content generation—all happening at the prompt layer.
This evolution makes sense. Prompts are where context meets capability. They're where your domain knowledge, business logic, and AI interactions converge. The better you can experiment with, version, and refine your prompts, the better your AI applications perform. But as applications grow from simple chatbots to sophisticated agents handling multi-step workflows, the limitations of basic templating become apparent.
That's why we're expanding Datawizz's prompt management capabilities to support Liquid templates alongside our existing Mustache support.
Real-World Use Cases: Where Liquid Shines
Let's look at concrete examples where Liquid's advanced logic transforms prompt management from simple substitution to intelligent orchestration.
Adaptive Context Window Management
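When context can exceed your token budget, Liquid's loops and arithmetic filters let the template itself decide how much retrieved content to include. Here is a minimal sketch; the variable names (token_budget, documents, doc.text) are illustrative assumptions, not a fixed Datawizz schema:

```liquid
{% comment %} Sketch: stop appending documents once a rough token budget is exhausted.
   token_budget, documents, and doc.text are hypothetical names. {% endcomment %}
{% assign token_budget = 4000 %}
{% assign used = 0 %}
{% for doc in documents %}
  {% assign doc_tokens = doc.text | size | divided_by: 4 %}
  {% assign projected = used | plus: doc_tokens %}
  {% if projected > token_budget %}{% break %}{% endif %}
{{ doc.text | truncatewords: 200 }}
  {% assign used = projected %}
{% endfor %}
```

Dividing character count by four is a common rough heuristic for English token estimation; swap in whatever estimate fits your tokenizer.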
Role-Based Prompt Customization
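A case statement keeps role-specific instructions in one template instead of maintaining a prompt per role. A sketch, assuming a hypothetical user.role variable:

```liquid
{% comment %} Sketch: user.role and the role values are illustrative assumptions. {% endcomment %}
{% case user.role %}
{% when "admin" %}
You may reference internal configuration and system details in your answer.
{% when "support" %}
Answer using the public knowledge base only, and escalate billing questions.
{% else %}
Provide general product guidance without exposing account-level data.
{% endcase %}
```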
Multi-Stage RAG Pipeline with Dynamic Retrieval
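For RAG prompts, Liquid's array filters can re-rank and cap retrieved chunks inline, and fall back gracefully when retrieval comes back empty. A sketch, assuming a hypothetical retrieved_chunks array with score and text fields:

```liquid
{% comment %} Sketch: retrieved_chunks and its fields are illustrative assumptions. {% endcomment %}
{% assign relevant = retrieved_chunks | sort: "score" | reverse %}
{% if relevant.size == 0 %}
No supporting documents were retrieved; answer from general knowledge and say so.
{% else %}
Use only the following sources:
{% for chunk in relevant limit: 5 %}
[{{ forloop.index }}] {{ chunk.text | truncatewords: 80 }}
{% endfor %}
{% endif %}
```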
Core Liquid Features for Prompt Engineering
Liquid brings several powerful features that make it particularly well-suited for prompt management:
Filters and Transformations
Liquid's filter pipeline allows inline data transformation without external processing:
{{ user_input | downcase | strip }} to normalize input
{{ price | times: tax_rate | round: 2 }} to calculate values
{{ description | truncatewords: 50 }} to control token usage
{{ date | date: "%Y-%m-%d" }} to format timestamps
Control Flow Constructs
Beyond basic conditionals, Liquid supports:
Case statements for multi-branch logic
For loops with built-in loop variables (forloop.index, forloop.first, forloop.last)
Break and continue for complex iteration control
Unless blocks for negative conditionals
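The constructs above combine naturally when rendering structured context such as conversation history. A sketch, assuming a hypothetical messages array:

```liquid
{% comment %} Sketch: messages and its fields are illustrative assumptions. {% endcomment %}
{% for message in messages %}
{% if forloop.first %}Conversation so far:{% endif %}
{{ forloop.index }}. {{ message.role }}: {{ message.content }}
{% unless forloop.last %}---{% endunless %}
{% endfor %}
```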
Variable Assignment and Manipulation
{% assign token_budget = 4000 %}
{% assign used_tokens = prompt | size | divided_by: 4 %}
{% assign remaining = token_budget | minus: used_tokens %}
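Assigned variables like these can then drive conditionals elsewhere in the same template, for instance to adapt the requested answer length to the remaining budget:

```liquid
{% comment %} Sketch: continues the budget variables assigned above. {% endcomment %}
{% if remaining < 500 %}
Answer briefly, in two to three sentences.
{% else %}
Provide a detailed, step-by-step answer.
{% endif %}
```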
Array and Object Operations
Work with structured data directly in templates:
{% assign sorted_results = results | sort: "relevance" | reverse %}
{% assign high_priority = tasks | where: "priority", "high" %}
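Filtered arrays like these can be iterated directly. A sketch, assuming the tasks have hypothetical title and due_date fields:

```liquid
{% comment %} Sketch: task.title and task.due_date are illustrative assumptions. {% endcomment %}
{% for task in high_priority %}
- {{ task.title }} (due {{ task.due_date | date: "%Y-%m-%d" }})
{% endfor %}
```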
Why We Chose Liquid
When evaluating templating engines for enhanced prompt management, we needed something that would complement our existing Mustache support while providing the advanced features our users were requesting. The decision came down to Liquid versus Jinja2, and here's why Liquid won.
Multi-Platform Compatibility
While Jinja2 is deeply rooted in the Python ecosystem, Liquid has mature implementations across multiple languages—Ruby, JavaScript, Python, Go, .NET, and more. This matters because Datawizz users integrate our prompt management across diverse tech stacks. A Python-centric solution would have created friction for teams using Node.js, Go, or other languages in their inference pipelines.
Security by Design
Liquid was built for untrusted template execution from day one. Shopify created it specifically to allow merchants to customize their stores without security risks. This means:
No arbitrary code execution
Controlled access to objects and methods
Sandboxed execution environment
Predictable resource consumption
Jinja2, while powerful, requires careful configuration to achieve similar security guarantees. Its design philosophy leans toward flexibility over safety-by-default, which makes sense for its typical use cases but adds risk in a multi-tenant prompt management system.
Gentle Learning Curve
Liquid's syntax is intentionally constrained and declarative. While this might seem limiting, it actually makes templates more maintainable and easier to reason about. Team members who aren't template experts can understand and modify Liquid templates without worrying about introducing subtle bugs or performance issues.
Performance Characteristics
Liquid's restricted feature set enables consistent performance. Template compilation is fast, execution is predictable, and there's no risk of accidentally creating expensive operations. When you're processing thousands of prompts per second, this predictability matters.
Ecosystem Alignment
Many of our users are already familiar with Liquid from other tools in the ecosystem. Shopify themes, Jekyll (and by extension GitHub Pages), and various documentation platforms use Liquid, creating a transferable skill set. This familiarity reduces onboarding time and increases adoption.
Getting Started
Liquid templates are now available in all Datawizz prompt management features. Your existing Mustache templates continue to work unchanged. To start using Liquid, simply select it as the template engine when creating a new prompt or prompt version, or update existing prompts through our API or UI.
The prompt layer is where the magic happens in LLM applications. With Liquid templates, that magic becomes more powerful, more maintainable, and more secure. We're excited to see what you build with it.
Thank You
And of course, a big shout-out to the contributors of the open-source LiquidJS project, without which this feature would not have been possible!