Updates

Leveling Up Prompt Management with Liquid Templates

Nov 11, 2025

12 min read

We are adding support for Liquid templates in prompts (in addition to the already supported Mustache) to let developers create more advanced prompts! Read on to learn more about Liquid support, why it matters, and how you can use it to advance your prompt game on Datawizz.

The Growing Complexity of the Prompt Layer

As LLM applications mature, we're seeing a fundamental shift in where application logic lives. What started as simple template substitutions has evolved into complex conditional flows, data transformations, and dynamic content generation—all happening at the prompt layer.

This evolution makes sense. Prompts are where context meets capability. They're where your domain knowledge, business logic, and AI interactions converge. The better you can experiment with, version, and refine your prompts, the better your AI applications perform. But as applications grow from simple chatbots to sophisticated agents handling multi-step workflows, the limitations of basic templating become apparent.

That's why we're expanding Datawizz's prompt management capabilities to support Liquid templates alongside our existing Mustache support.

Real-World Use Cases: Where Liquid Shines

Let's look at concrete examples where Liquid's advanced logic transforms prompt management from simple substitution to intelligent orchestration.

Adaptive Context Window Management

{% assign context_length = context_items | size %}
{% assign max_context = 10 %}
{% assign recent_cutoff = context_length | minus: 5 %}
Previous conversation context:
{% for item in context_items limit:max_context %}
  {% if forloop.index > recent_cutoff %}
    [{{ item.timestamp }}] {{ item.role }}: {{ item.content }}
  {% else %}
    [{{ item.timestamp }}] {{ item.role }}: {{ item.content | truncate: 100 }}
  {% endif %}
{% endfor %}
{% if context_length > max_context %}
  Note: Showing {{ max_context }} most recent messages out of {{ context_length }} total.
{% endif %}

Role-Based Prompt Customization

{% case user.subscription_tier %}
  {% when "enterprise" %}
    You have access to all features including custom model fine-tuning.
    Response style: Detailed technical analysis with citations.
    
    {% if user.industry == "healthcare" %}
      Apply HIPAA compliance considerations to all responses.
    {% elsif user.industry == "finance" %}
      Include relevant regulatory frameworks (SOX, GDPR) where applicable.
    {% endif %}
    
  {% when "professional" %}
    Provide comprehensive answers with examples.
    Daily rate limit: {{ 1000 | minus: user.daily_usage }} requests remaining.
    
  {% else %}
    Provide helpful but concise responses.
    {% if user.daily_usage >= 100 %}
      Note: You've reached your daily limit. Upgrade for unlimited access.
    {% endif %}
{% endcase %}
Query: {{ query }}

Multi-Stage RAG Pipeline with Dynamic Retrieval

{% assign confidence_threshold = 0.7 %}
{% comment %} Liquid's where filter only tests equality; where_exp (a Jekyll-style filter available in LiquidJS) supports comparisons {% endcomment %}
{% assign retrieved_docs = documents | where_exp: "doc", "doc.score > confidence_threshold" %}
{% if retrieved_docs.size == 0 %}
  No high-confidence documents found. Falling back to general knowledge.
  
{% elsif retrieved_docs.size > 5 %}
  Using top 5 most relevant documents from {{ retrieved_docs.size }} matches:
  {% for doc in retrieved_docs limit:5 %}
    Source {{ forloop.index }}: {{ doc.title }} (confidence: {{ doc.score | round: 2 }})
    {{ doc.content | truncate: 200 }}
  {% endfor %}
  
{% else %}
  Found {{ retrieved_docs.size }} relevant documents:
  {% for doc in retrieved_docs %}
    {{ doc.content }}
  {% endfor %}
{% endif %}
Answer the question based on the above context: {{ question }}

Core Liquid Features for Prompt Engineering

Liquid brings several powerful features that make it particularly well-suited for prompt management:

Filters and Transformations

Liquid's filter pipeline allows inline data transformation without external processing:

  • {{ user_input | downcase | strip }} - Normalize input

  • {{ price | times: tax_rate | round: 2 }} - Calculate values

  • {{ description | truncatewords: 50 }} - Control token usage

  • {{ date | date: "%Y-%m-%d" }} - Format timestamps
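Filters chain left to right, so several transformations can happen in a single expression. Here's a small illustrative sketch (variable names like article_text are placeholders, not part of any Datawizz schema):

{% assign summary = article_text | strip_html | truncatewords: 50 %}
Summarize the following excerpt:
{{ summary | downcase | strip }}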

Control Flow Constructs

Beyond basic conditionals, Liquid supports:

  • Case statements for multi-branch logic

  • For loops with built-in loop variables (forloop.index, forloop.first, forloop.last)

  • Break and continue for complex iteration control

  • Unless blocks for negative conditionals
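Together these constructs cover most iteration patterns a prompt needs. A brief sketch combining break, unless, and the forloop variables (plan_steps is an illustrative variable):

{% for step in plan_steps %}
  {% if forloop.index > 5 %}{% break %}{% endif %}
  Step {{ forloop.index }} of {{ forloop.length }}: {{ step }}
{% endfor %}
{% unless plan_steps.size > 0 %}
No plan provided; answer the question directly.
{% endunless %}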

Variable Assignment and Manipulation

{% assign token_budget = 4000 %}

{% assign used_tokens = prompt | size | divided_by: 4 %}

{% assign remaining = token_budget | minus: used_tokens %}
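The assigned variables can then drive conditionals later in the template. For example (a sketch; dividing character count by four is only a rough token heuristic, not an exact count):

{% if remaining < 500 %}
Keep your answer brief; little context budget remains.
{% else %}
Provide a detailed, well-structured answer.
{% endif %}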

Array and Object Operations

Work with structured data directly in templates:

{% assign sorted_results = results | sort: "relevance" | reverse %}

{% assign high_priority = tasks | where: "priority", "high" %}
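Filters chain on arrays just as they do on strings, so a filtered, sorted list can feed a loop directly. An illustrative sketch (the task fields here are hypothetical):

{% assign top_tasks = tasks | where: "priority", "high" | sort: "due_date" %}
Outstanding high-priority tasks ({{ top_tasks | size }}):
{% for task in top_tasks limit:3 %}
- {{ task.name }} (due {{ task.due_date | date: "%Y-%m-%d" }})
{% endfor %}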

Why We Chose Liquid

When evaluating templating engines for enhanced prompt management, we needed something that would complement our existing Mustache support while providing the advanced features our users were requesting. The decision came down to Liquid versus Jinja2, and here's why Liquid won.

Multi-Platform Compatibility

While Jinja2 is deeply rooted in the Python ecosystem, Liquid has mature implementations across multiple languages—Ruby, JavaScript, Python, Go, .NET, and more. This matters because Datawizz users integrate our prompt management across diverse tech stacks. A Python-centric solution would have created friction for teams using Node.js, Go, or other languages in their inference pipelines.

Security by Design

Liquid was built for untrusted template execution from day one. Shopify created it specifically to allow merchants to customize their stores without security risks. This means:

  • No arbitrary code execution

  • Controlled access to objects and methods

  • Sandboxed execution environment

  • Predictable resource consumption

Jinja2, while powerful, requires careful configuration to achieve similar security guarantees. Its design philosophy leans toward flexibility over safety-by-default, which makes sense for its typical use cases but adds risk in a multi-tenant prompt management system.

Gentle Learning Curve

Liquid's syntax is intentionally constrained and declarative. While this might seem limiting, it actually makes templates more maintainable and easier to reason about. Team members who aren't template experts can understand and modify Liquid templates without worrying about introducing subtle bugs or performance issues.

Performance Characteristics

Liquid's restricted feature set enables consistent performance. Template compilation is fast, execution is predictable, and there's no risk of accidentally creating expensive operations. When you're processing thousands of prompts per second, this predictability matters.

Ecosystem Alignment

Many of our users are already familiar with Liquid from other tools: Shopify themes, Jekyll (and GitHub Pages), and various documentation platforms all use Liquid, creating a transferable skill set. This familiarity reduces onboarding time and increases adoption.

Getting Started

Liquid templates are now available in all Datawizz prompt management features, and your existing Mustache templates continue to work unchanged. To start using Liquid, simply select it as your template engine when creating new prompts or prompt versions through our API or UI.

The prompt layer is where the magic happens in LLM applications. With Liquid templates, that magic becomes more powerful, more maintainable, and more secure. We're excited to see what you build with it.

Thank You

And of course, a big shout-out to the contributors of the open-source LiquidJS project, without which this feature would not have been possible!
