What’s new in the world of AI

AI is a broad topic. Explore it through our articles.

Continuous Learning: Closing the Loop Between Runtime and Training (AI Models)
Fine-Tuning Models with Tinker on Datawizz! (Updates)
Controlling Model Configs Through Datawizz (Updates)
Leveling Up Prompt Management with Liquid Templates (Updates)
More Flexibility for LLM Evaluators with Custom Dependencies (Updates)
Run Custom Evaluations During Training with Datawizz (Tutorials)
When and How to Train on Completions Only When Fine-Tuning LLMs (Tutorials)
Announcing Datawizz Seed Raise (Industry News)
Apple Foundation Models Framework - 10 Best Practices for Developing AI Apps (Tutorials)
Fine-Tuning Gemma 3 with Multimodal (Vision/Image) Inputs (Tutorials)
Apple Foundation Models Framework Benchmarks and Custom Adapter Training with Datawizz (Tutorials)
The Death of RAG? Do We Still Need Retrieval-Augmented Generation in the Age of Large Contexts? (Benchmarks)
Are Newer LLMs Hallucinating More? Ways to Solve AI Hallucinations (AI Models)
Writing Effective Prompts for OpenAI GPT-4.1 (Tutorials)
Understanding LoRA Adapter Rank and Alpha Parameters (Tutorials)
Fast GRPO Fine-Tuning for Q&A: How We Outperformed OpenAI’s O1-Preview with Qwen-0.5B & Llama3.2-1B in 50 Minutes (AI Models)
What Are Low-Rank (LoRA) Adapters? (AI Models)
Top Tiny Open-Source Language Models (Up to 1B Parameters) in Early 2025 (Benchmarks)
Top 5 Open-Source LLMs (3B-8B Parameters) to Watch in Early 2025 (Benchmarks)
DeepScaleR - Tiny 1.5B Model Beats OpenAI O1 in Math (Industry News)
Evaluating a Specialized Language Model (SLM) Against Its Teacher (Tutorials)
Outperforming GPT-4 on News Classification - Achieving 95% Accuracy with a Fine-Tuned Llama Model (Benchmarks)
