Explore SimIIR

Discover SimIIR Studio through this hands-on interactive tutorial!
Learn how to build, customize, and run user simulations step by step.

STEP 1

Choose Your Template

The first step in simulating user search behavior is establishing your simulation context.
Choose from three pre-designed templates, each tailored to a different search scenario.

Basic Search Simulator

A simple search simulation with keyword-based queries

LLM-Powered Search

Advanced search using Large Language Models

Conversational Assistant

Multi-turn conversational search experience

STEP 2

Build Your Workflow

Visualize and customize your simulation pipeline by dragging components onto the canvas, then connecting and configuring them to define your workflow. A conceptual sketch of how the component families fit together follows the component library below.

Component Library

TrecTopicQueryGenerator
Generates one query from the TREC topic title
TrecTopicAllTextQueryGenerator
Generates query from all TREC topic text (title + description)
SingleTermQueryGenerator
Returns single-term queries ranked by frequency or discriminatory value
BiTermQueryGenerator
Generates two-term queries from topic title and description
TriTermQueryGenerator
Generates three-term queries combining title and description terms
SmarterQueryGenerator
Advanced query generator using language models for term ranking
BasicLangChainQueryGenerator
LLM-based query generation using LangChain (supports Ollama, OpenAI)
PredeterminedQueryGenerator
Uses pre-defined query list from configuration file
AdditionalTermsQueryGenerator
Adds additional terms to refine existing queries
GoogleSuggestQueryGenerator
Uses Google autocomplete suggestions for query generation
GoogleSuggestRandomQueryGenerator
Randomly selects from Google autocomplete suggestions
SingleTermReversedQueryGenerator
Single-term queries with reversed term ordering
TriTermReversedQueryGenerator
Three-term queries with reversed term ordering
SingleSmarterInterleavedQueryGenerator
Interleaved single-term queries with smart ranking
SingleTriInterleavedQueryGenerator
Interleaves single and tri-term query strategies
SingleReversedTriInterleavedQueryGenerator
Interleaves single reversed and tri-term queries
SingleReversedTriReversedInterleavedQueryGenerator
Interleaves reversed single-term and reversed tri-term queries
RefiningSmarterQueryGenerator
Refines queries iteratively using smart language models
DudSmartQueryGenerator
Smart query generator with fallback for poor performers
QS34QueryGenerator
Query generator implementing QS34 strategy
TrecTextClassifier
Always returns true - treats all results as relevant
InformedTrecTextClassifier
Uses TREC qrels to determine actual result relevance
StochasticInformedTrecTextClassifier
Probabilistic relevance classification using TREC qrels
LangChainTextClassifier
LLM-based result classification via LangChain (snippets and documents)
LanguageModelTextClassifier
Classifies results using language model similarity
LanguageModelTopicTextClassifier
Topic-based language model classifier
PerfectTextClassifier
Perfect classifier with 100% accuracy (for baselines)
IfindTextClassifier
Result classifier using iFIND framework
FixedDepthDecisionMaker
Stops after examining a fixed number of results
SatisfactionDecisionMaker
Stops after finding N relevant results
SatisfactionFrustrationCombinationDecisionMaker
Combines satisfaction and frustration stopping rules
LimitedSatisfactionDecisionMaker
Satisfaction-based stopping with result depth limit
TimeLimitedSatisfactionDecisionMaker
Satisfaction stopping with time constraints
SequentialNonRelevantDecisionMaker
Stops after N consecutive non-relevant results
SequentialNonRelevantSkipDecisionMaker
Skips sequential non-relevant results before stopping
TotalNonRelevantDecisionMaker
Stops after total count of non-relevant results
TotalNonRelevantSkipDecisionMaker
Stops with skip strategy for non-relevant results
TimeDecisionMaker
Time-based stopping decision maker
TimeSinceRelevancyDecisionMaker
Stops based on time elapsed since last relevant result
IFTBasedDecisionMaker
Information Foraging Theory based stopping
RBPDecisionMaker
Rank-Biased Precision based stopping strategy
PatchCombinationDecisionMaker
Combines multiple patch strategies for stopping
PatchCombinationSimplifiedDecisionMaker
Simplified patch combination stopping strategy
InstDecisionMaker
Instance-based stopping decision maker
SimpleLangChainDecisionMaker
LLM-based stopping decision using LangChain
DifferenceDecisionMaker
Stops based on document difference measures
RandomDecisionMaker
Random stopping decision for baseline comparisons
WhooshSearchInterface
Local search using Whoosh (supports TFIDF, BM25, PL2)
WhooshDiversifiedSearchInterface
Diversified search results using Whoosh
FileBasedConversationalInterface
Conversational search using pre-defined response dictionary
PyTerrierSearchInterface
Search interface using PyTerrier framework
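
To make the data flow concrete, here is a conceptual sketch (plain Python, not the actual SimIIR API) of how the four component families cooperate during one simulated session: a query generator proposes queries, a search interface returns results, a classifier judges each result, and a decision maker decides when to stop. The class and method names below are illustrative placeholders only.

# Conceptual sketch only: names and signatures are illustrative placeholders,
# not the real SimIIR classes. In practice the components are wired together
# by the workflow / XML configuration rather than a hand-written loop.
def run_session(query_generator, search_interface, classifier, decision_maker, topic):
    """Simulate one user session for a single topic."""
    for query in query_generator.generate(topic):            # e.g. a TrecTopicQueryGenerator
        results = search_interface.search(query)             # e.g. a WhooshSearchInterface
        for rank, result in enumerate(results, start=1):
            relevant = classifier.is_relevant(result)        # e.g. a TrecTextClassifier
            if decision_maker.should_stop(rank, relevant):   # e.g. a FixedDepthDecisionMaker
                break                                        # stop examining, issue next query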

Quick Tips & Shortcuts

Select a template in Step 1 or drag components from the sidebar


STEP 3

Explore Components

View and edit the source code of components in your workflow. Click on any component to see its implementation and customize it for your needs.
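
Beyond editing existing components, you can also create a custom one. As a rough illustration of the kind of logic a custom stopping rule encodes, here is a minimal, framework-agnostic Python sketch; a real component must subclass the SimIIR base class shown in the code editor and follow its method signatures, which this sketch does not attempt to reproduce.

# Framework-agnostic sketch of a stopping rule: stop once N relevant results
# have been seen, or after a hard depth limit, whichever comes first.
# (A real SimIIR decision maker subclasses the framework's base class instead.)
class SatisfactionWithDepthLimit:
    def __init__(self, target_relevant=3, max_depth=20):
        self.target_relevant = target_relevant
        self.max_depth = max_depth
        self.relevant_seen = 0

    def should_stop(self, rank, is_relevant):
        if is_relevant:
            self.relevant_seen += 1
        return self.relevant_seen >= self.target_relevant or rank >= self.max_depth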

Workflow Components

This panel lists every component in your workflow (added in Step 2). Click a component to open its source code, or create a custom component from scratch and edit it here.


STEP 4

Run Simulation

Run a simulation to see how it works! This tutorial runs the original template for demonstration.
To test your custom components, head to Playground or Shared Tasks after completing the tour.

Two Ways to Experience Simulations

Mock Demo: Instant preview with simulated logs, no backend required
Real Simulation: Executes your actual SimIIR XML config in a Docker container

Simulation Console

Live execution logs from your simulation


STEP 5

Scale with API

Learn how to use the SimIIR Studio API locally to run large-scale experiments with multiple users and topics. Clone the repository, set up the API, and execute your configured workflow at scale.

1. Clone SimIIR Studio

Get the local API wrapper for large-scale experiments

# Clone simIIR framework
git clone https://github.com/simint-ai/simiir-3.git simiir
cd simiir && pip install -r requirements.txt && cd ..

# Set up and run the API wrapper (simiir-api)
cd simiir-api
poetry install
poetry run simiir-api
# API will be available at http://localhost:8000

2. Run Your Experiment

Execute your configured simulation via the API endpoints below; a minimal Python client sketch follows the list.

POST /simulations/ — Create simulation with config
POST /simulations/{id}/start — Start execution
POST /simulations/{id}/pause — Pause simulation
POST /simulations/{id}/resume — Resume simulation
GET /simulations/{id} — Check status & progress
GET /simulations/{id}/results — Download results
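
The sketch below drives these endpoints from Python. It assumes the API wrapper is running locally at http://localhost:8000 (as in the setup above); the payload and response field names ("config", "id", "status"), the "completed" status value, and the my_workflow.xml filename are placeholders for illustration, so check the simiir-api schema for the exact shapes.

# Minimal client sketch; field names and status values are assumptions,
# verify them against the simiir-api documentation/schema.
import time
import requests

BASE = "http://localhost:8000"

# 1. Create a simulation from an exported workflow configuration (placeholder filename).
with open("my_workflow.xml") as f:
    sim = requests.post(f"{BASE}/simulations/", json={"config": f.read()}).json()
sim_id = sim["id"]

# 2. Start execution.
requests.post(f"{BASE}/simulations/{sim_id}/start")

# 3. Poll status until the run finishes.
while requests.get(f"{BASE}/simulations/{sim_id}").json().get("status") != "completed":
    time.sleep(5)

# 4. Download the results.
results = requests.get(f"{BASE}/simulations/{sim_id}/results")
with open("simulation_results", "wb") as out:
    out.write(results.content)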

Feeling ready?

Continue exploring SimIIR Studio to its full extent!

Build in Playground

Create real simulations with full features

Browse Library

Explore all available components

Documentation

Deep dive into advanced features