# Execution

Run queries to generate datasets, preview results, and benchmark performance.

## Query Execution Overview

Informer queries can be executed in several ways:
| Endpoint | Purpose | Returns |
|---|---|---|
| `POST /_run` | Generate an embedded dataset with a full data refresh | Dataset with progress tracking |
| `POST /_execute` | Execute the query and return raw results | Query results in various formats |
| `GET /sample` | Preview query results without caching | Sample data (default: 50 rows) |
| `POST /_benchmark` | Test query performance | Benchmark metrics |
## POST /api/queries/{id}/_run

Run a query to create or refresh an embedded dataset for the current user.

**Authentication:** Required

**Permission:** `query:run`

**Path Parameters:**

| Parameter | Type | Description |
|---|---|---|
| `id` | string | Query ID (UUID) or natural ID |

**Request Body:**
```json
{
  "params": {
    "startDate": "2024-01-01",
    "endDate": "2024-12-31",
    "region": "West"
  },
  "progress": "task-monitor-id"
}
```
**Validation:**

| Field | Type | Required | Description |
|---|---|---|---|
| `params` | object | No | Input parameter values matching query inputs |
| `progress` | string | No | Task monitor ID for progress updates |

**Response:**
Returns a long-running task that creates or updates an embedded dataset. The dataset TTL is set based on tenant configuration (default: 60 minutes).
```json
{
  "id": "embedded-dataset-uuid",
  "name": "Sales Analysis Query Results",
  "type": "query",
  "queryId": "550e8400-e29b-41d4-a716-446655440000",
  "ownerId": "john.doe",
  "embedded": true,
  "params": {
    "startDate": "2024-01-01",
    "endDate": "2024-12-31",
    "region": "West"
  },
  "ttl": 3600000,
  "records": 12450,
  "size": 2458901,
  "dataUpdatedAt": "2024-02-09T10:30:00Z",
  "_links": {
    "self": { "href": "/api/datasets/embedded-dataset-uuid" },
    "inf:query": { "href": "/api/queries/550e8400-e29b-41d4-a716-446655440000" }
  }
}
```
**Run Workflow:**

1. **Find or Create Dataset** - Locate the existing embedded dataset for this query + user, or create a new one
2. **Update Parameters** - Set dataset params to match the request
3. **Set TTL** - Configure time-to-live based on tenant settings
4. **Update User Settings** - Store params in user settings for the next run
5. **Refresh Data** - Execute the query and populate the dataset
6. **Return Dataset** - Return the populated dataset with stats
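The workflow above can be sketched in code. All names here (`DATASETS`, `USER_SETTINGS`, `execute`, `run_query`) are illustrative stand-ins for server-side behavior, not part of the Informer API:

```python
# Sketch of the _run workflow; the stores and executor are fakes.
DATASETS = {}       # (queryId, userId) -> embedded dataset
USER_SETTINGS = {}  # (queryId, userId) -> last-used params

def execute(query_id, params):
    """Placeholder query executor; returns fake rows."""
    return [{"row": i} for i in range(3)]

def run_query(query_id, user_id, request, tenant_ttl_ms=3_600_000):
    # 1. Find or create the embedded dataset for this query + user
    dataset = DATASETS.setdefault((query_id, user_id), {
        "queryId": query_id, "ownerId": user_id, "embedded": True,
    })
    # 2. Update parameters to match the request
    dataset["params"] = request.get("params", {})
    # 3. Set TTL based on tenant settings (default: 60 minutes)
    dataset["ttl"] = tenant_ttl_ms
    # 4. Store params in user settings for the next run
    USER_SETTINGS[(query_id, user_id)] = dataset["params"]
    # 5. Refresh data: execute the query and populate the dataset
    dataset["records"] = len(execute(query_id, dataset["params"]))
    # 6. Return the populated dataset with stats
    return dataset

ds = run_query("q1", "john.doe", {"params": {"region": "West"}})
```

Note that a second run for the same query and user reuses the same dataset rather than creating another one.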
This endpoint returns 400 Bad Request if called on an embedded query. Embedded queries must be run through their parent dataset's refresh endpoint.

Include a `progress` parameter to receive real-time updates as the query executes. The progress monitor reports:

- "Running query..." when query execution starts
- Row counts and percentages as data loads
- Completion status
## POST /api/queries/{id}/_execute

Execute a query and return raw results without creating a persistent dataset.

**Authentication:** Required

**Permission:** `query:run`

**Path Parameters:**

| Parameter | Type | Description |
|---|---|---|
| `id` | string | Query ID (UUID) or natural ID |

**Query Parameters:**

| Parameter | Type | Default | Description |
|---|---|---|---|
| `output` | string | json | Output format: json, csv, excel, etc. |
| `limit` | integer | -1 | Row limit (-1 for unlimited) |
| `pretty` | boolean | false | Pretty-print JSON output |
| `timezone` | string | - | Timezone for date formatting |
| `applyFormatting` | boolean | true | Apply field formatting to results |

**Request Body:**
```json
{
  "startDate": "2024-01-01",
  "region": "West"
}
```
Input parameters are passed as key-value pairs.

**Response (JSON):**
```json
{
  "_links": {
    "self": { "href": "/api/queries/{id}/_execute" }
  },
  "_embedded": {
    "inf:record": [
      {
        "date": "2024-01-15",
        "region": "West",
        "amount": 1250.50,
        "customer": "Acme Corp"
      },
      {
        "date": "2024-01-16",
        "region": "West",
        "amount": 890.25,
        "customer": "Tech Solutions"
      }
    ]
  },
  "start": 0,
  "count": 2,
  "total": 150
}
```
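Records come back under `_embedded` via the `inf:record` relation, with `start`, `count`, and `total` describing the page. A client can unwrap them like this; a minimal sketch using the sample payload above:

```python
def records(hal_response):
    """Unwrap the record list from a HAL-style _execute response."""
    return hal_response.get("_embedded", {}).get("inf:record", [])

resp = {
    "_embedded": {
        "inf:record": [
            {"date": "2024-01-15", "region": "West", "amount": 1250.50, "customer": "Acme Corp"},
            {"date": "2024-01-16", "region": "West", "amount": 890.25, "customer": "Tech Solutions"},
        ]
    },
    "start": 0,
    "count": 2,
    "total": 150,
}

rows = records(resp)
# start + count < total means more rows exist beyond this page
page_complete = resp["start"] + resp["count"] >= resp["total"]
```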
**Response (CSV):**

```csv
date,region,amount,customer
2024-01-15,West,1250.50,Acme Corp
2024-01-16,West,890.25,Tech Solutions
```
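The CSV output parses cleanly with a standard CSV reader; a sketch using the rows shown above:

```python
import csv
import io

# Body of a CSV _execute response (taken from the example above)
CSV_BODY = """\
date,region,amount,customer
2024-01-15,West,1250.50,Acme Corp
2024-01-16,West,890.25,Tech Solutions
"""

# DictReader uses the header row as keys; values arrive as strings
rows = list(csv.DictReader(io.StringIO(CSV_BODY)))
total = sum(float(r["amount"]) for r in rows)
```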
Available output formats depend on registered dataset exporters. Common formats include `json`, `csv`, `excel`, `html`, and `pdf`. Use `GET /api/queries/{id}/exporters` to see the available formats.
**Use `_execute` for:**

- API integrations that need raw data
- One-time data exports
- Testing queries without caching

**Use `_run` for:**

- User-facing query results
- Data that will be visualized
- Results that need to persist
## GET /api/queries/{id}/sample

Preview query results without executing a full run or creating a dataset.

**Authentication:** Required

**Path Parameters:**

| Parameter | Type | Description |
|---|---|---|
| `id` | string | Query ID (UUID) or natural ID |

**Query Parameters:**

| Parameter | Type | Default | Description |
|---|---|---|---|
| `size` | integer | 50 | Number of sample rows to return |

**Response:**
```json
{
  "_links": {
    "self": { "href": "/api/queries/{id}/sample" }
  },
  "_embedded": {
    "inf:record": [
      {
        "id": 1,
        "name": "Product A",
        "price": 29.99,
        "category": "Electronics"
      },
      {
        "id": 2,
        "name": "Product B",
        "price": 49.99,
        "category": "Home"
      }
    ]
  },
  "start": 0,
  "count": 2,
  "total": 50
}
```
**Use Cases:**

- **Schema Discovery** - See which fields the query returns
- **Data Preview** - Take a quick look at results during development
- **Validation** - Verify query syntax and datasource connectivity
Sample queries are limited to the requested size (default 50 rows). They do not represent the full result set and may not include all distinct values.
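For the schema-discovery use case, a sample response is enough to enumerate field names and rough types. A sketch using the sample records above; the type-inference rule (type of the first value seen per field) is an assumption, not part of the API:

```python
def discover_schema(sample_records):
    """Map each field name to the Python type name of the first value seen."""
    schema = {}
    for rec in sample_records:
        for field, value in rec.items():
            schema.setdefault(field, type(value).__name__)
    return schema

# Records from the /sample response above
sample = [
    {"id": 1, "name": "Product A", "price": 29.99, "category": "Electronics"},
    {"id": 2, "name": "Product B", "price": 49.99, "category": "Home"},
]
schema = discover_schema(sample)
```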
## POST /api/queries/{id}/_benchmark

Run performance benchmarks on a query to measure execution time and throughput.

**Authentication:** Required

**Permission:** `query:run`

**Path Parameters:**

| Parameter | Type | Description |
|---|---|---|
| `id` | string | Query ID (UUID) or natural ID |

**Request Body:**
```json
{
  "params": {
    "startDate": "2024-01-01"
  },
  "config": {
    "iterations": 5,
    "warmup": 2,
    "timeout": 60000
  },
  "progress": "task-monitor-id"
}
```
**Validation:**

| Field | Type | Required | Description |
|---|---|---|---|
| `params` | object | No | Input parameter values |
| `config` | object | No | Benchmark configuration |
| `progress` | string | No | Task monitor ID |

**Config Options:**

| Field | Type | Default | Description |
|---|---|---|---|
| `iterations` | integer | 3 | Number of benchmark runs |
| `warmup` | integer | 1 | Number of warmup runs (excluded from stats) |
| `timeout` | integer | 60000 | Maximum time per run (ms) |

**Response:**
```json
{
  "runs": [
    {
      "iteration": 1,
      "duration": 1250,
      "rows": 12450,
      "throughput": 9960
    },
    {
      "iteration": 2,
      "duration": 1180,
      "rows": 12450,
      "throughput": 10550
    },
    {
      "iteration": 3,
      "duration": 1220,
      "rows": 12450,
      "throughput": 10205
    }
  ],
  "stats": {
    "minDuration": 1180,
    "maxDuration": 1250,
    "avgDuration": 1217,
    "medianDuration": 1220,
    "totalRows": 12450,
    "avgThroughput": 10238
  },
  "params": {
    "startDate": "2024-01-01"
  },
  "timestamp": "2024-02-09T10:45:00Z"
}
```
**Benchmark Metrics:**

| Metric | Description |
|---|---|
| `duration` | Time to execute the query (ms) |
| `rows` | Number of rows returned |
| `throughput` | Rows per second |
| `minDuration` | Fastest run time (ms) |
| `maxDuration` | Slowest run time (ms) |
| `avgDuration` | Average run time (ms) |
| `medianDuration` | Median run time (ms) |
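The `stats` block can be reproduced from the per-run results. A sketch matching the example response above; the rounding to whole milliseconds and rows per second is an assumption:

```python
from statistics import mean, median

def benchmark_stats(runs):
    """Aggregate per-run benchmark results into summary stats."""
    durations = [r["duration"] for r in runs]
    return {
        "minDuration": min(durations),
        "maxDuration": max(durations),
        "avgDuration": round(mean(durations)),
        "medianDuration": round(median(durations)),
        "totalRows": runs[0]["rows"],
        "avgThroughput": round(mean(r["throughput"] for r in runs)),
    }

# Per-run results from the example response above
runs = [
    {"iteration": 1, "duration": 1250, "rows": 12450, "throughput": 9960},
    {"iteration": 2, "duration": 1180, "rows": 12450, "throughput": 10550},
    {"iteration": 3, "duration": 1220, "rows": 12450, "throughput": 10205},
]
stats = benchmark_stats(runs)
```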
Benchmarking will return 400 Bad Request for embedded queries. Only ad-hoc queries can be benchmarked.
Use benchmarks to:

- **Compare Datasources** - Test the same query on different connections
- **Optimize Queries** - Measure the impact of index changes or query rewrites
- **Capacity Planning** - Estimate load for scheduled jobs
- **SLA Validation** - Verify queries meet performance requirements
## GET /api/queries/{id}/dataset

Get the embedded dataset created by running a query for the current user.

**Authentication:** Required

**Path Parameters:**

| Parameter | Type | Description |
|---|---|---|
| `id` | string | Query ID (UUID) or natural ID |

**Response:**
Returns 404 if no dataset exists for the current user and query. Otherwise returns the embedded dataset.
```json
{
  "id": "embedded-dataset-uuid",
  "name": "Sales Analysis Query Results",
  "type": "query",
  "queryId": "550e8400-e29b-41d4-a716-446655440000",
  "ownerId": "john.doe",
  "embedded": true,
  "params": {
    "startDate": "2024-01-01"
  },
  "ttl": 3600000,
  "records": 12450,
  "dataUpdatedAt": "2024-02-09T10:30:00Z"
}
```
Embedded datasets are automatically created when a query is run. They persist until:

- The TTL expires (default: 60 minutes)
- The user runs the query again with different parameters
- The query definition is updated and committed with `clearAllUserSettings: true`
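A client that wants to refresh proactively can compute when the dataset will lapse from `dataUpdatedAt` and `ttl`. A sketch; measuring the TTL from the last data refresh is an assumption about the server's behavior:

```python
from datetime import datetime, timedelta, timezone

def expires_at(dataset):
    """Compute the expiry instant: dataUpdatedAt plus ttl (milliseconds)."""
    updated = datetime.fromisoformat(dataset["dataUpdatedAt"].replace("Z", "+00:00"))
    return updated + timedelta(milliseconds=dataset["ttl"])

# Fields taken from the example dataset above: refreshed at 10:30 UTC
# with a 60-minute TTL, so it expires at 11:30 UTC
ds = {"dataUpdatedAt": "2024-02-09T10:30:00Z", "ttl": 3_600_000}
deadline = expires_at(ds)
```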
## GET /api/queries/{id}/exporters

Get the available export formats for query results.

**Authentication:** Required

**Path Parameters:**

| Parameter | Type | Description |
|---|---|---|
| `id` | string | Query ID (UUID) or natural ID |

**Response:**
```json
[
  {
    "id": "csv",
    "name": "CSV",
    "mimeType": "text/csv",
    "extension": "csv",
    "eligible": true
  },
  {
    "id": "excel",
    "name": "Excel",
    "mimeType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    "extension": "xlsx",
    "eligible": true
  },
  {
    "id": "pdf",
    "name": "PDF",
    "mimeType": "application/pdf",
    "extension": "pdf",
    "eligible": true
  },
  {
    "id": "json",
    "name": "JSON",
    "mimeType": "application/json",
    "extension": "json",
    "eligible": true
  }
]
```
**Exporter Properties:**

| Property | Description |
|---|---|
| `id` | Exporter identifier (used in the `output` parameter) |
| `name` | Display name |
| `mimeType` | MIME type for HTTP responses |
| `extension` | File extension |
| `eligible` | Whether the exporter is available for this query |
Available exporters may be restricted by tenant configuration. Some formats like PDF or Excel may require additional licensing.
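Before requesting a format via the `output` parameter, a client can check the `eligible` flag from this endpoint. A sketch; `content_type_for` is a hypothetical helper, not part of the API:

```python
def content_type_for(exporters, output):
    """Resolve an `output` value to its MIME type, honoring `eligible`."""
    for exp in exporters:
        if exp["id"] == output and exp["eligible"]:
            return exp["mimeType"]
    raise ValueError(f"no eligible exporter for output={output!r}")

# A trimmed exporter list; pdf is marked ineligible for illustration
exporters = [
    {"id": "csv", "name": "CSV", "mimeType": "text/csv",
     "extension": "csv", "eligible": True},
    {"id": "pdf", "name": "PDF", "mimeType": "application/pdf",
     "extension": "pdf", "eligible": False},
]
mime = content_type_for(exporters, "csv")
```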
## Execution Best Practices

### Choosing the Right Endpoint

| Use Case | Endpoint | Why |
|---|---|---|
| User wants to view/explore data | `POST /_run` | Creates a persistent dataset for filtering, sorting, and visualization |
| API needs data for integration | `POST /_execute` | Returns raw data without persistence overhead |
| Testing a query during development | `GET /sample` | Quick preview without a full execution |
| Optimizing query performance | `POST /_benchmark` | Detailed performance metrics |
### Parameter Handling

```json
// Good: pass parameters as an object with ISO date strings
{
  "params": {
    "startDate": "2024-01-01",
    "region": "West"
  }
}
```

```json
// Bad: epoch timestamp for a date; use an ISO string instead
{
  "params": {
    "startDate": 1704067200000
  }
}
```
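If a caller only has an epoch-milliseconds timestamp, it should convert it to an ISO string before sending; a sketch:

```python
from datetime import datetime, timezone

def to_iso_date(epoch_ms):
    """Convert epoch milliseconds (UTC) to an ISO date string like 2024-01-01."""
    return datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc).date().isoformat()

# The "bad" value from the example above becomes a valid ISO date
iso = to_iso_date(1704067200000)
```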
### Error Handling

Common execution errors:

| Status | Cause | Solution |
|---|---|---|
| 400 | Embedded query | Use the dataset refresh endpoint instead |
| 400 | Invalid parameters | Check the query's inputs definition |
| 403 | Missing run permission | Verify the user has the `query:run` permission |
| 408 | Query timeout | Optimize the query or increase the timeout setting |
| 500 | Datasource error | Check datasource connectivity and credentials |
### Performance Optimization

- **Use Limits** - Set the `limit` parameter for large result sets
- **Sample First** - Use `/sample` to verify a query before a full run
- **Cache Results** - Use `_run` instead of `_execute` for repeated access
- **Monitor TTL** - Adjust the dataset TTL based on data freshness requirements
- **Benchmark** - Use `_benchmark` to identify slow queries