# Dataset API Overview
The Informer Dataset API provides comprehensive endpoints for creating, managing, and querying datasets. Datasets are the core data structures in Informer, supporting data import from multiple sources, field manipulation, filtering, aggregations, and export to various formats. All routes are prefixed with `/api`.
## Features
- CRUD Operations - Create, read, update, and delete datasets
- Data Management - Import, append, replace, and clear dataset records
- Field Operations - Add, update, and remove fields with type mapping
- Drafts & Versioning - Create draft copies for safe editing without affecting production
- Filters - Create saved filters and apply complex query logic
- Aggregations - Group and aggregate data with hierarchical structures
- Visuals - Manage charts and visual representations
- Copy & Snapshots - Create point-in-time copies and promote snapshots
- Import & Export - Support for multiple formats (CSV, Excel, PDF, etc.)
- Sharing & Permissions - Team-based access control with field-level restrictions
- Flow Builder - Multi-step data transformation pipelines
- Elasticsearch Integration - Full-text search, mapping management, index operations
## Dataset Types
Datasets can be created from multiple sources:
| Type | Description |
|---|---|
| `datasource` | Connected to an external database via query |
| `upload` | Created from an uploaded file (CSV, Excel, etc.) |
| `external` | Connected to an external API or service |
## Authentication
All Dataset API endpoints require authentication via session cookies or API tokens. Most endpoints verify dataset permissions before allowing access.
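As a minimal sketch of an authenticated call, the helper below attaches an API token to a request. The `Authorization: Bearer` header scheme is an assumption; check your Informer deployment for the exact token mechanism.

```python
from urllib.request import Request

def dataset_request(base_url: str, path: str, token: str) -> Request:
    """Build an authenticated request to a Dataset API endpoint.

    The bearer-token header used here is an assumption, not the
    documented Informer scheme.
    """
    req = Request(base_url.rstrip("/") + path)
    req.add_header("Authorization", f"Bearer {token}")
    return req

req = dataset_request("https://informer.example.com", "/api/datasets", "my-token")
```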
### Common Permission Patterns
- Read access - View dataset structure and data
- Write permission - Required for `dataset:write` (create, update, delete, field changes, data modifications)
- Refresh permission - Required for `dataset:refresh` (run queries, refresh data from source)
- Copy permission - Required for `dataset:copy` (copy datasets, create snapshots)
- Share permission - Required for `dataset:share` (manage access and shares)
- Superuser permission - Required for `tenant:superuser` (vacuum, admin operations)
## Elasticsearch Integration
Datasets store their data in Elasticsearch indices. Each dataset has:
- `esIndex` - Unique Elasticsearch index name
- Mapping - Field type definitions and analyzer settings
- Search API - Full-text search with aggregations and filters
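To illustrate, the body below uses standard Elasticsearch query DSL of the kind a dataset's Search API would run against its index: a `match` full-text query combined with a `terms` aggregation. The field names (`customer`, `region`) are illustrative, not part of any real dataset.

```python
# Standard Elasticsearch query DSL: full-text match plus a terms
# aggregation, with from/size pagination. Field names are hypothetical.
search_body = {
    "query": {"match": {"customer": "acme"}},
    "aggs": {
        "by_region": {"terms": {"field": "region", "size": 10}},
    },
    "from": 0,
    "size": 30,
}
```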
## Data Operations
### Import Data
- Upload files (CSV, Excel)
- Import from datasource query
- Append or replace existing data
### Query Data
- Pagination with `start` and `limit`
- Sorting by field
- Full-text search with the `q` parameter
- Filtering by criteria
- Aggregations and grouping
### Export Data
- Multiple formats (CSV, Excel, PDF, HTML, JSON)
- Apply filters and formatting
- Include snapshots in exports
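A small sketch of building an export URL: the `/api/datasets/{id}/export/{exporter}` path appears later under Long-Running Operations, but the exporter name (`csv`) and the `filter` query parameter are illustrative assumptions.

```python
from urllib.parse import urlencode

def export_url(dataset_id, exporter, filter_id=None):
    """Build an export URL; optionally apply a saved filter.

    Exporter identifiers and the filter parameter shape are assumptions.
    """
    url = f"/api/datasets/{dataset_id}/export/{exporter}"
    if filter_id:
        url += "?" + urlencode({"filter": filter_id})
    return url
```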
## Drafts & Versioning
Datasets support draft mode for safe editing:
- Create draft - `POST /api/datasets/{id}/_edit`
- Modify draft - Make changes without affecting production
- Commit draft - `POST /api/datasets/{id}/draft/_commit` to publish changes
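The draft cycle above boils down to two endpoints, sketched here in call order; any number of draft edits can happen between the create and the commit.

```python
def draft_workflow(dataset_id):
    """Return the draft endpoints from the workflow above, in order."""
    return [
        ("POST", f"/api/datasets/{dataset_id}/_edit"),          # create draft
        ("POST", f"/api/datasets/{dataset_id}/draft/_commit"),  # publish changes
    ]
```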
## Flow Builder
Build multi-step data transformation pipelines:
- Steps - Filter, transform, aggregate, join
- Parameters - Dynamic values for reusable flows
- Testing - Preview results at each step
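A hypothetical flow definition, to make the pieces concrete: the step types (`filter`, `aggregate`) and the notion of parameters come from the list above, but the exact JSON shape Informer expects is an assumption.

```python
# Illustrative flow definition: a parameterized filter step feeding an
# aggregation step. The JSON shape is assumed, not documented.
flow = {
    "name": "monthly-summary",
    "parameters": [{"name": "region", "type": "string"}],
    "steps": [
        {"type": "filter", "criteria": {"region": "$region"}},
        {"type": "aggregate", "groupBy": ["month"], "metrics": [{"sum": "amount"}]},
    ],
}
```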
## Common Query Parameters
Many list endpoints support:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `q` | string | - | Full-text search query |
| `sort` | string | - | Field to sort by (prefix with `-` for descending) |
| `start` | integer | 0 | Pagination offset |
| `limit` | integer | 30 | Number of results per page |
| `filter` | string | - | Filter ID or criteria |
| `type` | string | - | Dataset type filter |
| `datasource` | string | - | Filter by datasource ID |
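These parameters combine into an ordinary query string; a minimal helper, assuming omitted parameters are simply left off the URL:

```python
from urllib.parse import urlencode

def list_query(q=None, sort=None, start=0, limit=30, **extra):
    """Build a query string from the common list parameters above."""
    params = {"start": start, "limit": limit}
    if q:
        params["q"] = q
    if sort:
        params["sort"] = sort
    # Pass-through for filter, type, datasource, etc.
    params.update({k: v for k, v in extra.items() if v is not None})
    return urlencode(params)
```

For example, `list_query(q="sales", sort="-name", filter="recent")` yields a string suitable for appending after `?` on a list endpoint.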
## Error Responses
Standard HTTP status codes:
- `200` - Success
- `400` - Bad request (validation error)
- `403` - Forbidden (insufficient permissions)
- `404` - Resource not found
- `409` - Conflict (duplicate, constraint violation)
- `500` - Internal server error
Error responses include:
```json
{
  "statusCode": 403,
  "error": "Forbidden",
  "message": "Insufficient permissions to modify dataset"
}
```
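Given that shape, client-side handling can be as simple as checking `statusCode` and surfacing `error` and `message`; a minimal sketch:

```python
def check_response(body):
    """Raise on the error shape shown above; return the body otherwise."""
    code = body.get("statusCode", 200)
    if code >= 400:
        raise RuntimeError(f"{code} {body.get('error')}: {body.get('message')}")
    return body
```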
## Long-Running Operations
Several endpoints support long-running operations with progress tracking:
- `POST /api/datasets` - Create with data import
- `POST /api/datasets/{id}/_refresh` - Refresh from datasource
- `POST /api/datasets/{id}/_import` - Import data
- `POST /api/datasets/{id}/export/{exporter}` - Export to file

These endpoints accept a `progress` parameter in the payload for progress updates.
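A payload sketch for enabling progress updates; treating `progress` as a boolean flag is an assumption, since the section above does not specify the parameter's shape.

```python
def long_running_payload(base, progress=True):
    """Copy a request payload and attach the (assumed boolean) progress flag."""
    payload = dict(base)
    payload["progress"] = progress
    return payload
```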
## Next Steps
Explore the specific endpoint categories:
- Core CRUD - Basic dataset operations
- Drafts & Versioning - Draft workflow
- Data Operations - Data CRUD
- Fields - Field management
- Mapping & Index - Elasticsearch integration
- Refresh & Execution - Query execution
- Copy & Snapshots - Dataset copying
- Filters - Saved filters
- Settings - User preferences
- Flow Builder - Transformation pipelines
- Visuals - Chart management
- Comments - Dataset discussions
- Sharing - Access control
- Tags - Dataset tagging
- Ownership & Favorites - Ownership management
- Aggregations - Data grouping
- Import & Export - Data exchange
- Discovery & Templates - Template discovery
- Info & Metadata - Dataset metadata
- Admin - Administrative tools