Connector Documentation
A data source defines where a DataConnector reads its data from. You configure it by setting the `backend` field and the matching `data_endpoint` options. This page describes each available backend and how to configure it.
PostgreSQL
Connects to a PostgreSQL database and uses the result of a SQL query as the data source.
- Supports any SQL query, including joins and expressions
- Streams results to handle large datasets without loading everything into memory
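The query is expected to return one row per feature, with the geometry serialized via `ST_AsGeoJSON`. As a minimal sketch of how such a row maps onto a GeoJSON feature (illustrative only, not the connector's internal code):

```javascript
// Illustrative: a SQL row whose geometry column was produced by
// ST_AsGeoJSON maps onto a GeoJSON feature like this. The connector's
// actual internal logic may differ.
function rowToFeature(row) {
  const { geometry, ...properties } = row;
  return {
    type: "Feature",
    geometry: JSON.parse(geometry), // ST_AsGeoJSON returns a JSON string
    properties, // all remaining columns become feature properties
  };
}

const feature = rowToFeature({
  id: 1,
  name: "Example",
  geometry: '{"type":"Point","coordinates":[16.37,48.21]}',
});
console.log(feature.geometry.type); // Point
```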
Configuration:
```json
{
  "backend": "postgres",
  "data_endpoint": {
    "host": "localhost",
    "port": 5432,
    "database": "mydb",
    "user": "username",
    "password": "password",
    "query": "SELECT id, name, ST_AsGeoJSON(geom) as geometry FROM my_table"
  }
}
```

Apache Solr
Connects to an Apache Solr search index and fetches all matching documents.
- Automatically paginates through large result sets
- Supports multi-value fields and query parameter forwarding
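Pagination against a Solr select handler conventionally uses the `start` and `rows` parameters and the `response.numFound` count in the JSON response. A minimal sketch of such a paging loop (the connector's page size and parameter handling may differ; `fetchJson` stands in for the real HTTP call):

```javascript
// Sketch of start/rows paging against a Solr select handler.
// `fetchJson` is a stand-in for the actual HTTP request function.
async function fetchAllDocs(fetchJson, baseQuery, pageSize = 100) {
  const docs = [];
  let start = 0;
  for (;;) {
    const res = await fetchJson(`${baseQuery}&start=${start}&rows=${pageSize}`);
    docs.push(...res.response.docs);
    start += pageSize;
    if (start >= res.response.numFound) break; // all pages consumed
  }
  return docs;
}
```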
Configuration:
```json
{
  "backend": "solr",
  "data_endpoint": {
    "url": "http://solr:8983/solr/mycore/select",
    "query": "q=*:*&wt=json",
    "paged": true
  }
}
```

GeoJSON
Fetches GeoJSON data from an HTTP endpoint.
- Supports paginated APIs
- Flattens nested feature properties automatically
Configuration:
```json
{
  "backend": "geojson",
  "data_endpoint": {
    "url": "https://api.example.com/data.geojson",
    "paged": false
  }
}
```

Files
Reads GeoJSON data from a local file.
- Streams the file to handle large inputs without loading everything into memory
Configuration:
```json
{
  "backend": "file",
  "data_endpoint": {
    "file": "mydata.geojson"
  }
}
```

Custom JavaScript
Runs custom JavaScript code to fetch and transform data before it enters the processing pipeline. Use this when no built-in backend fits your data source.
- Full control over fetch logic and data shape
- Output must be a stream of NDJSON (Newline Delimited JSON) features
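The NDJSON contract means one JSON-serialized feature per line. A minimal sketch of producing such output (function names are illustrative, not part of the connector API):

```javascript
// Illustrative sketch of the NDJSON output contract: one JSON-serialized
// feature per line, newline-delimited.
function toNdjson(features) {
  return features.map((f) => JSON.stringify(f)).join("\n") + "\n";
}

const ndjson = toNdjson([
  { type: "Feature", geometry: { type: "Point", coordinates: [16.37, 48.21] }, properties: { name: "A" } },
  { type: "Feature", geometry: { type: "Point", coordinates: [16.4, 48.2] }, properties: { name: "B" } },
]);

// Each non-empty line parses back to exactly one feature:
ndjson.trim().split("\n").forEach((line) => JSON.parse(line));
```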
Configuration:
```json
{
  "backend": "custom",
  "data_endpoint": {
    "code": "// Custom JavaScript code to fetch and transform data"
  }
}
```

TileProxy
Proxies and caches an existing vector tile service. Use this when you already have a tile server and want to cache its output.
- Supports standard `{z}/{x}/{y}` tile URL templates
- Can pre-seed tiles for a specific bounding box and zoom range
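To estimate what a seed run covers, the standard slippy-map formula converts a lon/lat pair to tile indices at a given zoom. A sketch of counting the tiles touched by a bounding box over a zoom range (purely illustrative; the actual seeding logic lives in the backend):

```javascript
// Standard slippy-map lon/lat -> tile index at zoom z.
function lonLatToTile(lon, lat, z) {
  const n = 2 ** z;
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y };
}

// Count the tiles a seed run would touch for bbox [west, south, east, north].
function countSeedTiles(bbox, minzoom, maxzoom) {
  const [west, south, east, north] = bbox;
  let total = 0;
  for (let z = minzoom; z <= maxzoom; z++) {
    const min = lonLatToTile(west, north, z); // NW corner -> smallest x/y
    const max = lonLatToTile(east, south, z); // SE corner -> largest x/y
    total += (max.x - min.x + 1) * (max.y - min.y + 1);
  }
  return total;
}
```

For the `seed_bbox` in the configuration below, this makes the cost of raising `seed_maxzoom` easy to see: each extra zoom level roughly quadruples the tile count.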
Configuration:
```json
{
  "backend": "tileproxy",
  "data_endpoint": {
    "url": "https://tiles.example.com/{z}/{x}/{y}.pbf",
    "seed_bbox": "16.0,48.0,17.0,49.0",
    "seed_minzoom": 0,
    "seed_maxzoom": 14
  }
}
```

PMTiles
Serves vector tiles directly from a PMTiles archive — a single-file tile format served over HTTP. Unlike other backends, this does not fetch or process data; it reads tiles on demand from the archive using HTTP range requests.
Configuration:
```json
{
  "backend": "pmtiles",
  "data_endpoint": {
    "url": "https://example.com/tiles.pmtiles"
  }
}
```

TilePG
Generates vector tiles on demand from PostGIS queries. Each tile is rendered by running SQL against a PostGIS database, with geometry simplification applied per zoom level.
- Supports multiple layers with separate SQL queries per layer
- Configurable buffer size, zoom delta, and simplification
- Compatible with MapboxGL feature interaction via feature IDs
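The layer query in the configuration below uses a `!bbox!` placeholder for the requested tile's extent. As a hedged sketch of the mechanism, assuming the token expands to the tile's Web Mercator envelope (the backend's actual substitution may differ):

```javascript
// Web Mercator (EPSG:3857) bounds of tile (z, x, y).
const ORIGIN = 20037508.342789244;

function tileBounds(z, x, y) {
  const size = (2 * ORIGIN) / 2 ** z;
  const minX = -ORIGIN + x * size;
  const maxY = ORIGIN - y * size;
  return { minX, minY: maxY - size, maxX: minX + size, maxY };
}

// Hypothetical expansion of a !bbox! token into a PostGIS envelope;
// shown for illustration, not taken from the backend's source.
function expandBboxToken(sql, z, x, y) {
  const b = tileBounds(z, x, y);
  const env = `ST_MakeEnvelope(${b.minX}, ${b.minY}, ${b.maxX}, ${b.maxY}, 3857)`;
  return sql.replace(/!bbox!/g, env);
}
```

Under this reading, `geometry && !bbox!` in the layer query becomes a plain bounding-box intersection test against the tile being rendered.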
Configuration:
```json
{
  "backend": "tilepg",
  "data_endpoint": {
    "source": {
      "db": {
        "host": "localhost",
        "port": 5432,
        "database": "geodata",
        "user": "username",
        "password": "password"
      },
      "maxzoom": 14,
      "deltazoom": 0,
      "layers": [
        {
          "id": "mylayer",
          "query": ["SELECT id, name, geometry FROM my_table WHERE geometry && !bbox!"],
          "fields": { "name": "String" },
          "feature_id": "id",
          "buffer_size": 4,
          "simplify": 0
        }
      ]
    }
  }
}
```

Grouped Data Source (Tags)
Combines data from multiple DataConnectors that share the same tag into a single unified DataConnector. When a DataConnector has `tags: ["mytag"]`, a Tag backend with `name: "mytag"` merges all tagged sources automatically.
- Merges field definitions (`data_fields`, `vectortile_fields`, `trie_fields`) from all tagged sources
- Prefixes item IDs with the source DataConnector name to avoid collisions (e.g., `pois_123`)
- Rebuilds automatically when any tagged DataConnector is updated
Configuration:
```json
{
  "backend": "tag",
  "name": "mytag"
}
```

Open Government Data (OGD)
Normalizes Open Government Data from different sources into a single consistent structure. Different OGD sources publish data with different property names and formats. This backend lets you write SQL expressions that map each source’s fields to a fixed set of normalized columns.
Normalized columns:
| Column | Description |
|---|---|
| `id` | Internal primary key |
| `external_id` | Original ID from the OGD source |
| `type` | Main category (e.g., restaurant, park) |
| `subtype` | Sub-category |
| `subsubtype` | Further sub-classification |
| `iconname` | Icon identifier for map display |
| `label` | Short display label |
| `name` | Full name of the feature |
| `address` | Formatted address string |
| `url` | Link to more information |
| `rank` | Sorting rank |
| `geometry` | GeoJSON geometry |
The following columns are generated automatically:
- `md5_id` — stable hash-based ID derived from geometry and non-volatile properties
- `postal_code`, `district_name`, `district_number`, `street_name`, `street_number` — populated via reverse geocoding of each feature’s location
- `origin` — JSON object with all original OGD properties, available for use in templates
How to configure the SQL mapping:
When you create an OGD DataConnector in the admin UI, you:
- Enter the GeoJSON endpoint URL to load a preview of the raw source data.
- Write a SQL expression for each normalized column that maps the source fields to the target. Expressions run against a PostgreSQL CTE (common table expression) holding the raw data. For example:
- `id` → `"OBJECTID"` — map an existing column directly
- `name` → `COALESCE("BEZEICHNUNG", "NAME_DE", "LABEL")` — pick the first non-null value from multiple columns
- `type` → `'pharmacy'` — assign a constant
- `address` → `CONCAT("STRASSE", ' ', "HAUSNUMMER", ', ', "PLZ", ' ', "ORT")` — compose from multiple fields
- Use the Preview button next to each column to verify the mapping against the sample data before saving.
Configuration:
```json
{
  "backend": "ogd",
  "data_endpoint": {
    "url": "https://data.wien.gv.at/daten/geo?service=WFS&request=GetFeature&outputFormat=json&typeName=...",
    "sqls": {
      "id": "\"OBJECTID\"",
      "external_id": "\"FID\"",
      "type": "'pharmacy'",
      "subtype": "NULL",
      "subsubtype": "NULL",
      "iconname": "'pharmacy'",
      "label": "\"BEZEICHNUNG\"",
      "name": "\"BEZEICHNUNG\"",
      "address": "CONCAT(\"STRASSE\", ' ', \"HAUSNUMMER\")",
      "url": "\"WEITERE_INFORMATIONEN\"",
      "rank": "1",
      "geometry": "geometry"
    }
  }
}
```

InfoMax GraphQL
Connects to InfoMax GraphQL APIs with automatic pagination. Supports TourSearch, EventSearch, and POI search operations, and extracts geometry from the nested GraphQL response structure.
- Bearer token authentication
- Paginates automatically at 1000 items per page
- Extracts geometry for tours (LineString), events (Point), and POIs
- Flattens nested address and location fields
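A paginated GraphQL fetch at 1000 items per page can be sketched as follows. The `PaginationInput` field names (`page`, `pageSize`), the response shape, and `postGraphql` are assumptions for illustration; they are not taken from the InfoMax API:

```javascript
// Hedged sketch of a 1000-per-page GraphQL pagination loop with bearer
// token auth. Field names `page`/`pageSize` are assumptions; `postGraphql`
// stands in for the real HTTP POST.
async function fetchAllItems(postGraphql, query, operationName, token) {
  const PAGE_SIZE = 1000; // the backend paginates at 1000 items per page
  const items = [];
  let page = 0;
  for (;;) {
    const batch = await postGraphql({
      headers: { Authorization: `Bearer ${token}` },
      body: {
        operationName,
        query,
        variables: { pagination: { page, pageSize: PAGE_SIZE } },
      },
    });
    items.push(...batch);
    if (batch.length < PAGE_SIZE) break; // a short page ends the loop
    page += 1;
  }
  return items;
}
```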
Configuration:
```json
{
  "backend": "infomaxgraphql",
  "data_endpoint": {
    "url": "https://api.infomax.example.com/graphql",
    "token": "your-bearer-token",
    "operationName": "TourSearch",
    "query": "query TourSearch($pagination: PaginationInput) { ... }",
    "variables": "{}"
  }
}
```