# SQL Syntax Reference

This page documents all SQL annotations and modifiers supported by SQG.
## DBeaver Compatibility

SQG's syntax is designed to be fully compatible with DBeaver, the popular open-source database IDE. The `@set` variable syntax and `${variable}` references are native DBeaver features, which means you can:

- Develop queries interactively with full autocomplete for tables and columns
- Execute and test queries directly in DBeaver before generating code
- Modify `@set` values and re-run to test different parameter scenarios
- Debug with query plans to optimize performance

This compatibility lets you use DBeaver as your primary SQL development environment: write and test queries there, then run `sqg` to generate type-safe code.
## File Structure

An SQG SQL file contains multiple blocks, each starting with a comment annotation:

```sql
-- MIGRATE 1
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);

-- MIGRATE 2
ALTER TABLE users ADD COLUMN email TEXT;

-- TESTDATA test1

-- QUERY get_all_users
SELECT * FROM users;

-- QUERY get_user :one
@set id = 1
SELECT * FROM users WHERE id = ${id};

-- EXEC create_user
@set name = 'John'
@set email = 'john@example.com'
INSERT INTO users (name, email) VALUES (${name}, ${email});
```
## Block Types

### MIGRATE

Schema migrations are executed in order to set up the database structure.

```sql
-- MIGRATE 1
CREATE TABLE users (
  id INTEGER PRIMARY KEY,
  name TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- MIGRATE 2
CREATE INDEX idx_users_name ON users(name);

-- MIGRATE 3
ALTER TABLE users ADD COLUMN email TEXT UNIQUE;
```

Rules:

- Migrations must be numbered sequentially starting from 1
- Each migration is executed once during type introspection
- Generated code includes a `getMigrations()` static method
- You're responsible for tracking which migrations have run in production
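Since tracking is your responsibility, one minimal approach is to persist how many migrations have already run and apply only the rest. The sketch below assumes `getMigrations()` returns the migration SQL strings in order; the helper name is hypothetical, not part of SQG:

```typescript
// Hypothetical helper (not SQG's API): because migrations are numbered
// sequentially from 1, the count of applied migrations doubles as an
// index into the ordered list returned by getMigrations().
function pendingMigrations(all: string[], appliedCount: number): string[] {
  return all.slice(appliedCount);
}

const migrations = [
  'CREATE TABLE users (id INTEGER PRIMARY KEY)',
  'ALTER TABLE users ADD COLUMN name TEXT',
];

// One migration already ran in production, so only the second is pending.
const toRun = pendingMigrations(migrations, 1);
```

In production you would read `appliedCount` from a metadata table and execute each pending statement in order.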
### TESTDATA

Populate sample data used during type introspection. This data helps SQG understand nullable columns and complex return types.

```sql
-- TESTDATA
INSERT INTO users (id, name, email) VALUES (1, 'Alice', 'alice@example.com');
INSERT INTO posts (id, user_id, title) VALUES (1, 1, 'Hello World');
```

Rules:

- TESTDATA blocks are executed after all migrations
- Use meaningful test data that exercises your queries
- Data is only used during generation, not included in the output
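The reason meaningful data matters: sample rows are what reveal nullability. The sketch below illustrates the idea only; it is not SQG's implementation:

```typescript
// Illustrative only (assumed logic, not SQG's code): a column whose
// sample values include NULL can be inferred as nullable.
function isNullable(samples: unknown[]): boolean {
  return samples.some((v) => v === null);
}

const emailSamples = ['alice@example.com', null]; // TESTDATA included a NULL email
const nameSamples = ['Alice', 'Bob'];             // every sample name was set

const emailNullable = isNullable(emailSamples); // true
const nameNullable = isNullable(nameSamples);   // false
```

If your test data never exercises a NULL, a generator has no evidence the column can be null, which is why the rules above recommend representative rows.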
### QUERY

Select queries that return data.

```sql
-- QUERY find_active_users
SELECT * FROM users WHERE active = true;
```

Generated code returns:

- An array of row objects (default)
- A single row with the `:one` modifier
- Column values with the `:pluck` modifier
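The three shapes can be pictured as plain TypeScript values. The row type below is assumed for illustration; it is not generated output:

```typescript
// Sketch of the three result shapes (row type assumed, not SQG output):
type UserRow = { id: number | null; name: string | null };

const rows: UserRow[] = [
  { id: 1, name: 'Alice' },
  { id: null, name: 'Bob' },
];

const all: UserRow[] = rows;                              // default: every row
const one: UserRow | undefined = rows[0];                 // :one: first row or undefined
const plucked: (number | null)[] = rows.map((r) => r.id); // :pluck: first-column values
```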
### EXEC

Execute statements that don't return rows (INSERT, UPDATE, DELETE).

```sql
-- EXEC deactivate_user
@set id = 1
UPDATE users SET active = false WHERE id = ${id};
```

Generated code returns:

- A database-specific result type (e.g., `RunResult` for SQLite)
- Typically includes a `changes` count and `lastInsertRowid`
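A common pattern is to check `changes` to confirm the statement touched a row. The interface below mirrors the SQLite `RunResult` fields named above; treat it as an assumed shape, not SQG's exact output:

```typescript
// Assumed result shape (field names as described above for SQLite):
interface RunResultLike {
  changes: number;                  // rows affected by the statement
  lastInsertRowid: number | bigint; // rowid of the last INSERT
}

// e.g. verify that an UPDATE matched at least one row:
function wasApplied(result: RunResultLike): boolean {
  return result.changes > 0;
}

const applied = wasApplied({ changes: 1, lastInsertRowid: 0 });
```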
### TABLE (DuckDB Only)

Generate high-performance bulk insert appenders for DuckDB tables. Appenders provide significantly faster inserts than individual INSERT statements.

```sql
-- TABLE users :appender
```

Syntax:

- `-- TABLE <table_name> :appender` generates an appender for the specified table
- The `:appender` modifier is required
- SQG introspects the table schema to generate type-safe row interfaces

Generated code includes:

- A factory method to create the appender (e.g., `createUsersAppender()`)
- A typed row interface (e.g., `UsersRow`)
- An appender class with `append()`, `appendMany()`, `flush()`, and `close()` methods

Example:

```sql
-- MIGRATE 1
CREATE TABLE events (
  id INTEGER PRIMARY KEY,
  event_type VARCHAR NOT NULL,
  payload VARCHAR,
  created_at TIMESTAMP
);

-- TABLE events :appender
```

```typescript
// Generated interface
interface EventsRow {
  id: number;
  event_type: string;
  payload: string | null;
  created_at: DuckDBTimestampValue | null;
}

// Usage
const appender = await queries.createEventsAppender();

// Append rows (10-100x faster than INSERT)
appender.append({ id: 1, event_type: 'click', payload: null, created_at: null });
appender.appendMany([
  { id: 2, event_type: 'view', payload: '{"page": "home"}', created_at: null },
  { id: 3, event_type: 'click', payload: null, created_at: null },
]);

appender.flush(); // Flush buffered data to table
appender.close(); // Flush and release resources
```

Notes:

- Appenders are only supported with the `typescript/duckdb` generator
- Use appenders for batch inserts (ETL, data pipelines, bulk imports)
- Call `flush()` periodically for long-running imports
- Always call `close()` when done to release resources
- See TypeScript + DuckDB for detailed usage
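For a long-running import, the periodic-flush advice above can be wrapped in a small driver loop. The `Appender` interface below is assumed from the method list in this section; the helper itself is a sketch, not generated code:

```typescript
// Sketch: drive an appender with periodic flushes on a long import.
// Interface shape assumed from the generated methods listed above.
interface Appender<R> {
  append(row: R): void;
  flush(): void;
  close(): void;
}

function bulkLoad<R>(appender: Appender<R>, rows: Iterable<R>, flushEvery = 10_000): number {
  let count = 0;
  for (const row of rows) {
    appender.append(row);
    if (++count % flushEvery === 0) appender.flush(); // bound buffered memory
  }
  appender.close(); // final flush and resource release
  return count;
}

// Demo against a recording stub (a real appender would come from a
// factory such as createEventsAppender()):
const demoCalls: string[] = [];
const demo: Appender<number> = {
  append: () => { demoCalls.push('append'); },
  flush: () => { demoCalls.push('flush'); },
  close: () => { demoCalls.push('close'); },
};
const loaded = bulkLoad(demo, [1, 2, 3], 2);
```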
## Modifiers

Modifiers change how query results are returned. Add them after the query name:

```sql
-- QUERY name :modifier1 :modifier2
```

### :all (default)

Returns all matching rows as an array.

```sql
-- QUERY get_users :all
SELECT * FROM users;
```

```typescript
// Generated: User[]
getUsers(): { id: number | null; name: string | null; }[]
```

### :one

Returns a single row or `undefined`. Use for queries expected to return 0 or 1 rows.

```sql
-- QUERY get_user :one
@set id = 1
SELECT * FROM users WHERE id = ${id};
```

```typescript
// Generated: User | undefined
getUser(id: number): { id: number | null; name: string | null; } | undefined
```

### :pluck

Extracts values from the first (or only) column. Useful for fetching lists of IDs or scalar values.

```sql
-- QUERY get_user_ids :pluck
SELECT id FROM users;
```

```typescript
// Generated: (number | null)[]
getUserIds(): (number | null)[]
```

### Combining Modifiers

Modifiers can be combined:
```sql
-- QUERY count_users :one :pluck
SELECT COUNT(*) FROM users;
```

```typescript
// Generated: number | null | undefined
countUsers(): number | null | undefined
```

```sql
-- QUERY get_first_email :one :pluck
SELECT email FROM users ORDER BY id LIMIT 1;
```

```typescript
// Generated: string | null | undefined
getFirstEmail(): string | null | undefined
```
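Conceptually, `:one :pluck` composes the two transformations: reduce rows to first-column values, then take the first value (or `undefined`). A small illustration, not generated code:

```typescript
// Conceptual composition of :pluck then :one (illustrative only):
function onePluck<T>(columnValues: T[]): T | undefined {
  return columnValues.length > 0 ? columnValues[0] : undefined;
}

const count = onePluck([42]);         // COUNT(*) always yields one row
const missing = onePluck<string>([]); // no matching rows: undefined
```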
## Parameters

### Defining Parameters

Use `@set` to define parameters with sample values:

```sql
-- QUERY find_users_by_name
@set name = 'John'
SELECT * FROM users WHERE name = ${name};
```

The sample value (`'John'`) is used during type introspection. At runtime, the parameter becomes a function argument.

### Multiple Parameters

```sql
-- QUERY find_users
@set name = 'John'
@set min_age = 18
@set max_age = 65
SELECT * FROM users
WHERE name LIKE ${name}
  AND age >= ${min_age}
  AND age <= ${max_age};
```

```typescript
findUsers(name: string, min_age: number, max_age: number): User[]
```

### Parameter Order

Parameters appear in the generated function in the order they're defined with `@set`:

```sql
-- QUERY example
@set first = 'a'
@set second = 1
@set third = true
SELECT * FROM t WHERE a = ${first} AND b = ${second} AND c = ${third};
```

```typescript
example(first: string, second: number, third: boolean): Result[]
```
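To make the ordering rule concrete, here is a hypothetical sketch (not SQG's actual parser) that collects `@set` lines in order of appearance, which is the order the generated arguments take:

```typescript
// Hypothetical sketch (not SQG's parser): collect @set declarations
// top to bottom; their order determines the argument order.
function parseParams(block: string): Array<[name: string, sample: string]> {
  const params: Array<[string, string]> = [];
  for (const line of block.split('\n')) {
    const m = /^@set\s+(\w+)\s*=\s*(.+)$/.exec(line.trim());
    if (m) params.push([m[1], m[2]]);
  }
  return params;
}

const block = `@set first = 'a'
@set second = 1
@set third = true
SELECT * FROM t;`;

const names = parseParams(block).map(([name]) => name);
```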
### Parameter Types

Parameter types are inferred from the sample values:

| Sample Value | Inferred Type |
|---|---|
| `'text'` | `string` |
| `123` | `number` (integer) |
| `12.5` | `number` (float) |
| `true` / `false` | `boolean` |
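The table above amounts to a small classification of the sample literal. A sketch of that logic, assumed for illustration rather than taken from SQG's source:

```typescript
// Sketch of the inference table above (assumed logic, not SQG's code):
function inferParamType(sample: string): 'string' | 'number' | 'boolean' {
  const s = sample.trim();
  if (s.startsWith("'") && s.endsWith("'")) return 'string'; // quoted literal
  if (s === 'true' || s === 'false') return 'boolean';
  return 'number'; // integer and float literals both map to number
}
```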
## Block Comments

You can use block comments for queries with configuration:

```sql
/* QUERY complex_query :one
   result:
     count: integer not null
     email: text not null
*/
SELECT COUNT(*) as count, email FROM users GROUP BY email LIMIT 1;
```

The YAML-like configuration allows explicit type overrides when automatic inference isn't sufficient.
## Inline Comments

Regular SQL comments within queries are preserved:

```sql
-- QUERY get_active_users
SELECT * FROM users
WHERE active = true       -- Filter active users
  AND deleted_at IS NULL; -- Exclude deleted
```

## Complex Types (DuckDB)
DuckDB supports rich data types that SQG fully maps:

### Arrays/Lists

```sql
-- QUERY get_tags :one
SELECT ['tag1', 'tag2', 'tag3'] as tags;
```

```typescript
// Generated:
{ tags: { items: (string | null)[] } | null }
```

### Structs

```sql
-- QUERY get_user_data :one
SELECT {'name': 'John', 'age': 30} as user;
```

```typescript
// Generated:
{ user: { entries: { name: string | null; age: number | null; } } | null }
```

### Maps

```sql
-- QUERY get_metadata :one
SELECT MAP {'key1': 'value1', 'key2': 'value2'} as meta;
```

```typescript
// Generated:
{ meta: { entries: { key: string; value: string | null }[] } | null }
```

### Nested Structures

```sql
-- QUERY get_complex :one
SELECT {
  'user': {'id': 1, 'name': 'John'},
  'tags': ['admin', 'user'],
  'settings': MAP {'theme': 'dark'}
} as data;
```

```typescript
// Generated:
{
  data: {
    entries: {
      user: { entries: { id: number | null; name: string | null } } | null;
      tags: { items: (string | null)[] } | null;
      settings: { entries: { key: string; value: string | null }[] } | null;
    };
  } | null;
}
```
## Best Practices

### Naming Conventions

- Use `snake_case` for query names: `get_user_by_id`
- Generated code converts method names to `camelCase`: `getUserById`
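The conversion itself is mechanical; a sketch of the rule (illustrative, and SQG's actual handling of edge cases may differ):

```typescript
// Sketch of snake_case -> camelCase (illustrative, not SQG's code):
// uppercase each letter or digit that follows an underscore.
function toCamelCase(name: string): string {
  return name.replace(/_([a-z0-9])/g, (_match, ch: string) => ch.toUpperCase());
}

const methodName = toCamelCase('get_user_by_id');
```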
### One Query Per Block

Each query should be in its own block:

```sql
-- QUERY get_users
SELECT * FROM users;

-- QUERY get_posts
SELECT * FROM posts;
```
### Use Meaningful Test Data

Test data helps with type inference:

```sql
-- TESTDATA
-- Include NULL values to test nullable handling
INSERT INTO users (id, name, email) VALUES (2, 'No Email', NULL);
```
### Keep Migrations Atomic

Each migration should be a single logical change:

```sql
-- MIGRATE 1
CREATE TABLE users (id INTEGER PRIMARY KEY);

-- MIGRATE 2
ALTER TABLE users ADD COLUMN name TEXT;

-- MIGRATE 3
CREATE INDEX idx_users_name ON users(name);
```