SQL Syntax Reference

This page documents all SQL annotations and modifiers supported by SQG.

SQG’s syntax is designed to be fully compatible with DBeaver, the popular open-source database IDE. The @set variable syntax and ${variable} references are native DBeaver features, which means you can:

  • Develop queries interactively with full autocomplete for tables and columns
  • Execute and test queries directly in DBeaver before generating code
  • Modify @set values and re-run to test different parameter scenarios
  • Debug with query plans to optimize performance

This compatibility lets you use DBeaver as your primary SQL development environment—write and test queries there, then run sqg to generate type-safe code.

An SQG SQL file contains multiple blocks, each starting with a comment annotation:

Complete File Structure Example
Example showing all block types in an SQG SQL file:
-- MIGRATE 1
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
-- MIGRATE 2
ALTER TABLE users ADD COLUMN email TEXT;
-- TESTDATA test1
INSERT INTO users VALUES (1, 'Test User', '[email protected]');
-- QUERY get_all_users
SELECT * FROM users;
-- QUERY get_user :one
@set id = 1
SELECT * FROM users WHERE id = ${id};
-- EXEC create_user
@set name = 'John'
@set email = '[email protected]'
INSERT INTO users (name, email) VALUES (${name}, ${email});

MIGRATE

Schema migrations are executed in order to set up the database structure.

-- MIGRATE 1
CREATE TABLE users (
  id INTEGER PRIMARY KEY,
  name TEXT NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- MIGRATE 2
CREATE INDEX idx_users_name ON users(name);
-- MIGRATE 3
ALTER TABLE users ADD COLUMN email TEXT UNIQUE;

Rules:

  • Migrations must be numbered sequentially starting from 1
  • Each migration is executed once during type introspection
  • Generated code includes a getMigrations() static method (see the sketch after this list)
  • You’re responsible for tracking which migrations have run in production
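
A minimal sketch of the last two rules in practice, assuming getMigrations() returns the migration SQL strings in order and that you persist the applied version yourself; the Queries class name and the string[] return shape are assumptions for illustration, not SQG's documented API:

// Hypothetical migration runner; Queries and the string[] return
// shape are assumptions, not the generator's actual output.
declare const Queries: { getMigrations(): string[] };
declare const db: { exec(sql: string): void };

function runPendingMigrations(appliedVersion: number): void {
  Queries.getMigrations().forEach((sql, index) => {
    const version = index + 1; // migrations are numbered from 1
    if (version > appliedVersion) {
      db.exec(sql); // tracking appliedVersion is your responsibility
    }
  });
}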

TESTDATA

Populate sample data used during type introspection. This data helps SQG understand nullable columns and complex return types.

-- TESTDATA
INSERT INTO users (id, name, email) VALUES
  (1, 'Alice', '[email protected]'),
  (2, 'Bob', '[email protected]');
INSERT INTO posts (id, user_id, title) VALUES
  (1, 1, 'Hello World');

Rules:

  • TESTDATA blocks are executed after all migrations
  • Use meaningful test data that exercises your queries
  • Data is only used during generation, not included in output

QUERY

SELECT queries that return rows.

-- QUERY find_active_users
SELECT * FROM users WHERE active = true;

Generated code returns (a usage sketch follows the list):

  • Array of row objects (default)
  • Single row with :one modifier
  • Column values with :pluck modifier
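
As a usage sketch, calling the generated method for the query above; the queries object and the synchronous call style are assumptions, and the exact shape depends on the generator:

// Hypothetical usage; the method name follows the
// snake_case-to-camelCase convention described below.
declare const queries: {
  findActiveUsers(): { id: number | null; name: string | null }[];
};

const users = queries.findActiveUsers(); // default: array of rows
for (const user of users) {
  console.log(user.id, user.name);
}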

EXEC

Executes statements that don’t return rows (INSERT, UPDATE, DELETE).

-- EXEC deactivate_user
@set id = 1
UPDATE users SET active = false WHERE id = ${id};

Generated code returns (a usage sketch follows the list):

  • Database-specific result type (e.g., RunResult for SQLite)
  • Typically includes changes count and lastInsertRowid
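
A usage sketch for the EXEC method above, assuming a better-sqlite3-style RunResult; the exact result fields depend on the database driver:

// Hypothetical usage; the RunResult field names are an assumption
// based on the better-sqlite3 driver.
declare const queries: {
  deactivateUser(id: number): { changes: number; lastInsertRowid: number | bigint };
};

const result = queries.deactivateUser(1);
console.log(`rows changed: ${result.changes}`);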

TABLE

Generates high-performance bulk-insert appenders for DuckDB tables. Appenders provide significantly faster inserts than individual INSERT statements.

-- TABLE users :appender

Syntax:

  • -- TABLE <table_name> :appender - generates an appender for the specified table
  • The :appender modifier is required
  • SQG introspects the table schema to generate type-safe row interfaces

Generated code includes:

  • A factory method to create the appender (e.g., createUsersAppender())
  • A typed row interface (e.g., UsersRow)
  • An appender class with append(), appendMany(), flush(), and close() methods

Example:

-- MIGRATE 1
CREATE TABLE events (
  id INTEGER PRIMARY KEY,
  event_type VARCHAR NOT NULL,
  payload VARCHAR,
  created_at TIMESTAMP
);
-- TABLE events :appender
// Generated interface
interface EventsRow {
  id: number;
  event_type: string;
  payload: string | null;
  created_at: DuckDBTimestampValue | null;
}

// Usage
const appender = await queries.createEventsAppender();

// Append rows (10-100x faster than INSERT)
appender.append({ id: 1, event_type: 'click', payload: null, created_at: null });
appender.appendMany([
  { id: 2, event_type: 'view', payload: '{"page": "home"}', created_at: null },
  { id: 3, event_type: 'click', payload: null, created_at: null },
]);

appender.flush(); // Flush buffered data to table
appender.close(); // Flush and release resources

Notes:

  • Appenders are only supported with the typescript/duckdb generator
  • Use appenders for batch inserts (ETL, data pipelines, bulk imports)
  • Call flush() periodically for long-running imports
  • Always call close() when done to release resources
  • See TypeScript + DuckDB for detailed usage

Modifiers

Modifiers change how query results are returned. Add them after the query name:

-- QUERY name :modifier1 :modifier2

:all

Returns all matching rows as an array. This is the default behavior.

-- QUERY get_users :all
SELECT * FROM users;
// Generated: User[]
getUsers(): { id: number | null; name: string | null; }[]

:one

Returns a single row or undefined. Use for queries expected to return 0 or 1 rows.

-- QUERY get_user :one
@set id = 1
SELECT * FROM users WHERE id = ${id};
// Generated: User | undefined
getUser(id: number): { id: number | null; name: string | null; } | undefined

:pluck

Extracts values from the first (or only) column. Useful for fetching lists of IDs or scalar values.

-- QUERY get_user_ids :pluck
SELECT id FROM users;
// Generated: (number | null)[]
getUserIds(): (number | null)[]

Combining Modifiers

Modifiers can be combined:

-- QUERY count_users :one :pluck
SELECT COUNT(*) FROM users;
// Generated: number | null | undefined
countUsers(): number | null | undefined
-- QUERY get_first_email :one :pluck
SELECT email FROM users ORDER BY id LIMIT 1;
// Generated: string | null | undefined
getFirstEmail(): string | null | undefined
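
With :one :pluck, undefined and null carry different information: undefined means no row matched, while null means a row matched but the column value is NULL. A handling sketch for the getFirstEmail example above, with the queries object assumed:

declare const queries: { getFirstEmail(): string | null | undefined };

const email = queries.getFirstEmail();
if (email === undefined) {
  // :one matched no rows
} else if (email === null) {
  // a row matched, but its email column is NULL
} else {
  console.log(email.toLowerCase()); // a plain string here
}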

Parameters

Use @set to define parameters with sample values:

-- QUERY find_users_by_name
@set name = 'John'
SELECT * FROM users WHERE name = ${name};

The sample value ('John') is used during type introspection. At runtime, the parameter becomes a function argument.
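
A usage sketch to make that concrete; the queries object is an assumption, and the method name follows the camelCase convention described under best practices:

declare const queries: {
  findUsersByName(name: string): { id: number | null; name: string | null }[];
};

// 'John' from @set is only a sample; the caller supplies the real value.
const users = queries.findUsersByName('Alice');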

-- QUERY find_users
@set name = 'John'
@set min_age = 18
@set max_age = 65
SELECT * FROM users
WHERE name LIKE ${name}
  AND age >= ${min_age}
  AND age <= ${max_age};
// Generated:
findUsers(name: string, min_age: number, max_age: number): User[]

Parameters appear in the generated function in the order they’re defined with @set:

-- QUERY example
@set first = 'a'
@set second = 1
@set third = true
SELECT * FROM t WHERE a = ${first} AND b = ${second} AND c = ${third};
// Generated:
example(first: string, second: number, third: boolean): Result[]

Parameter types are inferred from the sample values:

Sample Value    Inferred Type
'text'          string
123             number (integer)
12.5            number (float)
true / false    boolean

You can use block comments for queries that need extra configuration:

/* QUERY complex_query :one
result:
  count: integer not null
  email: text not null
*/
SELECT COUNT(*) as count, email FROM users GROUP BY email LIMIT 1;

The YAML-like configuration allows explicit type overrides when automatic inference isn’t sufficient.
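
With the overrides above, the not null annotations should remove the | null from the inferred columns; a sketch of the plausible generated signature (the exact output is an assumption):

// Plausible generated signature under the overrides (assumed):
complexQuery(): { count: number; email: string } | undefined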

Regular SQL comments within queries are preserved:

-- QUERY get_active_users
SELECT * FROM users
WHERE active = true -- Filter active users
AND deleted_at IS NULL; -- Exclude deleted

DuckDB Complex Types

DuckDB supports rich data types that SQG fully maps:

Lists:

-- QUERY get_tags :one
SELECT ['tag1', 'tag2', 'tag3'] as tags;
// Generated:
{
  tags: {
    items: (string | null)[]
  } | null
}

Structs:

-- QUERY get_user_data :one
SELECT {'name': 'John', 'age': 30} as user;
// Generated:
{
  user: {
    entries: {
      name: string | null;
      age: number | null;
    }
  } | null
}

Maps:

-- QUERY get_metadata :one
SELECT MAP {'key1': 'value1', 'key2': 'value2'} as meta;
// Generated:
{
  meta: {
    entries: { key: string; value: string | null }[]
  } | null
}

Nested types:

-- QUERY get_complex :one
SELECT {
  'user': {'id': 1, 'name': 'John'},
  'tags': ['admin', 'user'],
  'settings': MAP {'theme': 'dark'}
} as data;
// Generated:
{
  data: {
    entries: {
      user: { entries: { id: number | null; name: string | null } } | null;
      tags: { items: (string | null)[] } | null;
      settings: { entries: { key: string; value: string | null }[] } | null;
    }
  } | null
}
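
Consuming these wrapper shapes in application code means unwrapping each nullable layer; a sketch using optional chaining, with the queries object assumed (settings omitted for brevity):

declare const queries: {
  getComplex():
    | {
        data: {
          entries: {
            user: { entries: { id: number | null; name: string | null } } | null;
            tags: { items: (string | null)[] } | null;
          };
        } | null;
      }
    | undefined;
};

const row = queries.getComplex();
const name = row?.data?.entries.user?.entries.name; // string | null | undefined
const firstTag = row?.data?.entries.tags?.items[0];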
Best Practices

Use consistent naming for queries:

  • Use snake_case for query names: get_user_by_id
  • Generated code converts to camelCase: getUserById

Each query should be in its own block:

-- QUERY get_users
SELECT * FROM users;
-- QUERY get_posts
SELECT * FROM posts;

Test data helps with type inference:

-- TESTDATA
-- Include NULL values to test nullable handling
INSERT INTO users (id, name, email) VALUES
  (1, 'Test', '[email protected]'),
  (2, 'No Email', NULL);

Each migration should be a single logical change:

-- MIGRATE 1
CREATE TABLE users (id INTEGER PRIMARY KEY);
-- MIGRATE 2
ALTER TABLE users ADD COLUMN name TEXT;
-- MIGRATE 3
CREATE INDEX idx_users_name ON users(name);