A database library for the JVM

Your SQL is wrong. You’ll find out in tests, not at 2 AM.

Every type your database has, modeled exactly. Every query, verified against a real schema before production. Every transaction, explicit. No annotations, no reflection, no pages.

Languages
Java · Kotlin · Scala
Databases
PostgreSQL · MariaDB · DuckDB · SQLite · Oracle · SQL Server · DB2

Find SQL bugs at test time, not 2 AM

Query Analysis verifies your SQL against the actual database schema. Wrong column type? Missing .opt() on a nullable column? Parameter count mismatch? Catch it in tests, not in production.

// name is VARCHAR in the database, but declared as Int here
data class Product(val id: Int, val name: Int, val price: Double)

val listProductsBad: OperationRead.Query<List<Product>> =
    Fragment.of("SELECT id, name, price FROM products")
        .query(productCodec.all())

fun check() {
    val checker: QueryChecker = QueryChecker.create(transactor)
    checker.check(listProductsBad)
}
Query Analysis Report showing type and nullability mismatches
No other Java SQL library does this. jOOQ validates DSL at compile time but can't check hand-written SQL. Hibernate validates annotations at startup but not query correctness. foundations-jdbc validates your actual queries against your actual database.

The depth of analysis depends on what each database's JDBC driver reports. See database-specific behavior.

What existing libraries still get wrong

ORMs and query builders solve the verbosity of raw JDBC. But fundamental problems remain: problems that surface in production as silent data corruption, runtime exceptions, and database lock-in.

What goes wrong, and how foundations fixes it

Symptom 1: Queries are unchecked strings
Rename a column in the schema and nothing fails until production. No library catches this at test time.
Remedy 1: Query Analysis catches bugs in tests
Validate every query against a real database in your test suite. Schema changes break tests, not production.

Symptom 2: Nullability is invisible
A nullable column and a non-nullable column have the same Java type. Nothing in the API tells you which columns can be null.
Remedy 2: Nullable means Optional
.opt() changes the return type to Optional / T? / Option[T]. If the type isn't optional, the column is guaranteed non-null.

Symptom 3: Type fidelity is lost
Your DuckDB STRUCT becomes a shapeless Object. Your PostgreSQL int4range becomes a string you have to parse yourself. The database has real types. Your library ignores them.
Remedy 3: Every database type, modeled exactly
Composite types, domains, enums, arrays, intervals. All first-class, with full roundtrip fidelity.

Symptom 4: DB-specific features are second-class
Libraries target the lowest common denominator. PostgreSQL arrays, Oracle MULTISET, MariaDB unsigned types. All require escape hatches.
Remedy 4: Database-specific by design
Dedicated type classes for each database. PgTypes, OracleTypes, MariaDbTypes: use your database's full feature set.
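The "nullable means Optional" rule can be illustrated with a self-contained sketch. ColType below is a toy stand-in for illustration only, not the library's DbType API; it shows the principle that .opt() moves nullability from a runtime surprise into the signature.

```java
import java.util.Optional;
import java.util.function.Function;

// Toy stand-in for a column type (NOT the library's DbType API):
// a name plus a parser from the raw column value to T.
record ColType<T>(String name, Function<String, T> parse) {
    // Non-optional read: a SQL NULL is an immediate, descriptive error.
    T read(String raw) {
        if (raw == null)
            throw new IllegalStateException(
                "column '" + name + "' was NULL but the type is non-optional");
        return parse.apply(raw);
    }

    // opt() shifts the result type to Optional<T>: nullability now lives
    // in the signature instead of surfacing as an unexpected null.
    Function<String, Optional<T>> opt() {
        return raw -> Optional.ofNullable(raw).map(parse);
    }
}

public class OptSketch {
    public static void main(String[] args) {
        ColType<Integer> int4 = new ColType<>("int4", Integer::parseInt);
        System.out.println(int4.read("42"));          // 42
        System.out.println(int4.opt().apply(null));   // Optional.empty
    }
}
```

The point of the design: if a decoder's type is plain T, the column is guaranteed non-null; if the column can be null, the type says so.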

From zero to query in under a minute

DuckDB runs in-memory, no database server needed. A Fragment is a typed SQL building block.

Main.java
import dev.typr.foundations.*;
import dev.typr.foundations.connect.*;

public class Main {
    public static void main(String[] args) {
        var tx = ConnectionSource.of(DuckDbConfig.inMemory().build()).transactor();
        int answer = Fragment.of("SELECT 42")
            .queryExactlyOne(DuckDbTypes.integer)
            .transact(tx);
        System.out.println("Result: " + answer);
    }
}
build.gradle.kts
dependencies {
    implementation("dev.typr.foundations:foundations-jdbc:1.0.0-RC1")
    // Add your driver
    runtimeOnly("org.duckdb:duckdb_jdbc:1.1.3")
}

Prefer a working app? example-kotlin · example-spring-boot

Messages that actually help

When things go wrong, you get helpful messages that tell you exactly what happened, not a cryptic stack trace.

Runtime parse error with detailed context · PostgreSQL error with hint

Functional to the foundations

A database library built on functional principles. Fragments, codecs, types, operations: each one is a value you compose. Same primitives top to bottom.

Composable, top to bottom

DbType, Fragment, RowCodec, Operation: every layer composes. Codecs join for tuples; left joins wrap the right side in Optional. Combine independent operations with combine, chain dependent ones with then. The library can tell the difference, and the optimizer will too.
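The combine/then distinction can be sketched in a few lines of plain Java. Op and every signature below are illustrative assumptions, not the library's actual API; only the names combine and then come from the text. The structural point is real, though: with combine both operations exist up front, so a library can inspect and batch them, while with then the second operation cannot exist until the first has produced a result.

```java
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;

// Illustrative stand-in for an operation (NOT the library's Operation type).
interface Op<A> {
    A run(); // stand-in for executing against a connection

    // combine: both operations are known up front and independent,
    // so a library (or its optimizer) could batch or pipeline them.
    default <B, C> Op<C> combine(Op<B> other, BiFunction<A, B, C> f) {
        return () -> f.apply(this.run(), other.run());
    }

    // then: the second operation depends on the first's result,
    // so it only comes into existence after the first has run.
    default <B> Op<B> then(Function<A, Op<B>> f) {
        return () -> f.apply(this.run()).run();
    }

    static <A> Op<A> of(Supplier<A> s) { return s::get; }
}

public class ComposeSketch {
    public static void main(String[] args) {
        Op<Integer> userId = Op.of(() -> 7);
        Op<String> region = Op.of(() -> "eu");
        // Independent: both shapes visible before execution.
        Op<String> both = userId.combine(region, (id, r) -> id + "@" + r);
        // Dependent: the second query needs the first's result.
        Op<String> chained = userId.then(id -> Op.of(() -> "user-" + id));
        System.out.println(both.run());    // 7@eu
        System.out.println(chained.run()); // user-7
    }
}
```

This is the applicative-versus-monadic split from functional programming: combine is the applicative case a library can analyze statically, then is the monadic case it cannot.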

Full roundtrip fidelity

Read a value from the database, write it back unchanged. Every type modeled exactly as the database defines it: composites, ranges, arrays, enums, domains.

Read or write, in the type

Readonly transactions are first-class. transactRead hands you a ConnectionRead; transact hands you a Connection. Operations declare what they need. The type system tells the library, and the next reviewer, what's allowed.
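Under assumed, simplified interfaces (the real ConnectionRead and Connection will differ), the principle looks like this: write capability is a subtype of read capability, and each operation declares the weakest capability it needs.

```java
// Illustrative stand-ins, NOT the library's actual connection types.
interface ConnRead {
    String query(String sql); // read-only capability
}

interface ConnWrite extends ConnRead {
    int update(String sql); // full capability adds writes
}

public class ReadWriteSketch {
    // Declares it only reads: callers may pass a read-only connection,
    // and the compiler rejects any attempt to call update() inside.
    static String fetchGreeting(ConnRead c) {
        return c.query("SELECT 'hi'");
    }

    // Declares it writes: a read-only connection won't typecheck here.
    static int touch(ConnWrite c) {
        return c.update("UPDATE t SET seen = true");
    }

    public static void main(String[] args) {
        ConnWrite conn = new ConnWrite() {
            public String query(String sql) { return "hi"; }
            public int update(String sql) { return 1; }
        };
        System.out.println(fetchGreeting(conn)); // a writer can always read
        System.out.println(touch(conn));
    }
}
```

Passing a read-only connection to touch() fails at compile time, which is exactly the reviewer-facing guarantee described above.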

No reflection, no magic

Zero runtime reflection, zero bytecode generation, zero annotation processing. GraalVM native-image works out of the box. You can read every line of what runs.

Not an ORM

No entity manager, no session, no lazy loading, no dirty checking, no surprises. You write SQL, you get typed results. That's it.

Query Analysis

Verify SQL at test time. Parameter types, column types, nullability: all checked against the real database schema. Catch bugs before production.

Start with your schema

A real PostgreSQL table: a uuid, an enum, a tstzrange, a jsonb, two nullable columns. The RowCodec maps each to a DbType that knows exactly how to read and write its value. No getObject() guessing, no wasNull() checks.

Your database schema
CREATE TYPE plan_tier AS ENUM ('free', 'pro', 'team');

CREATE TABLE subscription (
  id uuid PRIMARY KEY,
  email text NOT NULL,
  plan plan_tier NOT NULL,
  active_range tstzrange NOT NULL,
  metadata jsonb,
  cancelled_at timestamptz
);

data class Subscription(
    val id: UUID,
    val email: String,
    val plan: Plan,
    val activeRange: Range<Instant>,
    val metadata: Jsonb?,
    val cancelledAt: Instant?
)

val plan: PgType<Plan> = PgTypes.ofEnum<Plan>("plan_tier")

val subscriptionCodec: RowCodecNamed<Subscription> =
    RowCodec.namedBuilder<Subscription>()
        .field("id", PgTypes.uuid, Subscription::id)
        .field("email", PgTypes.text, Subscription::email)
        .field("plan", plan, Subscription::plan)
        .field("active_range", PgTypes.tstzrange, Subscription::activeRange)
        .field("metadata", PgTypes.jsonb.opt(), Subscription::metadata)
        .field("cancelled_at", PgTypes.timestamptz.opt(), Subscription::cancelledAt)
        .build(::Subscription)

Building blocks, faithfully modeled

Composite types, wrapper types, and arrays: each database has its own type system, and each one is modeled faithfully.

The dimensions composite type becomes a record with typed fields. PgStruct handles the wire format.

PostgreSQL DDL
CREATE TYPE dimensions AS (
  width double precision,
  height double precision,
  depth double precision,
  unit varchar(10)
);

data class Dim(
    val width: Double, val height: Double,
    val depth: Double, val unit: String
)

val dimCodec: RowCodecNamed<Dim> =
    RowCodec.namedBuilder<Dim>()
        .field("width", PgTypes.float8, Dim::width)
        .field("height", PgTypes.float8, Dim::height)
        .field("depth", PgTypes.float8, Dim::depth)
        .field("unit", PgTypes.text, Dim::unit)
        .build(::Dim)

// Named composite — reads and writes via CREATE TYPE dimensions
val pgType: PgType<Dim> = PgTypes.compositeOf("dimensions", dimCodec)

Queries are values you compose

Build fragments, combine them, pass them to functions, return them from functions. The optionally DSL gives you optional filters as branch points, and Query Analysis verifies every resulting SQL shape, not just the one your test happens to take.

// Reusable conditional filters as Fragment extensions —
// each wraps `.optionally().append(...)` so calls read like domain verbs.
// Query Analysis still expands every branch at test time.
fun Fragment.matchingName(name: String?): Fragment =
    optionally(name).append(" AND name LIKE ", SqlServerTypes.nvarchar)

fun Fragment.cheaperThan(max: BigDecimal?): Fragment =
    optionally(max).append(" AND price < ", SqlServerTypes.decimal)

fun Fragment.activeOnly(active: Boolean): Fragment =
    optionally(active).append(" AND active = 1")

val orders: List<OrderRow> =
    Fragment.of("SELECT id, name, price FROM orders WHERE 1 = 1")
        .matchingName(name)
        .cheaperThan(maxPrice)
        .activeOnly(onlyActive)
        .query(orderRowCodec.all())
        .run(conn)

Transactions you can see

Use Spring's @Transactional if that's your style, or manage transactions explicitly with Transactor. Either way, you get typed builders for every database and full control over the lifecycle. Shown here with Oracle.

// Oracle - typed config, no JDBC URL to remember
val tx =
    ConnectionSource.of(
        OracleConfig.builder("localhost", 1521, "xe", "app", "secret")
            .serviceName("XEPDB1")
            .build()
    ).transactor()

// Everything inside runs in one transaction
fun getGreeting(): String =
    sql { "SELECT 'Hello from Oracle' FROM dual" }
        .query(RowCodec.of(OracleTypes.varchar2).exactlyOne())
        .transact(tx)

Built-in JSON codecs

Every supported database can transfer data as JSON, and foundations lets you use it uniformly. Your RowCodec doubles as a JSON codec with zero extra code. Aggregate child rows with json_agg(), JSON_ARRAYAGG, or FOR JSON and parse them with the same types.

// RowCodec → JSON column type, zero extra code
val linesType: DuckDbType<List<OrderLine>> =
    DuckDbTypes.jsonArrayEncodedList(lineCodec)

fun getOrderLines(customerId: Int): List<OrderLine> =
    sql { """
        SELECT json_group_array(json_array(product, qty, price))
        FROM order_lines
        WHERE customer_id = ${DuckDbTypes.integer(customerId)}
    """ }
        .query(RowCodec.of(linesType).exactlyOne())
        .transact(tx)

Call any routine like a typed function

Define a procedure or function once. The builder tracks IN and OUT types statically. Functions use SELECT so every DbType reads correctly through the normal codec path. OUT params use a CallableStatement adapter that reuses the same DbRead logic.

// Define once, call many times — input and output types are baked in
val getUser: DbProcedure.Def1_2<Int, String, String> =
    DbProcedure.define("get_user_by_id")
        .input(PgTypes.int4)
        .out(PgTypes.text)
        .out(PgTypes.text)
        .build()

// call() returns an Operation — use it like any other operation
fun findUser(userId: Int): Pair<String, String> =
    getUser.call(userId).transact(tx)

How it compares

Approach
  Foundations: SQL + typed codecs · jOOQ: type-safe DSL + codegen · Hibernate: ORM with entity mapping · JDBI: SQL + annotations · JdbcTemplate: SQL + RowMapper · Exposed: Kotlin DSL

Languages
  Foundations: Java, Kotlin, Scala · jOOQ: Java, Kotlin, Scala · Hibernate: Java, Kotlin · JDBI: Java, Kotlin · JdbcTemplate: Java, Kotlin · Exposed: Kotlin only
  Note: jOOQ ships a KotlinGenerator (data classes) and ScalaGenerator / Scala3Generator (case classes), so Kotlin and Scala callers get generated code in their own language. Caveats: arrays still surface as Java arrays / List, Scala nullable columns are not mapped to Option[T], and Kotlin non-null types for NOT NULL columns require opt-in flags (kotlinNotNullRecordAttributes et al.) that don't apply to derived columns. Foundations ships dedicated Kotlin and Scala wrappers with native collections, T? in Kotlin, and Option[T] in Scala: idiomatic by default, no flags required.

Database portability
  Foundations: database-specific · jOOQ: multi-dialect DSL · Hibernate: HQL, dialect leaks at runtime · JDBI: raw SQL, portable until it isn't · JdbcTemplate: raw SQL, portable until it isn't · Exposed: DSL with dialect-specific extensions
  Note: in Foundations, type references are explicit and searchable: find all PgTypes. and replace with MariaTypes., then run Query Analysis to verify every query against the new database at test time. More manual than hoping an abstraction holds, but nothing slips through unchecked.

Type model
  Foundations: every database type · jOOQ: every database type · Hibernate: Java types only · JDBI: basic + custom · JdbcTemplate: basic Java types · Exposed: Kotlin types + custom

Composites, arrays, ranges
  Foundations: first-class · jOOQ: partial · Hibernate: partial · JDBI: manual mapping · JdbcTemplate: raw JDBC only · Exposed: partial
  Note: jOOQ has first-class PostgreSQL array support; PostgreSQL and Oracle composite UDTs are generated by codegen; PostgreSQL ranges are available via the jooq-postgres-extensions module (not the core DSL); other vendor-specific types (hstore, geometric) typically require custom bindings. Hibernate 6.2+ has @Struct for composites and built-in basic array mapping; ranges still need third-party libraries (Hypersistence Utils). Exposed has built-in array support; ranges and composite types require custom ColumnType implementations.

Reflection
  Foundations: none · jOOQ: DSL-only · Hibernate: heavy · JDBI: moderate · JdbcTemplate: none (manual mapper) · Exposed: DAO layer
  Note: reflection affects GraalVM native-image compatibility, startup time, and debuggability; libraries using runtime proxies or bytecode generation require additional configuration for native compilation. jOOQ's DSL runs without reflection, but the default record-to-POJO mapper (DefaultRecordMapper) uses reflection for constructor and setter lookup, and GraalVM native-image requires reflection configuration for jOOQ internals and generated classes.

Query type checking
  Foundations: at test time (hand-written SQL) · jOOQ: DSL only (compile) · Hibernate: opt-in · JDBI: no · JdbcTemplate: no · Exposed: DSL only (compile)
  Note: jOOQ's compile-time checking applies to its DSL; queries written as plain SQL strings (DSL.sql(...), .fetch(String), or resultQuery) are not type-checked, and jOOQ's SQL parser can parse and transform SQL strings but does not validate them against the schema. Hibernate's @CheckHQL (6.3+) validates HQL at compile time against the entity metamodel, not the database schema, and is not enabled by default. Foundations validates hand-written SQL against the real database schema at test time.

Type-safe nullable columns
  Foundations: Optional<T> / T? / Option[T] · jOOQ: not type-safe · Hibernate: @Column(nullable) · JDBI: manual null checks · JdbcTemplate: manual null checks · Exposed: T? in Kotlin
  Note: jOOQ's Field<T> is not null-aware: nullable and non-null columns share the same type, and record.getValue(field) returns plain T, possibly null at runtime regardless of the schema. Java has no Optional<T> codegen; JSR-305 @Nullable annotations can be emitted as an opt-in. The Kotlin generator can emit non-null Kotlin types for NOT NULL columns via kotlinNotNullRecordAttributes / kotlinNotNullPojoAttributes, but these are off by default and don't help with derived columns (LEFT JOIN, UNION, DEFAULT, IDENTITY). Scala generators do not map nullable columns to Option[T].

Code generation
  Foundations: coming soon · jOOQ: mature, schema-driven · Hibernate: reverse engineering · JDBI: not supported · JdbcTemplate: not supported · Exposed: Gradle plugin
  Note: jOOQ Open Source Edition (Apache-2.0) supports PostgreSQL, MySQL/MariaDB, SQLite, H2, Derby, HSQLDB, Firebird, DuckDB, YugabyteDB, Trino, and ClickHouse; commercial licenses (Express / Professional / Enterprise, per-developer) are required for Oracle, SQL Server, and CockroachDB (Express+), Redshift (Professional+), and DB2, Sybase, Snowflake, Teradata, Vertica, HANA, Exasol, BigQuery, and Databricks (Enterprise only). Hibernate Tools generates entity classes from database schemas. The official JetBrains plugin generates Exposed table definitions from database schemas.

Ready to try it?

Foundations JDBC is open source, MIT-licensed, and ready to use today.

Coming soon

World-class codegen with a SQL DSL

Generate all the RowCodecs, type definitions, and repository scaffolding you see above, directly from your database schema. Write queries in a type-safe SQL DSL that composes like the language it's embedded in.

A native PostgreSQL driver for the JVM

We've been working on something that fundamentally changes what's possible with PostgreSQL on the JVM. It bypasses JDBC entirely, speaks the PostgreSQL wire protocol directly, and unlocks a class of optimizations that no connection pool or driver can offer today. The same Fragments, RowCodecs, and Operations you write today will run on it without changing a line of code.