
A functional JDBC library for the JVM

What if JDBC just worked the way you think it should?

Every type your database has, as a real typed value. Queries that compose. Transactions you control. No annotations, no reflection, no surprises. Just the database library you've been looking for.

The Problem with JDBC

JDBC is notoriously difficult to use correctly. The API is verbose, error-prone, and makes it almost impossible to handle all column types properly.

What goes wrong

  • Null handling — Was that column nullable? Did you remember wasNull()?
  • Type conversions — Does getObject() return what you expect? Often not.
  • Resource management — Did you close that ResultSet? Statement? Connection?
  • Database differences — Code that works on PostgreSQL may silently corrupt data on Oracle.
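
For contrast, here is a minimal sketch of what hand-written JDBC demands for a single nullable column. The table, column, and variable names are illustrative, and the surrounding Connection handling is omitted.

// Plain JDBC, by hand: one nullable double-precision column (illustrative names)
try (PreparedStatement st = connection.prepareStatement(
        "SELECT discount FROM order_line WHERE id = ?")) {
    st.setLong(1, orderLineId);
    try (ResultSet rs = st.executeQuery()) {
        if (rs.next()) {
            double discount = rs.getDouble("discount");            // returns 0.0 when the column is NULL
            Double maybeDiscount = rs.wasNull() ? null : discount; // unless you remember this check
        }
    }
}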

What we built

  • Type-safe database types — Every column type modeled correctly for each database.
  • Full roundtrip support — Read a value, write it back — no loss, no corruption.
  • Automatic resource management — Connections, statements, and result sets managed for you.
  • Multi-database — PostgreSQL, MariaDB, DuckDB, Oracle, SQL Server, and DB2.

What this is

A complete database library built on functional principles. Not the lowest common denominator — the full power of your database, with the safety and composability of functional programming.

Not an ORM

No entity manager, no session, no lazy loading, no surprises. You write SQL, you get typed results. That's it.
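
As a first taste, a minimal sketch that combines the Fragment, rowParser, and Transactor (tx) APIs shown further down:

// One SQL string in, a typed List<ProductRow> out
List<ProductRow> products = Fragment.of("SELECT * FROM product")
    .query(ProductRow.rowParser.list())
    .run(tx);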

Queries are values

Fragments and row parsers are immutable values you compose, pass around, and run when you're ready. No type-class machinery required — just functions and values.

Full roundtrip fidelity

Read a value from the database and write it back without loss or corruption. Every type is modeled exactly as the database defines it.

No reflection, no magic

Zero reflection, zero bytecode generation, zero annotation processing. Works with GraalVM native-image out of the box. You can read every line of what runs.

Composable

Row parsers compose. Join two parsers for a joined query. Left join gives you Optional on the right side. Fragments compose with and(), or(), whereAnd(). It's just functions.

Java, Kotlin, Scala

Core library in Java. Kotlin gets nullable types natively. Scala gets Option types and string interpolation. Same concepts, idiomatic in each language.

Start with your schema

Take a PostgreSQL table that uses a composite type, arrays, and jsonb. This is the running example for everything below.

Your database schema
CREATE TYPE dimensions AS (
  width  double precision,
  height double precision,
  depth  double precision,
  unit   varchar(10)
);

CREATE TABLE product (
  id         serial PRIMARY KEY,
  name       text NOT NULL,
  price      numeric(10,2) NOT NULL,
  tags       text[],
  dimensions dimensions,
  metadata   jsonb,
  created_at timestamptz DEFAULT now()
);
How you represent it
record ProductRow(
    ProductId id,
    String name,
    BigDecimal price,
    Optional<String[]> tags,             // text[]
    Optional<Dimensions> dimensions,     // composite type
    Optional<Jsonb> metadata,            // jsonb
    Optional<OffsetDateTime> createdAt   // timestamptz
) {}

Every column has a type

The RowParser maps each column to a DbType that knows exactly how to read and write its value. No getObject() guessing, no wasNull() checking. Parsers compose for joins — left join gives you Optional on the right side.

// The RowParser defines how to read and write every column
static RowParser<ProductRow> rowParser = RowParsers.of(
    ProductId.pgType,            // id
    PgTypes.text,                // name
    PgTypes.numeric,             // price
    PgTypes.textArray.opt(),     // tags: text[]
    Dimensions.pgType.opt(),     // dimensions: composite
    PgTypes.jsonb.opt(),         // metadata: jsonb
    PgTypes.timestamptz.opt(),   // created_at: timestamptz
    ProductRow::new,
    p -> new Object[]{p.id(), p.name(), p.price(), p.tags(),
                      p.dimensions(), p.metadata(), p.createdAt()}
);

// Compose parsers for joins
RowParser<And<ProductRow, Optional<CategoryRow>>> joined =
    ProductRow.rowParser.leftJoined(CategoryRow.rowParser);
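
Running the composed parser end to end might look like the following sketch, assuming a CategoryRow with its own rowParser, a category_id column on product, and a Transactor tx as introduced below:

// Left join: rows come back typed, with the right side as Optional<CategoryRow>
List<And<ProductRow, Optional<CategoryRow>>> rows =
    Fragment.of("SELECT p.*, c.* FROM product p " +
                "LEFT JOIN category c ON c.id = p.category_id")
        .query(joined.list())
        .run(tx);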

Composite types become real types

The dimensions composite type doesn't become a string or a map — it becomes a record with typed fields. PgStruct handles PostgreSQL's composite wire format with typed field builders, and gives you a PgType you can use anywhere.

// The composite type becomes a record with its own PgStruct codec
record Dimensions(
    Double width, Double height, Double depth, String unit
) {}

// PgStruct handles PostgreSQL's composite wire format
static PgStruct<Dimensions> pgStruct = PgStruct.<Dimensions>builder("dimensions")
    .doubleField("width", PgTypes.float8, Dimensions::width)
    .doubleField("height", PgTypes.float8, Dimensions::height)
    .doubleField("depth", PgTypes.float8, Dimensions::depth)
    .stringField("unit", PgTypes.varchar, Dimensions::unit)
    .build(arr -> new Dimensions(
        (Double) arr[0], (Double) arr[1],
        (Double) arr[2], (String) arr[3]));

// Use as a PgType anywhere — row parsers, arrays, JSON
static PgType<Dimensions> pgType = pgStruct.asType();

Wrapper types that work everywhere

A ProductId is just a String underneath, but the type system keeps them apart. Call bimap on the base type with a constructor and an extractor — you get a full codec that works in row parsers, arrays, JSON, and composite types. All of them. Guaranteed.

record ProductId(String value) {
    static PgType<ProductId> pgType =
        PgTypes.text.bimap(ProductId::new, ProductId::value);

    static PgType<ProductId[]> pgTypeArray =
        PgTypes.textArray.bimap(
            xs -> arrayMap.map(xs, ProductId::new, ProductId.class),
            xs -> arrayMap.map(xs, ProductId::value, String.class));
}

Arrays without the pain

Passing arrays to JDBC normally means createArrayOf, type name strings, and a connection reference just to build the parameter. Here you just pass ProductId[] directly — the codec from bimap handles the rest.

List<ProductRow> selectByIds(ProductId[] ids, Connection c) {
    return Fragment.interpolate(
        Fragment.lit("SELECT * FROM product WHERE id = ANY("),
        Fragment.encode(ProductId.pgTypeArray, ids),
        Fragment.lit(")")
    ).query(ProductRow.rowParser.all()).runUnchecked(c);
}

Queries are values you compose

Build fragments, combine them, pass them to functions, return them from functions. Parameters are always bound and typed. Run when you're ready — against a connection or a Transactor that manages the lifecycle for you.

// Build small reusable filters
Fragment byName(String name) {
    return Fragment.of("name ILIKE ?").param(PgTypes.text, name);
}
Fragment cheaperThan(BigDecimal max) {
    return Fragment.of("price < ?").param(PgTypes.numeric, max);
}
Fragment amongTags(String tag) {
    return Fragment.of("tags @> ?").param(PgTypes.textArray, new String[]{tag});
}
Fragment createdAfter(OffsetDateTime date) {
    return Fragment.of("created_at > ?").param(PgTypes.timestamptz, date);
}

// Compose them dynamically — only include the filters that are present
List<Fragment> filters = Stream.of(
        Optional.of(byName("%widget%")),
        maxPrice.map(this::cheaperThan),
        tag.map(this::amongTags),
        since.map(this::createdAfter)
    )
    .flatMap(Optional::stream)
    .toList();

List<ProductRow> products = Fragment.of("SELECT * FROM product ")
    .append(Fragment.whereAnd(filters))
    .query(ProductRow.rowParser.list())
    .run(tx);

Transactions you can see

No @Transactional annotation deciding your transaction boundaries somewhere else. No implicit session flushing at unpredictable times. The Transactor makes the lifecycle explicit: before, after, oops, always — four hooks you control.

// The Transactor manages connections and transactions
// You choose the strategy — it handles the lifecycle
var tx = connectionSource.transactor(Transactor.defaultStrategy());

// Everything inside runs in one transaction: begin, commit, close
List<ProductRow> products = tx.execute(conn ->
    Fragment.of("SELECT * FROM product WHERE price > ?")
        .param(PgTypes.numeric, minPrice)
        .query(ProductRow.rowParser.list())
        .runUnchecked(conn)
);

// Built-in strategies for common patterns
Transactor.defaultStrategy() // begin → commit → close
Transactor.autoCommitStrategy() // no transaction, just close
Transactor.rollbackOnErrorStrategy() // begin → commit, rollback on error → close
Transactor.testStrategy() // begin → rollback → close (for tests)

// Or define your own with explicit hooks
new Transactor.Strategy(
    conn -> conn.setAutoCommit(false),   // before
    Connection::commit,                  // after (success)
    throwable -> { /* handle error */ }, // oops
    Connection::close                    // always (finally)
);

Clear Error Messages

When things go wrong, you get helpful messages that tell you exactly what happened — not a cryptic stack trace.

Column type mismatch at index 3 (name):
  Expected: varchar (PgTypes.text)
  Actual: integer

Row parsing failed: Expected 5 columns, got 4
  Missing column at index 4

Type conversion error: Cannot read column 'created_at' as OffsetDateTime
  Database type: timestamp without time zone
  Hint: Use PgTypes.timestamp instead of PgTypes.timestamptz

JSON Codecs

Every database type includes a JSON codec. No Jackson, Gson, or other dependencies required — just a built-in JsonValue sealed interface.

Built-in JSON for every type
// Every type has a built-in JSON codec
PgType<Integer> intType = PgTypes.int4;
JsonValue json = intType.json().toJson(42);
Integer value = intType.json().fromJson(json);

// Works for complex types too
PgType<int[]> arrayType = PgTypes.int4ArrayUnboxed;
JsonValue arrayJson = arrayType.json()
    .toJson(new int[]{1, 2, 3});
MULTISET-like nested queries
// Parse nested data from JSON
JsonRowType<Email> emailJsonType =
    JsonRowType.of(emailParser, List.of("id", "email"));

// Use it like any other DbType
PgType<List<Email>> emailListType =
    emailJsonType.pgList();

// Works on every database:
// emailJsonType.mariaList()
// emailJsonType.oracleList()
// emailJsonType.duckDbList()
// emailJsonType.sqlServerList()

Streaming Inserts (PostgreSQL)

Insert large datasets without loading everything into memory using PostgreSQL's COPY protocol. The PgText codec encodes each row to the COPY wire format — tabs, escaping, nulls, all handled correctly.

// Stream millions of product records via PostgreSQL COPY protocol
Iterator<ProductRow> products = loadProductsFromFile();

long inserted = streamingInsert.insertUnchecked(
    "COPY product(id, name, price, tags, dimensions, metadata, created_at) FROM STDIN",
    1000,               // batch size
    products,
    connection,
    ProductRow.pgText   // PgText<ProductRow> encodes to COPY format
);

No Reflection

The entire library is reflection-free. All type information is preserved at compile time.

GraalVM native-image

Build native executables with instant startup. No reflection configuration needed.

ProGuard / R8

Full minification and optimization support. No keep rules for reflection targets.

Static analysis

Complete visibility into code paths. Tools can trace every call without dead ends.

No more JDBC URL archaeology

Typed builders for every database — PostgreSQL, MariaDB, Oracle, SQL Server, DuckDB, DB2. Every driver property has a real method with documentation. SSL modes are enums, not strings you hope are spelled right. Go from config to connection pool to transactor in a few lines.

// Typed config — no JDBC URL to remember
var config = PostgresConfig.builder("localhost", 5432, "mydb", "user", "pass")
    .sslmode(PgSslMode.REQUIRE)
    .reWriteBatchedInserts(true)
    .build();

// For scripts and tests — simple, non-pooled
var tx = config.transactor(Transactor.defaultStrategy());

// For production — HikariCP connection pool
var pool = PooledDataSource.create(config,
    ConnectionSettings.builder()
        .transactionIsolation(TransactionIsolation.READ_COMMITTED)
        .build(),
    PoolConfig.builder()
        .maximumPoolSize(20)
        .idleTimeout(Duration.ofMinutes(10))
        .build());

var tx = pool.transactor();

Designed for code generation

We obsessed over eliminating edge cases and making every API behave consistently across all databases and all types. No special cases, no surprising behavior, no workarounds. This makes Foundations JDBC a premier target for code generation.

All the code you've seen above — row parsers, fragments, transactors, streaming inserts — can be generated automatically from your database schema by Typr, our type-safe database code generator for the JVM.