The Problem with JDBC
JDBC is notoriously difficult to use correctly. The API is verbose, error-prone, and makes it almost impossible to handle all column types properly.
What goes wrong
- Null handling — Was that column nullable? Did you remember wasNull()?
- Type conversions — Does getObject() return what you expect? Often not.
- Resource management — Did you close that ResultSet? Statement? Connection?
- Database differences — Code that works on PostgreSQL may silently corrupt data on Oracle.
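For the flavor of it, here is what those failure modes look like in plain java.sql code — a sketch only, nothing library-specific, assuming a Connection conn and a String id in scope:
// Raw JDBC: every step is a chance to get it wrong
try (PreparedStatement ps = conn.prepareStatement(
        "SELECT price, tags FROM product WHERE id = ?")) {
    ps.setString(1, id);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            double price = rs.getDouble("price"); // 0.0 for SQL NULL — only wasNull() can tell
            if (rs.wasNull()) { /* easy to forget entirely */ }
            Object tags = rs.getObject("tags");   // driver-specific java.sql.Array, not String[]
        }
    } // drop either try-with-resources and the ResultSet or Statement leaks
}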
What we built
- Type-safe database types — Every column type modeled correctly for each database.
- Full roundtrip support — Read a value, write it back — no loss, no corruption.
- Automatic resource management — Connections, statements, and result sets managed for you.
- Multi-database — PostgreSQL, MariaDB, DuckDB, Oracle, SQL Server, and DB2.
What this is
A complete database library built on functional principles. Not the lowest common denominator — the full power of your database, with the safety and composability of functional programming.
Not an ORM
No entity manager, no session, no lazy loading, no surprises. You write SQL, you get typed results. That's it.
Queries are values
Fragments and row parsers are immutable values you compose, pass around, and run when you're ready. No type-class machinery required — just functions and values.
Full roundtrip fidelity
Read a value from the database and write it back without loss or corruption. Every type is modeled exactly as the database defines it.
No reflection, no magic
Zero reflection, zero bytecode generation, zero annotation processing. Works with GraalVM native-image out of the box. You can read every line of what runs.
Composable
Row parsers compose. Join two parsers for a joined query. Left join gives you Optional on the right side. Fragments compose with and(), or(), whereAnd(). It's just functions.
Java, Kotlin, Scala
Core library in Java. Kotlin gets nullable types natively. Scala gets Option types and string interpolation. Same concepts, idiomatic in each language.
Start with your schema
Take a PostgreSQL table that uses a composite type, arrays, and jsonb. This is the running example for everything below.
CREATE TYPE dimensions AS (
width double precision,
height double precision,
depth double precision,
unit varchar(10)
);
CREATE TABLE product (
id serial PRIMARY KEY,
name text NOT NULL,
price numeric(10,2) NOT NULL,
tags text[],
dimensions dimensions,
metadata jsonb,
created_at timestamptz DEFAULT now()
);
- Java
- Kotlin
- Scala
record ProductRow(
ProductId id,
String name,
BigDecimal price,
Optional<String[]> tags, // text[]
Optional<Dimensions> dimensions, // composite type
Optional<Jsonb> metadata, // jsonb
Optional<OffsetDateTime> createdAt // timestamptz
) {}
data class ProductRow(
val id: ProductId,
val name: String,
val price: BigDecimal,
val tags: Array<String>?, // text[]
val dimensions: Dimensions?, // composite type
val metadata: Jsonb?, // jsonb
val createdAt: OffsetDateTime? // timestamptz
)
case class ProductRow(
id: ProductId,
name: String,
price: BigDecimal,
tags: Option[Array[String]], // text[]
dimensions: Option[Dimensions], // composite type
metadata: Option[Jsonb], // jsonb
createdAt: Option[OffsetDateTime] // timestamptz
)
Every column has a type
The RowParser maps each column to a DbType that knows exactly how to read and write its value. No getObject() guessing, no wasNull() checking. Parsers compose for joins — left join gives you Optional on the right side.
- Java
- Kotlin
- Scala
// The RowParser defines how to read and write every column
static RowParser<ProductRow> rowParser = RowParsers.of(
ProductId.pgType, // id
PgTypes.text, // name
PgTypes.numeric, // price
PgTypes.textArray.opt(), // tags: text[]
Dimensions.pgType.opt(), // dimensions: composite
PgTypes.jsonb.opt(), // metadata: jsonb
PgTypes.timestamptz.opt(), // created_at: timestamptz
ProductRow::new,
p -> new Object[]{p.id(), p.name(), p.price(), p.tags(),
p.dimensions(), p.metadata(), p.createdAt()}
);
// Compose parsers for joins
RowParser<And<ProductRow, Optional<CategoryRow>>> joined =
ProductRow.rowParser.leftJoined(CategoryRow.rowParser);
// The RowParser defines how to read and write every column
val rowParser: RowParser<ProductRow> = RowParsers.of(
ProductId.pgType, // id
PgTypes.text, // name
PgTypes.numeric, // price
PgTypes.textArray.opt(), // tags: text[]
Dimensions.pgType.opt(), // dimensions: composite
PgTypes.jsonb.opt(), // metadata: jsonb
PgTypes.timestamptz.opt(), // created_at: timestamptz
::ProductRow,
{ p -> arrayOf(p.id, p.name, p.price, p.tags,
p.dimensions, p.metadata, p.createdAt) }
)
// Compose parsers for joins
val joined: RowParser<And<ProductRow, CategoryRow?>> =
ProductRow.rowParser.leftJoined(CategoryRow.rowParser)
// The RowParser defines how to read and write every column
val rowParser: RowParser[ProductRow] = RowParsers.of(
ProductId.pgType, // id
PgTypes.text, // name
PgTypes.numeric, // price
PgTypes.textArray.opt(), // tags: text[]
Dimensions.pgType.opt(), // dimensions: composite
PgTypes.jsonb.opt(), // metadata: jsonb
PgTypes.timestamptz.opt(), // created_at: timestamptz
)(ProductRow.apply)(
p => Array(p.id, p.name, p.price, p.tags,
p.dimensions, p.metadata, p.createdAt)
)
// Compose parsers for joins
val joined: RowParser[And[ProductRow, Option[CategoryRow]]] =
ProductRow.rowParser.leftJoined(CategoryRow.rowParser)
Composite types become real types
The dimensions composite type doesn't become a string or a map — it becomes a record with typed fields. PgStruct handles PostgreSQL's composite wire format with typed field builders, and gives you a PgType you can use anywhere.
- Java
- Kotlin
- Scala
// The composite type becomes a record with its own PgStruct codec
record Dimensions(
Double width, Double height, Double depth, String unit
) {}
// PgStruct handles PostgreSQL's composite wire format
static PgStruct<Dimensions> pgStruct = PgStruct.<Dimensions>builder("dimensions")
.doubleField("width", PgTypes.float8, Dimensions::width)
.doubleField("height", PgTypes.float8, Dimensions::height)
.doubleField("depth", PgTypes.float8, Dimensions::depth)
.stringField("unit", PgTypes.varchar, Dimensions::unit)
.build(arr -> new Dimensions(
(Double) arr[0], (Double) arr[1],
(Double) arr[2], (String) arr[3]));
// Use as a PgType anywhere — row parsers, arrays, JSON
static PgType<Dimensions> pgType = pgStruct.asType();
// The composite type becomes a data class with its own PgStruct codec
data class Dimensions(
val width: Double, val height: Double,
val depth: Double, val unit: String
)
// PgStruct handles PostgreSQL's composite wire format
val pgStruct: PgStruct<Dimensions> = PgStruct.builder<Dimensions>("dimensions")
.doubleField("width", PgTypes.float8, Dimensions::width)
.doubleField("height", PgTypes.float8, Dimensions::height)
.doubleField("depth", PgTypes.float8, Dimensions::depth)
.stringField("unit", PgTypes.varchar, Dimensions::unit)
.build { arr -> Dimensions(
arr[0] as Double, arr[1] as Double,
arr[2] as Double, arr[3] as String) }
// Use as a PgType anywhere — row parsers, arrays, JSON
val pgType: PgType<Dimensions> = pgStruct.asType()
// The composite type becomes a case class with its own PgStruct codec
case class Dimensions(
width: Double, height: Double,
depth: Double, unit: String
)
// PgStruct handles PostgreSQL's composite wire format
val pgStruct: PgStruct[Dimensions] = PgStruct.builder[Dimensions]("dimensions")
.doubleField("width", PgTypes.float8, _.width)
.doubleField("height", PgTypes.float8, _.height)
.doubleField("depth", PgTypes.float8, _.depth)
.stringField("unit", PgTypes.varchar, _.unit)
.build(arr => Dimensions(
arr(0).asInstanceOf[Double], arr(1).asInstanceOf[Double],
arr(2).asInstanceOf[Double], arr(3).asInstanceOf[String]))
// Use as a PgType anywhere — row parsers, arrays, JSON
val pgType: PgType[Dimensions] = pgStruct.asType()
Wrapper types that work everywhere
A ProductId is just a String underneath, but the type system keeps them apart. Call bimap on the base type with a constructor and an extractor — you get a full codec that works in row parsers, arrays, JSON, and composite types. All of them. Guaranteed.
- Java
- Kotlin
- Scala
record ProductId(String value) {
static PgType<ProductId> pgType =
PgTypes.text.bimap(ProductId::new, ProductId::value);
static PgType<ProductId[]> pgTypeArray =
PgTypes.textArray.bimap(
xs -> arrayMap.map(xs, ProductId::new, ProductId.class),
xs -> arrayMap.map(xs, ProductId::value, String.class));
}
data class ProductId(val value: String) {
companion object {
val pgType: PgType<ProductId> =
PgTypes.text.bimap(::ProductId, ProductId::value)
val pgTypeArray: PgType<Array<ProductId>> =
PgTypes.textArray.bimap(
{ xs -> arrayMap.map(xs, ::ProductId, ProductId::class.java) },
{ xs -> arrayMap.map(xs, ProductId::value, String::class.java) })
}
}
case class ProductId(value: String) extends AnyVal
object ProductId:
given pgType: PgType[ProductId] =
PgTypes.text.bimap(ProductId.apply, _.value)
given pgTypeArray: PgType[Array[ProductId]] =
PgTypes.textArray.bimap(
_.map(ProductId.apply),
_.map(_.value))
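The same bimapped codec carries the JSON support described under "JSON codecs" below — a minimal Java sketch, assuming the ProductId defined above:
// The wrapper's PgType exposes the built-in JSON codec too
JsonValue json = ProductId.pgType.json().toJson(new ProductId("p-42"));
ProductId back = ProductId.pgType.json().fromJson(json); // back.value() is "p-42"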
Arrays without the pain
Passing arrays to JDBC normally means createArrayOf, type name strings, and a connection reference just to build the parameter. Here you just pass ProductId[] directly — the codec from bimap handles the rest.
- Java
- Kotlin
- Scala
List<ProductRow> selectByIds(ProductId[] ids, Connection c) {
return Fragment.interpolate(
Fragment.lit("SELECT * FROM product WHERE id = ANY("),
Fragment.encode(ProductId.pgTypeArray, ids),
Fragment.lit(")")
).query(ProductRow.rowParser.all()).runUnchecked(c);
}
fun selectByIds(ids: Array<ProductId>, c: Connection): List<ProductRow> =
Fragment.interpolate(
Fragment.lit("SELECT * FROM product WHERE id = ANY("),
Fragment.encode(ProductId.pgTypeArray, ids),
Fragment.lit(")")
).query(ProductRow.rowParser.all()).runUnchecked(c)
import dev.typr.foundations.scala.FragmentInterpolator.sql
def selectByIds(ids: Array[ProductId])
(using c: Connection): List[ProductRow] =
sql"SELECT * FROM product WHERE id = ANY(${Fragment.encode(ProductId.pgTypeArray, ids)})"
.query(ProductRow.rowParser.all()).runUnchecked(c)
Queries are values you compose
Build fragments, combine them, pass them to functions, return them from functions. Parameters are always bound and typed. Run when you're ready — against a connection or a Transactor that manages the lifecycle for you.
- Java
- Kotlin
- Scala
// Build small reusable filters
Fragment byName(String name) {
return Fragment.of("name ILIKE ?").param(PgTypes.text, name);
}
Fragment cheaperThan(BigDecimal max) {
return Fragment.of("price < ?").param(PgTypes.numeric, max);
}
Fragment amongTags(String tag) {
return Fragment.of("tags @> ?").param(PgTypes.textArray, new String[]{tag});
}
Fragment createdAfter(OffsetDateTime date) {
return Fragment.of("created_at > ?").param(PgTypes.timestamptz, date);
}
// Compose them dynamically — only include the filters that are present
List<Fragment> filters = Stream.of(
Optional.of(byName("%widget%")),
maxPrice.map(this::cheaperThan),
tag.map(this::amongTags),
since.map(this::createdAfter)
)
.flatMap(Optional::stream)
.toList();
List<ProductRow> products = Fragment.of("SELECT * FROM product ")
.append(Fragment.whereAnd(filters))
.query(ProductRow.rowParser.list())
.run(tx);
// Build small reusable filters
fun byName(name: String) =
Fragment.of("name ILIKE ?").param(PgTypes.text, name)
fun cheaperThan(max: BigDecimal) =
Fragment.of("price < ?").param(PgTypes.numeric, max)
fun amongTags(tag: String) =
Fragment.of("tags @> ?").param(PgTypes.textArray, arrayOf(tag))
fun createdAfter(date: OffsetDateTime) =
Fragment.of("created_at > ?").param(PgTypes.timestamptz, date)
// Compose them dynamically — only include the filters that are present
val filters = listOfNotNull(
byName("%widget%"),
maxPrice?.let { cheaperThan(it) },
tag?.let { amongTags(it) },
since?.let { createdAfter(it) }
)
val products: List<ProductRow> = Fragment.of("SELECT * FROM product ")
.append(Fragment.whereAnd(filters))
.query(ProductRow.rowParser.list())
.run(tx)
import dev.typr.foundations.scala.FragmentInterpolator.sql
// Build small reusable filters
def byName(name: String) =
sql"name ILIKE ${PgTypes.text.encode(name)}"
def cheaperThan(max: BigDecimal) =
sql"price < ${PgTypes.numeric.encode(max)}"
def amongTags(tag: String) =
sql"tags @> ${PgTypes.textArray.encode(Array(tag))}"
def createdAfter(date: OffsetDateTime) =
sql"created_at > ${PgTypes.timestamptz.encode(date)}"
// Compose them dynamically — only include the filters that are present
val filters: List[Fragment] = List(
Some(byName("%widget%")),
maxPrice.map(cheaperThan),
tag.map(amongTags),
since.map(createdAfter)
).flatten
val products: List[ProductRow] =
sql"SELECT * FROM product ${Fragment.whereAnd(filters)}"
.query(ProductRow.rowParser.list())
.run(tx)
Transactions you can see
No @Transactional annotation deciding your transaction boundaries somewhere else. No implicit session flushing at unpredictable times. The Transactor makes the lifecycle explicit: before, after, oops, always — four hooks you control.
- Java
- Kotlin
- Scala
// The Transactor manages connections and transactions
// You choose the strategy — it handles the lifecycle
var tx = connectionSource.transactor(Transactor.defaultStrategy());
// Everything inside runs in one transaction: begin, commit, close
List<ProductRow> products = tx.execute(conn ->
Fragment.of("SELECT * FROM product WHERE price > ?")
.param(PgTypes.numeric, minPrice)
.query(ProductRow.rowParser.list())
.runUnchecked(conn)
);
// Built-in strategies for common patterns
Transactor.defaultStrategy() // begin → commit → close
Transactor.autoCommitStrategy() // no transaction, just close
Transactor.rollbackOnErrorStrategy() // begin → commit, rollback on error → close
Transactor.testStrategy() // begin → rollback → close (for tests)
// Or define your own with explicit hooks
new Transactor.Strategy(
conn -> conn.setAutoCommit(false), // before
Connection::commit, // after (success)
throwable -> { /* handle error */ }, // oops
Connection::close // always (finally)
);
// The Transactor manages connections and transactions
// You choose the strategy — it handles the lifecycle
val tx = connectionSource.transactor(Transactor.defaultStrategy())
// Everything inside runs in one transaction: begin, commit, close
val products: List<ProductRow> = tx.execute { conn ->
Fragment.of("SELECT * FROM product WHERE price > ?")
.param(PgTypes.numeric, minPrice)
.query(ProductRow.rowParser.list())
.runUnchecked(conn)
}
// Built-in strategies for common patterns
Transactor.defaultStrategy() // begin → commit → close
Transactor.autoCommitStrategy() // no transaction, just close
Transactor.rollbackOnErrorStrategy() // begin → commit, rollback on error → close
Transactor.testStrategy() // begin → rollback → close (for tests)
// Or define your own with explicit hooks
Transactor.Strategy(
{ conn -> conn.autoCommit = false }, // before
{ conn -> conn.commit() }, // after (success)
{ throwable -> /* handle error */ }, // oops
{ conn -> conn.close() } // always (finally)
)
// The Transactor manages connections and transactions
// You choose the strategy — it handles the lifecycle
val tx = connectionSource.transactor(Transactor.defaultStrategy())
// Everything inside runs in one transaction: begin, commit, close
val products: List[ProductRow] = tx.execute(conn =>
sql"SELECT * FROM product WHERE price > ${PgTypes.numeric.encode(minPrice)}"
.query(ProductRow.rowParser.list())
.runUnchecked(conn)
)
// Built-in strategies for common patterns
Transactor.defaultStrategy() // begin → commit → close
Transactor.autoCommitStrategy() // no transaction, just close
Transactor.rollbackOnErrorStrategy() // begin → commit, rollback on error → close
Transactor.testStrategy() // begin → rollback → close (for tests)
// Or define your own with explicit hooks
Transactor.Strategy(
conn => conn.setAutoCommit(false), // before
conn => conn.commit(), // after (success)
throwable => (), // oops
conn => conn.close() // always (finally)
)
Clear error messages
When things go wrong, you get helpful messages that tell you exactly what happened — not a cryptic stack trace.
JSON codecs
Every database type includes a JSON codec. No Jackson, Gson, or other dependencies required — just a built-in JsonValue sealed interface.
// Every type has a built-in JSON codec
PgType<Integer> intType = PgTypes.int4;
JsonValue json = intType.json().toJson(42);
Integer value = intType.json().fromJson(json);
// Works for complex types too
PgType<int[]> arrayType = PgTypes.int4ArrayUnboxed;
JsonValue arrayJson = arrayType.json()
.toJson(new int[]{1, 2, 3});
// Parse nested data from JSON
JsonRowType<Email> emailJsonType =
JsonRowType.of(emailParser, List.of("id", "email"));
// Use it like any other DbType
PgType<List<Email>> emailListType =
emailJsonType.pgList();
// Works on every database:
// emailJsonType.mariaList()
// emailJsonType.oracleList()
// emailJsonType.duckDbList()
// emailJsonType.sqlServerList()
Streaming inserts (PostgreSQL)
Insert large datasets without loading everything into memory using PostgreSQL's COPY protocol. The PgText codec encodes each row to the COPY wire format — tabs, escaping, nulls, all handled correctly.
- Java
- Kotlin
- Scala
// Stream millions of product records via PostgreSQL COPY protocol
Iterator<ProductRow> products = loadProductsFromFile();
long inserted = streamingInsert.insertUnchecked(
"COPY product(id, name, price, tags, dimensions, metadata, created_at) FROM STDIN",
1000, // batch size
products,
connection,
ProductRow.pgText // PgText<ProductRow> encodes to COPY format
);
// Stream millions of product records via PostgreSQL COPY protocol
val products: Iterator<ProductRow> = loadProductsFromFile()
val inserted: Long = streamingInsert.insertUnchecked(
"COPY product(id, name, price, tags, dimensions, metadata, created_at) FROM STDIN",
1000, // batch size
products,
connection,
ProductRow.pgText // PgText<ProductRow> encodes to COPY format
)
// Stream millions of product records via PostgreSQL COPY protocol
val products: Iterator[ProductRow] = loadProductsFromFile()
val inserted: Long = streamingInsert.insertUnchecked(
"COPY product(id, name, price, tags, dimensions, metadata, created_at) FROM STDIN",
1000, // batch size
products.asJava,
connection,
ProductRow.pgText // PgText[ProductRow] encodes to COPY format
)
No reflection
The entire library is reflection-free. All type information is preserved at compile time.
GraalVM native-image
Build native executables with instant startup. No reflection configuration needed.
ProGuard / R8
Full minification and optimization support. No keep rules for reflection targets.
Static analysis
Complete visibility into code paths. Tools can trace every call without dead ends.
Six databases, full type fidelity
This example used PostgreSQL, but the same approach works across all supported databases. Full roundtrip fidelity for every type each one supports.
PostgreSQL
DuckDB
Oracle
MariaDB
SQL Server
DB2
More databases coming soon — the architecture is designed to make adding new ones straightforward.
No more JDBC URL archaeology
Typed builders for every database — PostgreSQL, MariaDB, Oracle, SQL Server, DuckDB, DB2. Every driver property has a real method with documentation. SSL modes are enums, not strings you hope are spelled right. Go from config to connection pool to transactor in a few lines.
- Java
- Kotlin
- Scala
// Typed config — no JDBC URL to remember
var config = PostgresConfig.builder("localhost", 5432, "mydb", "user", "pass")
.sslmode(PgSslMode.REQUIRE)
.reWriteBatchedInserts(true)
.build();
// For scripts and tests — simple, non-pooled
var tx = config.transactor(Transactor.defaultStrategy());
// For production — HikariCP connection pool
var pool = PooledDataSource.create(config,
ConnectionSettings.builder()
.transactionIsolation(TransactionIsolation.READ_COMMITTED)
.build(),
PoolConfig.builder()
.maximumPoolSize(20)
.idleTimeout(Duration.ofMinutes(10))
.build());
var tx = pool.transactor();
// Typed config — no JDBC URL to remember
val config = PostgresConfig.builder("localhost", 5432, "mydb", "user", "pass")
.sslmode(PgSslMode.REQUIRE)
.reWriteBatchedInserts(true)
.build()
// For scripts and tests — simple, non-pooled
val tx = config.transactor(Transactor.defaultStrategy())
// For production — HikariCP connection pool
val pool = PooledDataSource.create(config,
ConnectionSettings.builder()
.transactionIsolation(TransactionIsolation.READ_COMMITTED)
.build(),
PoolConfig.builder()
.maximumPoolSize(20)
.idleTimeout(Duration.ofMinutes(10))
.build())
val tx = pool.transactor()
// Typed config — no JDBC URL to remember
val config = PostgresConfig.builder("localhost", 5432, "mydb", "user", "pass")
.sslmode(PgSslMode.REQUIRE)
.reWriteBatchedInserts(true)
.build()
// For scripts and tests — simple, non-pooled
val tx = config.transactor(Transactor.defaultStrategy())
// For production — HikariCP connection pool
val pool = PooledDataSource.create(config,
ConnectionSettings.builder()
.transactionIsolation(TransactionIsolation.READ_COMMITTED)
.build(),
PoolConfig.builder()
.maximumPoolSize(20)
.idleTimeout(Duration.ofMinutes(10))
.build())
val tx = pool.transactor()
Designed for code generation
We obsessed over eliminating edge cases and making every API behave consistently across all databases and all types. No special cases, no surprising behavior, no workarounds. This makes Foundations JDBC a premier target for code generation.
All the code you've seen above — row parsers, fragments, transactors, streaming inserts — can be generated automatically from your database schema by Typr, our type-safe database code generator for the JVM.