Quick start
DuckDB runs in-memory — no database server needed. A Fragment is a typed SQL building block.
import dev.typr.foundationskt.*
import dev.typr.foundationskt.connect.*
fun main() {
val tx = SingleConnectionDataSource.create(DuckDbConfig.inMemory().build()).transactor()
val answer: Int = sql { "SELECT 42" }
.queryExactlyOne(DuckDbTypes.integer)
.transact(tx)
println("Result: $answer")
}
- Gradle
- Maven
dependencies {
implementation("dev.typr:foundations-jdbc-kotlin:1.0.0-RC1")
// Add your driver
runtimeOnly("org.duckdb:duckdb_jdbc:1.1.3")
}
<dependency>
<groupId>dev.typr</groupId>
<artifactId>foundations-jdbc-kotlin</artifactId>
<version>1.0.0-RC1</version>
</dependency>
<dependency>
<groupId>org.duckdb</groupId>
<artifactId>duckdb_jdbc</artifactId>
<version>1.1.3</version>
</dependency>
What existing libraries still get wrong
ORMs and query builders solve the verbosity of raw JDBC. But fundamental problems remain — problems that surface in production as silent data corruption, runtime exceptions, and database lock-in.
What goes wrong
- Queries are unchecked strings — Rename a column in the schema and nothing fails until production. No library catches this at test time.
- Nullability is invisible — A nullable column and a non-nullable column have the same Java type. Nothing in the API tells you which columns can be null.
- Type fidelity is lost — ORMs map database types to a handful of Java primitives. Your NUMERIC(38,18) loses precision through double. Read a value, write it back — it's not the same value anymore.
- DB-specific features are second-class — Libraries target the lowest common denominator. PostgreSQL arrays, Oracle MULTISET, MariaDB unsigned types — all require escape hatches.
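The precision-loss point is easy to reproduce with nothing but the JDK: route a high-precision decimal through double, as a mapper treating NUMERIC(38,18) as a primitive would, and the roundtrip quietly returns a different value. A minimal, self-contained sketch:

```java
import java.math.BigDecimal;

public class PrecisionLoss {
    public static void main(String[] args) {
        // 18 fractional digits - fits NUMERIC(38,18) exactly
        BigDecimal original = new BigDecimal("0.123456789012345678");

        // What a double-based mapping does to it: double carries only
        // ~15-17 significant decimal digits, so the tail is rounded away
        double lossy = original.doubleValue();
        BigDecimal roundTripped = BigDecimal.valueOf(lossy);

        System.out.println("original:     " + original);
        System.out.println("roundtripped: " + roundTripped);
        // false - the value changed and no exception was thrown
        System.out.println("same value:   " + (original.compareTo(roundTripped) == 0));
    }
}
```

No error, no warning: the corruption is silent, which is exactly why it tends to surface in production rather than in tests.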
What we built
- Query Analysis catches bugs in tests — Validate every query against a real database in your test suite. Schema changes break tests, not production.
- Nullable means Optional — .opt() changes the return type to Optional/T?/Option[T]. If the type isn't optional, the column is guaranteed non-null.
- Every database type, modeled exactly — Not just primitives. Composite types, domains, enums, arrays, intervals — all first-class, with full roundtrip fidelity.
- Database-specific by design — Dedicated type classes for each database. PgTypes, OracleTypes, MariaDbTypes — use your database's full feature set.
Find SQL bugs at test time, not 2 AM
Query Analysis verifies your SQL against the actual database schema. Wrong column type? Missing .opt() on a nullable column? Parameter count mismatch? Catch it in tests, not in production.
- Kotlin
- Java
- Scala
val query: Operation.Query<List<User>> =
sql { """
SELECT id, name, created_at, email
FROM users
WHERE active = ${PgTypes.bool(true)}
""" }
.query(userCodec.all())
fun check() {
val analysis: QueryAnalysis =
QueryAnalyzer.analyze(query, connection).single()
if (!analysis.succeeded()) {
throw AssertionError(analysis.report())
}
}
// Your query looks fine at compile time...
Operation.Query<List<User>> query =
Fragment.of("""
SELECT id, name, created_at, email
FROM users WHERE active =
""")
.value(PgTypes.bool, true)
.query(userCodec.all());
// But Query Analysis catches the bugs in your tests
void check() {
dev.typr.foundations.QueryAnalysis analysis =
QueryAnalyzer.analyze(query, connection).getFirst();
if (!analysis.succeeded()) {
throw new AssertionError(analysis.report());
}
}
// Your query looks fine at compile time...
val query: Operation.Query[List[User]] =
sql"""SELECT id, name, created_at, email
FROM users
WHERE active = ${PgTypes.bool(true)}"""
.query(User.rowCodec.all())
// But Query Analysis catches the bugs in your tests
def check(): Unit =
val result: QueryAnalysis =
QueryAnalyzer.analyze(query, connection).head
if !result.succeeded() then
throw new AssertionError(result.report())
No other Java SQL library does this. jOOQ validates DSL at compile time but can't check hand-written SQL. Hibernate validates annotations at startup but not query correctness. foundations-jdbc validates your actual queries against your actual database.
Clear Error Messages
When things go wrong, you get helpful messages that tell you exactly what happened — not a cryptic stack trace.
Design Philosophy
A complete database library built on functional principles. Not the lowest common denominator — the full power of your database, with the safety and composability of functional programming.
Full roundtrip fidelity
Read a value from the database and write it back without loss or corruption. Every type is modeled exactly as the database defines it.
Queries are values
Fragments and row codecs are immutable values you compose, pass around, and run when you're ready. Just functions and values.
Composable
Row codecs compose. Join two codecs for a joined query. Left join gives you Optional on the right side. Fragments compose with and(), or(), whereAnd(). It's just functions.
No reflection, no magic
Zero runtime reflection, zero bytecode generation, zero annotation processing. Works with GraalVM native-image out of the box. You can read every line of what runs.
Not an ORM
No entity manager, no session, no lazy loading, no surprises. You write SQL, you get typed results. That's it.
Java, Kotlin, Scala
Core library in Java. Kotlin gets nullable types natively. Scala gets Option types and string interpolation. Same concepts, idiomatic in each language.
Query Analysis
Verify your SQL at test time. Parameter types, column types, nullability — all checked against the real database schema. Catch bugs before production.
Start with your schema
Take a PostgreSQL table. The RowCodec maps each column to a DbType that knows exactly how to read and write its value. No getObject() guessing, no wasNull() checking.
CREATE TABLE product (
id serial PRIMARY KEY,
name text NOT NULL,
price numeric(10,2) NOT NULL,
created_at timestamptz DEFAULT now()
);
- Kotlin
- Java
- Scala
data class Product(
val id: Int,
val name: String,
val price: BigDecimal,
val createdAt: Instant?
)
record Product(
int id,
String name,
BigDecimal price,
Optional<Instant> createdAt
) {}
case class Product(
id: Int,
name: String,
price: BigDecimal,
createdAt: Option[Instant]
)
- Kotlin
- Java
- Scala
val productCodec: RowCodecNamed<Product> =
RowCodec.namedBuilder<Product>()
.field("id", PgTypes.int4, Product::id)
.field("name", PgTypes.text, Product::name)
.field("price", PgTypes.numeric, Product::price)
.field("created_at", PgTypes.timestamptz.opt(), Product::createdAt)
.build(::Product)
// Compose codecs for joins
val joined: RowCodec<Pair<Product, Category?>> =
productCodec.leftJoined(categoryRowCodec)
static RowCodecNamed<Product> productCodec =
RowCodec.<Product>namedBuilder()
.field("id", PgTypes.int4, Product::id)
.field("name", PgTypes.text, Product::name)
.field("price", PgTypes.numeric, Product::price)
.field("created_at", PgTypes.timestamptz.opt(), Product::createdAt)
.build(Product::new);
// Compose codecs for joins
static RowCodec<Tuple.Tuple2<Product, Optional<Category>>> joined =
productCodec.leftJoined(categoryRowCodec);
val productCodec: RowCodecNamed[Product] = RowCodec.namedBuilder[Product]()
.field("id", PgTypes.int4)(_.id)
.field("name", PgTypes.text)(_.name)
.field("price", PgTypes.numeric)(_.price)
.field("created_at", PgTypes.timestamptz.opt)(_.createdAt)
.build(Product.apply)
// Compose codecs for joins
val joined: RowCodec[(Product, Option[Category])] =
productCodec.leftJoined(categoryRowCodec)
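For contrast, this is the raw-JDBC ceremony the field/DbType mapping replaces. The stub ResultSet below is a hand-rolled test double of ours (no real database, just a reflective proxy), only there to make the trap runnable: getInt() returns 0 for SQL NULL, so forgetting the wasNull() check silently turns NULL into 0.

```java
import java.lang.reflect.Proxy;
import java.sql.ResultSet;

public class WasNullCeremony {
    // Raw JDBC: getInt() returns 0 for SQL NULL, so every nullable
    // numeric column needs a wasNull() check right after the read.
    static Integer readQty(ResultSet rs) throws Exception {
        int qty = rs.getInt("qty");
        return rs.wasNull() ? null : qty; // forget this and NULL becomes 0
    }

    public static void main(String[] args) throws Exception {
        // Stub ResultSet simulating a row where qty IS NULL
        ResultSet rs = (ResultSet) Proxy.newProxyInstance(
                ResultSet.class.getClassLoader(),
                new Class<?>[] { ResultSet.class },
                (proxy, method, margs) -> switch (method.getName()) {
                    case "getInt" -> 0;      // JDBC's stand-in for NULL
                    case "wasNull" -> true;
                    default -> throw new UnsupportedOperationException(method.getName());
                });

        System.out.println(readQty(rs)); // prints null, but only via the extra check
    }
}
```

A codec that encodes nullability in the column's type makes the check impossible to forget, because the wrong reading is a compile error rather than a wrong value.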
Type building blocks
Composite types, wrapper types, and arrays — each database has its own type system, and each one is modeled faithfully.
- Composite Types
- Wrapper Types
- Arrays
The dimensions composite type becomes a record with typed fields. PgStruct handles the wire format.
CREATE TYPE dimensions AS (
width double precision,
height double precision,
depth double precision,
unit varchar(10)
);
- Kotlin
- Java
- Scala
data class Dim(
val width: Double, val height: Double,
val depth: Double, val unit: String
)
// PgStruct handles PostgreSQL's composite wire format
val pgStruct: PgStruct<Dim> =
PgStruct.builder<Dim>("dimensions")
.field("width", PgTypes.float8, Dim::width)
.field("height", PgTypes.float8, Dim::height)
.field("depth", PgTypes.float8, Dim::depth)
.field("unit", PgTypes.text, Dim::unit)
.build(::Dim)
val pgType: PgType<Dim> = pgStruct.asType()
record Dim(
Double width, Double height, Double depth, String unit
) {}
// PgStruct handles PostgreSQL's composite wire format
static PgStruct<Dim> pgStruct =
PgStruct.<Dim>builder("dimensions")
.field("width", PgTypes.float8, Dim::width)
.field("height", PgTypes.float8, Dim::height)
.field("depth", PgTypes.float8, Dim::depth)
.field("unit", PgTypes.text, Dim::unit)
.build(Dim::new);
static PgType<Dim> pgType = pgStruct.asType();
case class Dim(
width: Double, height: Double,
depth: Double, unit: String
)
// PgStruct handles PostgreSQL's composite wire format
val pgStruct: PgStruct[Dim] =
PgStruct.builder[Dim]("dimensions")
.field("width", PgTypes.float8, _.width)
.field("height", PgTypes.float8, _.height)
.field("depth", PgTypes.float8, _.depth)
.field("unit", PgTypes.text, _.unit)
.build(Dim.apply)
val pgType: PgType[Dim] = pgStruct.asType()
Call transform (two-way mapping) on a base type — you get a full codec that works in row codecs, arrays, and JSON.
CREATE TABLE products (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(255) NOT NULL
);
- Kotlin
- Java
- Scala
data class ProductId(val value: Int) {
companion object {
// MariaDB int -> wraps to your domain type
val mariaType: MariaType<ProductId> =
MariaTypes.int_.transform(::ProductId, ProductId::value)
}
}
record ProductId(Integer value) {
// MariaDB int -> wraps to your domain type
static MariaType<ProductId> mariaType =
MariaTypes.int_.transform(ProductId::new, ProductId::value);
}
case class ProductId(value: Int)
object ProductId:
// MariaDB int -> wraps to your domain type
val mariaType: MariaType[ProductId] =
MariaTypes.int_.transform(ProductId.apply, _.value)
Pass arrays directly — no createArrayOf, no type name strings, no connection reference.
CREATE TABLE posts (
id INTEGER,
tags VARCHAR[],
published BOOLEAN
);
- Kotlin
- Java
- Scala
// DuckDB arrays are first-class typed values
fun getTagSets(): List<Array<String>> =
sql { "SELECT tags FROM posts WHERE published = true" }
.query(RowCodec.of(DuckDbTypes.varcharArray).all())
.transact(tx)
// DuckDB arrays are first-class typed values
List<String[]> getTagSets() {
return Fragment.of("SELECT tags FROM posts WHERE published = true")
.query(RowCodec.of(DuckDbTypes.varcharArray).all())
.transact(tx);
}
// DuckDB arrays are first-class typed values
def getTagSets(): List[Array[String]] =
sql"SELECT tags FROM posts WHERE published = true"
.query(RowCodec.of(DuckDbTypes.varcharArray).all())
.transact(tx)
Queries are values you compose
Build fragments, combine them, pass them to functions, return them from functions. Parameters are always bound and typed. Works across databases — here with SQL Server.
- Kotlin
- Java
- Scala
// Build small reusable filters - SQL Server example
fun byName(name: String): Fragment =
sql { "name LIKE ${SqlServerTypes.nvarchar(name)}" }
fun cheaperThan(max: BigDecimal): Fragment =
sql { "price < ${SqlServerTypes.decimal(max)}" }
// Compose dynamically - only include the filters that are present
val filters: List<Fragment> =
listOfNotNull(
byName("%widget%"),
maxPrice?.let { cheaperThan(it) }
)
val orders: List<OrderRow> =
sql { "SELECT * FROM orders ${Fragment.whereAnd(filters)}" }
.query(orderRowCodec.all())
.run(conn)
// Build small reusable filters - SQL Server example
Fragment byName(String name) {
return Fragment.of("name LIKE ")
.value(SqlServerTypes.nvarchar, name);
}
Fragment cheaperThan(BigDecimal max) {
return Fragment.of("price < ")
.value(SqlServerTypes.decimal, max);
}
// Compose dynamically - only include the filters that are present
List<Fragment> filters =
Stream.of(
Optional.of(byName("%widget%")),
maxPrice.map(this::cheaperThan)
)
.flatMap(Optional::stream)
.toList();
List<OrderRow> orders =
Fragment.of("SELECT * FROM orders ")
.append(Fragment.whereAnd(filters))
.query(orderRowCodec.all())
.run(conn);
// Build small reusable filters - SQL Server example
val nvarchar = SqlServerTypes.nvarchar
val decimal = SqlServerTypes.decimal
def byName(name: String): Fragment =
sql"name LIKE ${nvarchar(name)}"
def cheaperThan(max: BigDecimal): Fragment =
sql"price < ${decimal(max)}"
// Compose dynamically
val filters: List[Fragment] =
List(
Some(byName("%widget%")),
maxPrice.map(cheaperThan)
).flatten
val orders: List[OrderRow] =
sql"SELECT * FROM orders ${Fragment.whereAnd(filters)}"
.query(orderRowCodec.all())
.run(conn)
Transactions you can see
Use Spring's @Transactional if that's your style, or manage transactions explicitly with Transactor. Either way, you get typed builders for every database and full control over the lifecycle — here with Oracle.
- Explicit
- Spring Integration
- Kotlin
- Java
- Scala
// Oracle - typed config, no JDBC URL to remember
val tx =
SimpleDataSource.create(
OracleConfig.builder("localhost", 1521, "xe", "app", "secret")
.serviceName("XEPDB1")
.build()
).transactor()
// Everything inside runs in one transaction
fun getGreeting(): String =
sql { "SELECT 'Hello from Oracle' FROM dual" }
.query(RowCodec.of(OracleTypes.varchar2).exactlyOne())
.transact(tx)
// Oracle - typed config, no JDBC URL to remember
Transactor tx =
SimpleDataSource.create(
OracleConfig.builder(
"localhost", 1521, "xe", "app", "secret")
.serviceName("XEPDB1")
.build()
).transactor();
// Everything inside runs in one transaction
String getGreeting() {
return Fragment
.of("SELECT 'Hello from Oracle' FROM dual")
.query(RowCodec.of(OracleTypes.varchar2)
.exactlyOne())
.transact(tx);
}
// Oracle - typed config, no JDBC URL to remember
val tx =
SimpleDataSource.create(
OracleConfig.builder(
"localhost", 1521, "xe", "app", "secret"
).serviceName("XEPDB1").build()
).transactor()
// Everything inside runs in one transaction
def getGreeting(): String =
sql"SELECT 'Hello from Oracle' FROM dual"
.query(RowCodec.of(OracleTypes.varchar2).exactlyOne())
.transact(tx)
- Kotlin
- Java
- Scala
@Service
class OrderService(private val tx: Transactor) {
@Transactional
fun getGreeting(): String =
sql { "SELECT 'Hello from Oracle' FROM dual" }
.query(RowCodec.of(OracleTypes.varchar2).exactlyOne())
.transact(tx)
}
@Service
class OrderService {
private final Transactor tx;
OrderService(Transactor tx) {
this.tx = tx;
}
@Transactional
String getGreeting() {
return Fragment
.of("SELECT 'Hello from Oracle' FROM dual")
.query(RowCodec.of(OracleTypes.varchar2)
.exactlyOne())
.transact(tx);
}
}
@Service
class OrderService(tx: Transactor):
@Transactional
def getGreeting(): String =
sql"SELECT 'Hello from Oracle' FROM dual"
.query(RowCodec.of(OracleTypes.varchar2).exactlyOne())
.transact(tx)
Built-in JSON codecs
All databases can transfer data as JSON — and now you can use it uniformly. Your RowCodec doubles as a JSON codec with zero extra code. Aggregate child rows with json_agg(), JSON_ARRAYAGG, or FOR JSON and parse them with the same types.
- Kotlin
- Java
- Scala
// RowCodec → JSON column type, zero extra code
val linesType: DuckDbType<List<OrderLine>> =
DuckDbTypes.jsonArrayEncodedList(lineCodec)
fun getOrderLines(customerId: Int): List<OrderLine> =
sql { """
SELECT json_group_array(json_array(product, qty, price))
FROM order_lines
WHERE customer_id = ${DuckDbTypes.integer(customerId)}
""" }
.query(RowCodec.of(linesType).exactlyOne())
.transact(tx)
// RowCodec → JSON column type, zero extra code
DuckDbType<List<OrderLine>> linesType =
DuckDbTypes.jsonArrayEncodedList(lineCodec);
List<OrderLine> getOrderLines(int customerId) {
return Fragment.of("""
SELECT json_group_array(\
json_array(product, qty, price))
FROM order_lines
WHERE customer_id = """)
.value(DuckDbTypes.integer, customerId)
.query(RowCodec.of(linesType).exactlyOne())
.transact(tx);
}
// RowCodec → JSON column type, zero extra code
val linesType: DuckDbType[List[OrderLine]] =
DuckDbTypes.jsonArrayEncodedList(lineCodec)
def getOrderLines(customerId: Int): List[OrderLine] =
sql"""SELECT json_group_array(
json_array(product, qty, price))
FROM order_lines
WHERE customer_id =
${DuckDbTypes.integer(customerId)}"""
.query(RowCodec.of(linesType).exactlyOne())
.transact(tx)
Type-safe stored procedures
Define a procedure or function once — the builder tracks IN and OUT types statically. Functions use SELECT so every DbType reads correctly through the normal codec path. OUT params use a CallableStatement adapter that reuses the same DbRead logic.
- Kotlin
- Java
- Scala
// Define once, call many times — input and output types are baked in
val getUser: DbProcedure.Def1_2<Int, String, String> =
DbProcedure.define("get_user_by_id")
.input(PgTypes.int4)
.out(PgTypes.text)
.out(PgTypes.text)
.build()
// call() returns a ProcedureOp — use it like any other operation
fun findUser(userId: Int): Pair<String, String> =
getUser.call(userId).transact(tx)
// Define once, call many times — input and output types are baked in
static final DbProcedure.Def1_2<Integer, String, String> getUser =
DbProcedure.define("get_user_by_id")
.input(PgTypes.int4)
.out(PgTypes.text)
.out(PgTypes.text)
.build();
// call() returns an Operation — compose it like any other query
Tuple.Tuple2<String, String> findUser(int userId) {
return getUser.call(userId).transact(tx);
}
// Define once, call many times — input and output types are baked in
val getUser: DbProcedure.Def1_2[Int, String, String] =
DbProcedure.define("get_user_by_id")
.input(PgTypes.int4)
.out(PgTypes.text)
.out(PgTypes.text)
.build()
// call() returns a ProcedureOp — use it like any other operation
def findUser(userId: Int): (String, String) =
getUser.call(userId).transact(tx)
Six databases, full type fidelity
The same approach works across all supported databases. Full roundtrip fidelity for every type each one supports. More databases coming soon.
PostgreSQL
DuckDB
Oracle
MariaDB
SQL Server
DB2
How it compares
| | Foundations | Hibernate | JDBI | JdbcTemplate | Exposed |
|---|---|---|---|---|---|
| Approach | SQL + typed codecs | ORM with entity mapping | SQL + annotations | SQL + RowMapper | Kotlin DSL |
| Languages | Java, Kotlin, Scala | Java, Kotlin | Java, Kotlin | Java, Kotlin | Kotlin only |
| Database portability | Database-specific* | HQL abstracts over DBs | Raw SQL (portable enough) | Raw SQL (portable enough) | DSL is mostly portable |
| Type model | Every database type | Java types only | Basic + custom | Basic Java types | Kotlin types + custom |
| Composites, arrays, ranges | First-class | Partial¹ | Manual mapping | Raw JDBC only | Partial² |
| Reflection⁶ | None | Heavy | Moderate | None (manual mapper) | DAO layer |
| Query type checking | At test time | Opt-in³ | No | No | DSL only (compile) |
| Type-safe nullable columns | Optional<T> / T? / Option[T] | @Column(nullable) | Manual null checks | Manual null checks | T? in Kotlin |
| Code generation | Optional (Typr) | Reverse engineering⁴ | Not supported | Not supported | Gradle plugin⁵ |
* Type references are explicit and searchable — find every PgTypes. reference and replace it with MariaTypes., then run Query Analysis to verify every query against the new database at test time. More manual than hoping an abstraction holds, but nothing slips through unchecked.
¹ Hibernate 6.2+ has @Struct for composites and built-in basic array mapping. Ranges still need third-party libraries (Hypersistence Utils).
² Exposed has built-in array support. Ranges and composite types require custom ColumnType implementations.
³ @CheckHQL (6.3+) validates HQL at compile time against the entity metamodel, not the database schema. Not enabled by default.
⁴ Hibernate Tools generates entity classes from database schemas.
⁵ Official JetBrains plugin generates Exposed table definitions from database schemas.
⁶ Reflection affects GraalVM native-image compatibility, startup time, and debuggability. Libraries using runtime proxies or bytecode generation require additional configuration for native compilation.
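The asterisked portability note describes the migration as a mechanical find-and-replace followed by Query Analysis. The search-and-swap half is ordinary grep and sed; the file below is a throwaway stand-in (the path, file name, and its MariaTypes.bool contents are invented for the demo, not taken from the library):

```shell
# Throwaway file standing in for a real source tree
mkdir -p /tmp/typr-demo
printf 'val active = PgTypes.bool(true)\n' > /tmp/typr-demo/Filters.kt

# 1. Every database-specific type reference is greppable
grep -rn 'PgTypes\.' /tmp/typr-demo

# 2. Swap the prefix, review the diff, then re-run Query Analysis
sed -i 's/PgTypes\./MariaTypes./g' /tmp/typr-demo/Filters.kt
cat /tmp/typr-demo/Filters.kt
```

The sed step is only the mechanical half; the safety comes from the Query Analysis run afterwards, which checks every rewritten query against the new database's actual schema.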
Ready to try it?
Foundations works great on its own. For larger codebases, Typr can generate all the code you see above from your database schema.