Row Codecs
Reading rows from JDBC means calling rs.getInt(1), rs.getString(2), rs.getTimestamp(3) -- column by column, in the right order, with the right types. Get any of it wrong and you get a ClassCastException at runtime. Add a column to your query and you silently shift every index after it.
A RowCodec<T> replaces all of that with a single declaration: list the database types and a constructor, and the codec does the rest. The same codec drives everything the library does with your type:
- Reading — decodes rows from a ResultSet (queries) or a CallableStatement (stored procedures)
- Writing — encodes values into a PreparedStatement for inserts, updates, and batch operations
- Streaming — feeds rows into the PostgreSQL COPY protocol for high-throughput inserts
- JSON — round-trips your type to and from JSON objects using column names as keys
- Analysis — Query Analysis inspects the codec's types to verify them against the database schema, including IN/OUT parameters of stored procedures and functions
You define the mapping once, and it propagates everywhere.
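The core idea can be sketched with a small self-contained toy model (plain Java, not the library's implementation): a row codec is essentially a list of typed per-column readers plus a constructor, applied by index. The `codec2` helper below is hypothetical and exists only to illustrate the shape.

```java
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;

// Toy model of the idea, not the library's code: declare the column types and a
// constructor once, and decoding follows mechanically by column index.
public class ToyRowCodec {
    record Product(int id, String name) {}

    // A two-field "codec": casts column 0 to A and column 1 to B, then constructs T.
    static <A, B, T> Function<List<Object>, T> codec2(
            Class<A> typeA, Class<B> typeB, BiFunction<A, B, T> ctor) {
        return row -> ctor.apply(typeA.cast(row.get(0)), typeB.cast(row.get(1)));
    }

    public static void main(String[] args) {
        var productCodec = codec2(Integer.class, String.class, Product::new);
        Product p = productCodec.apply(List.of(7, "widget"));
        System.out.println(p);
    }
}
```

The real library carries much richer type metadata per column (see Database Types), but the single-declaration shape is the same.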
Foundations JDBC has two kinds of row codecs:
- Named codecs (RowCodecNamed) — track column names; recommended for most use cases. Created via RowCodec.namedBuilder().
- Positional codecs (RowCodecUnnamed) — track only column positions, for quick single- and multi-column reads. Created via RowCodec.of(type) or RowCodec.builder().
Named row codecs
A named row codec tracks both types and column names. This is the recommended default:
- Kotlin
- Java
- Scala
data class Product(val id: Int, val name: String, val price: BigDecimal, val createdAt: Instant)
val productCodec: RowCodecNamed<Product> =
RowCodec.namedBuilder<Product>()
.field("id", PgTypes.int4, Product::id)
.field("name", PgTypes.text, Product::name)
.field("price", PgTypes.numeric, Product::price)
.field("created_at", PgTypes.timestamptz, Product::createdAt)
.build(::Product)
// Column list for SQL — no hand-written strings to keep in sync
val allProducts =
sql { "SELECT ${productCodec.columnList} FROM product" }
record Product(Integer id, String name, BigDecimal price, Instant createdAt) {}
RowCodecNamed<Product> productCodec =
RowCodec.<Product>namedBuilder()
.field("id", PgTypes.int4, Product::id)
.field("name", PgTypes.text, Product::name)
.field("price", PgTypes.numeric, Product::price)
.field("created_at", PgTypes.timestamptz, Product::createdAt)
.build(Product::new);
// Column list for SQL — no hand-written strings to keep in sync
Fragment allProducts =
Fragment.of("SELECT ").append(productCodec.columnList()).append(" FROM product");
case class Product(
id: Int,
name: String,
price: BigDecimal,
createdAt: Instant
)
val productCodec: RowCodecNamed[Product] =
RowCodec
.namedBuilder[Product]()
.field("id", PgTypes.int4)(_.id)
.field("name", PgTypes.text)(_.name)
.field("price", PgTypes.numeric)(_.price)
.field("created_at", PgTypes.timestamptz)(_.createdAt)
.build(Product.apply)
// Column list — no hand-written strings to keep in sync
val allProducts =
sql"SELECT ${productCodec.columnList} FROM product"
Row codecs always read by column index, never by column name. The column names in a named codec exist for SQL generation (INSERT statements, column lists), JSON encoding (object keys), and composite types (field names). They are never passed to ResultSet.getString("name"). When the codec reads a row, it calls rs.getXxx(1), rs.getXxx(2), etc. in declaration order.
This is a deliberate design choice that will not change. Index-based reading is the only option because it composes safely: when you join two tables, both may have columns named id or name. Column-name-based reading would silently return the wrong value. Index-based reading makes composition safe -- each codec reads its own slice of columns in sequence, and name clashes are irrelevant.
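A toy illustration (plain Java, not the library's code) of why index-based reading composes: a joined codec reads the left side's columns first, then hands the right side an offset equal to the left side's width. Both record types here deliberately share the column names id and name, and it makes no difference.

```java
import java.util.List;

// Toy model: two "codecs" reading adjacent slices of one joined row by index.
public class ToyJoin {
    record Product(int id, String name) {}
    record Category(int id, String name) {}  // same column names as Product

    static Product readProduct(List<Object> row, int offset) {
        return new Product((Integer) row.get(offset), (String) row.get(offset + 1));
    }

    static Category readCategory(List<Object> row, int offset) {
        return new Category((Integer) row.get(offset), (String) row.get(offset + 1));
    }

    public static void main(String[] args) {
        // SELECT p.id, p.name, c.id, c.name FROM product p JOIN category c ...
        List<Object> row = List.of(1, "widget", 9, "tools");
        Product p = readProduct(row, 0);
        Category c = readCategory(row, 2);  // offset = product's column count
        System.out.println(p + " / " + c);
    }
}
```

Name-based reading (rs.getInt("id")) has no equivalent of the offset: both sides would ask the driver for the same "id" column.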
Having names lets you:
- columnList() — emit column names as a Fragment for SELECT clauses, so queries stay in sync with the codec
- columnNames() — get column names as a list
- Fragment.insertOne(table, codec, row) / insertMany(table, codec, rows) — execute INSERT directly from the codec's column metadata
- Fragment.insertOneReturning(table, codec, row) — same, with a RETURNING clause that parses the inserted row back
- fragment.row(codec, value) — emit an object's fields as comma-separated parameters for custom INSERT patterns
- DbJsonRow.jsonObject(codec) — build a JSON object codec with column names as keys
Single-column codec
For single-column queries, use the simpler of() factory:
- Kotlin
- Java
- Scala
val idCodec: RowCodec<Int> = RowCodec.of(PgTypes.int4)
RowCodec<Integer> idCodec = RowCodec.of(PgTypes.int4);
val idCodec: RowCodec[Int] = RowCodec.of(PgTypes.int4)
For a single named column (preserving the column name for joins), use RowCodec.ofNamed("name", type).
Nullable columns
Use .opt() to wrap a type for nullable columns:
- Kotlin
- Java
- Scala
data class Person(val id: Int, val name: String, val createdAt: Instant?)
val personCodec: RowCodec<Person> =
RowCodec.builder<Person>()
.field(PgTypes.int4, Person::id)
.field(PgTypes.text, Person::name)
.field(PgTypes.timestamptz.opt(), Person::createdAt)
.build(::Person)
record Person(Integer id, String name, Optional<Instant> createdAt) {}
RowCodec<Person> personCodec =
RowCodec.<Person>builder()
.field(PgTypes.int4, Person::id)
.field(PgTypes.text, Person::name)
.field(PgTypes.timestamptz.opt(), Person::createdAt)
.build(Person::new);
case class Person(id: Int, name: String, createdAt: Option[Instant])
val personCodec: RowCodec[Person] = RowCodec
.builder[Person]()
.field(PgTypes.int4)(_.id)
.field(PgTypes.text)(_.name)
.field(PgTypes.timestamptz.opt)(_.createdAt)
.build(Person.apply)
Composing codecs for joins
Row codecs compose for joins. Given a productCodec and a categoryCodec, combine them with .join() or .leftJoin():
- Kotlin
- Java
- Scala
// Inner join — both sides always present
val innerJoined: RowCodec<Pair<ProductRow, CategoryRow>> =
productCodec.join(categoryCodec)
// Left join — right side is nullable
val leftJoined: RowCodec<Pair<ProductRow, CategoryRow?>> =
productCodec.leftJoin(categoryCodec)
// Inner join — both sides always present
RowCodec<Tuple.Tuple2<ProductRow, CategoryRow>> innerJoined = productCodec.join(categoryCodec);
// Left join — right side is Optional (nullable in Kotlin, Option in Scala)
RowCodec<Tuple.Tuple2<ProductRow, Optional<CategoryRow>>> leftJoined =
productCodec.leftJoin(categoryCodec);
// Inner join — both sides always present
val innerJoined: RowCodec[(ProductRow, CategoryRow)] =
productCodec.join(categoryCodec)
// Left join — right side is Option
val leftJoined: RowCodec[(ProductRow, Option[CategoryRow])] =
productCodec.leftJoin(categoryCodec)
The result type is Tuple2<A, B> in Java (with ._1() and ._2() accessors), Pair<A, B> in Kotlin, and a tuple (A, B) in Scala. Left join wraps the right side in Optional (or nullable in Kotlin, Option in Scala).
Named codecs have .join() and .leftJoin() methods that preserve column names, so the combined codec still works with columnList(), Fragment.insertOne(), and JSON encoding.
The same Tuple types appear whenever the library needs to return multiple values without a dedicated record type — RowCodec.of(type1, type2, ...) for multi-column ad-hoc queries, .combine() for composed operations, and .join() for joins all return TupleN. Accessors are 1-based: ._1(), ._2(), ._3(), etc.
Disambiguating duplicate column names
If the two sides of a named join share any column name (the common id, name, created_at case), the joined codec's columnList() detects the collision and throws:
RowCodecNamed.columnList() has duplicate column names [id, name]
(typically from a join of codecs sharing a column name). Call .aliased("x")
on each side before composing, ...
Call .aliased("alias") on each side before composing. Aliasing prefixes every column name with alias., so the joined codec has unambiguous entries (e.id, e.name, ..., d.id, d.name) and columnList() is usable directly in the SELECT:
var joined = empCodec.aliased("e").leftJoin(deptCodec.aliased("d"));
Fragment.of("SELECT ")
.append(joined.columnList())
.append(" FROM emp e LEFT JOIN dept d ON e.department = d.name")
.query(joined.all());
val joined = empCodec.aliased("e").leftJoin(deptCodec.aliased("d"))
sql { "SELECT ${joined.columnList} FROM emp e LEFT JOIN dept d ON e.department = d.name" }
.query(joined.all())
An aliased codec is SELECT-friendly but not INSERT-friendly — use the unaliased codec for Fragment.insertOne(table, codec, row).
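The aliasing semantics described above can be sketched in a few lines of plain Java (assumed behavior, not the library's code): aliasing is just a prefix on every column name, which is what makes two joined codecs' names disjoint.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of what .aliased("x") does to a codec's column names.
public class ToyAlias {
    static List<String> aliased(String alias, List<String> columns) {
        return columns.stream().map(c -> alias + "." + c).toList();
    }

    public static void main(String[] args) {
        List<String> emp = List.of("id", "name", "department");
        List<String> dept = List.of("id", "name");

        // Unaliased, "id" and "name" collide; aliased, every name is unique.
        List<String> joined = new ArrayList<>(aliased("e", emp));
        joined.addAll(aliased("d", dept));
        System.out.println(String.join(", ", joined));
        // e.id, e.name, e.department, d.id, d.name
    }
}
```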
Result modes
A codec defines the shape of a row. Result modes define how many rows the query returns:
| Mode | Returns | Behavior |
|---|---|---|
| .all() | List<T> | Collect all rows |
| .exactlyOne() | T | Expect exactly one row; throw if 0 or 2+ |
| .maxOne() | Optional<T> | Return 0 or 1 rows |
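The three contracts can be stated precisely with a toy sketch over an in-memory list (assumed semantics, standing in for a ResultSet):

```java
import java.util.List;
import java.util.Optional;

// Toy sketch of the three result-mode contracts.
public class ToyResultModes {
    static <T> List<T> all(List<T> rows) {
        return rows;  // no cardinality check
    }

    static <T> T exactlyOne(List<T> rows) {
        if (rows.size() != 1)
            throw new IllegalStateException("expected exactly one row, got " + rows.size());
        return rows.get(0);
    }

    static <T> Optional<T> maxOne(List<T> rows) {
        if (rows.size() > 1)
            throw new IllegalStateException("expected at most one row, got " + rows.size());
        return rows.isEmpty() ? Optional.empty() : Optional.of(rows.get(0));
    }

    public static void main(String[] args) {
        System.out.println(exactlyOne(List.of("a")));
        System.out.println(maxOne(List.of()));
    }
}
```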
Data-driven inserts
Fragment.insertOneReturning() executes a complete INSERT from a named codec. The column list, bound values, and RETURNING clause are all driven by the codec. Pass column names to except to skip columns with database defaults. Use insertMany for batch inserts.
- Kotlin
- Java
- Scala
fun insert(product: Product): Product =
Fragment.insertOneReturning("product", productCodec, product).run(conn)
// Skip columns with database defaults — pass column names to except
fun insertWithDefault(product: Product): Product =
Fragment.insertOneReturning("product", productCodec, product, "id").run(conn)
Product insert(Product product) {
return Fragment.insertOneReturning("product", productCodec, product).run(conn);
}
// Skip columns with database defaults — pass column names to except
Product insertWithDefault(Product product) {
return Fragment.insertOneReturning("product", productCodec, product, "id").run(conn);
}
def insert(product: Product)(using Connection): Product =
Fragment.insertOneReturning("product", productCodec, product).run
// Skip columns with database defaults — pass column names to except
def insertWithDefault(product: Product)(using Connection): Product =
Fragment.insertOneReturning("product", productCodec, product, "id").run
Generated keys inserts
For databases that don't support RETURNING (DB2, Oracle, SQL Server, MariaDB), use Fragment.insertOneGenerated(). It works like insertOneReturning but uses JDBC's getGeneratedKeys() API to read back the generated columns:
- Kotlin
- Java
- Scala
// For databases without RETURNING (DB2, Oracle, SQL Server, MariaDB):
fun insertGeneratedKey(product: Product): Int =
Fragment.insertOneGenerated(
"product", productCodec, product,
arrayOf("id"), RowCodec.of(PgTypes.int4).exactlyOne(), "id"
).run(conn)
// For databases without RETURNING (DB2, Oracle, SQL Server, MariaDB):
int insertGeneratedKey(Product product) {
return Fragment.insertOneGenerated(
"product",
productCodec,
product,
new String[] {"id"},
RowCodec.of(PgTypes.int4).exactlyOne(),
"id")
.run(conn);
}
// For databases without RETURNING (DB2, Oracle, SQL Server, MariaDB):
def insertGeneratedKey(product: Product)(using Connection): Int =
Fragment
.insertOneGenerated(
"product", productCodec, product,
Array("id"), RowCodec.of(PgTypes.int4).exactlyOne(), "id"
)
.run
Pass the generated column names as the third argument, and the same column names to except so they're excluded from the INSERT's column list and VALUES clause.
Positional codecs
Use positional codecs when column names aren't needed, such as single-use queries. Build a positional codec by listing .field() calls (one per column, in SELECT order) and finishing with .build(constructor):
- Kotlin
- Java
- Scala
data class Person(val id: Int, val name: String, val createdAt: Instant)
val personCodec: RowCodec<Person> =
RowCodec.builder<Person>()
.field(PgTypes.int4, Person::id)
.field(PgTypes.text, Person::name)
.field(PgTypes.timestamptz, Person::createdAt)
.build(::Person)
record Person(Integer id, String name, Instant createdAt) {}
RowCodec<Person> personCodec =
RowCodec.<Person>builder()
.field(PgTypes.int4, Person::id)
.field(PgTypes.text, Person::name)
.field(PgTypes.timestamptz, Person::createdAt)
.build(Person::new);
case class Person(id: Int, name: String, createdAt: Instant)
val personCodec: RowCodec[Person] = RowCodec
.builder[Person]()
.field(PgTypes.int4)(_.id)
.field(PgTypes.text)(_.name)
.field(PgTypes.timestamptz)(_.createdAt)
.build(Person.apply)
Each .field() takes a DbType that models the exact database column type. DbType<A> reads a value of type A from a ResultSet and writes it to a PreparedStatement. Each supported database has its own set (PgTypes, DuckDbTypes, MariaDbTypes, etc.) with mappings for every type. See Database Types for the complete catalog.
The builder is fully type-safe: the constructor receives exactly the types you declared, with no casts. Columns are read by index — the order of .field() calls must match the column order in your SELECT.
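What goes wrong when the order does not match can be shown with a toy decoder (plain Java, not the library's code): index-based reads cast each column to its declared type, so a SELECT whose column order disagrees with the .field() order surfaces immediately as a ClassCastException rather than silently mis-mapped data.

```java
import java.util.List;

// Toy demonstration: a decoder declared for (int, text) applied to a row
// whose columns arrive in the wrong order.
public class ToyOrder {
    record Person(int id, String name) {}

    static Person decode(List<Object> row) {
        return new Person((Integer) row.get(0), (String) row.get(1));
    }

    public static void main(String[] args) {
        System.out.println(decode(List.of(1, "Ada")));  // order matches: fine
        try {
            decode(List.of("Ada", 1));  // SELECT name, id — columns swapped
        } catch (ClassCastException e) {
            System.out.println("caught: " + e);
        }
    }
}
```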