Can we have the Standard Library for Macros?

Game of the Year Edition


Mateusz Kubuszok

About me

Why macros?

The background

Why macros

The Chimney Story

Chimney timeline

What we discovered

  • we can share macro code between Scala 2 and Scala 3

  • macros are not slow - abusing the type checker via implicit resolution is

  • macros can aggregate errors and know their context - better error messages

  • macros can log their internal logic and the resulting expression

  • macros have more opportunities to micro-optimize the code

  • while semi-automatic derivation still makes sense, it is not required everywhere at all times to keep a project maintainable - that’s an issue of the Shapeless/Circe approach, not of type class derivation itself

  • eye-opening: we can have much better derivation tools

  • it is not a matter of Chimney being special - this is doable by every library doing type class derivation
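The semi-auto point can be illustrated with a toy type class (all names here are illustrative, not any particular library's API): the instance is materialized explicitly, once, instead of being re-derived implicitly at every use site.

```scala
// Toy type class illustrating the semi-automatic derivation style.
trait Show[A] { def show(a: A): String }
object Show {
  def apply[A](implicit s: Show[A]): Show[A] = s
  implicit val intShow: Show[Int] = (a: Int) => a.toString
}

final case class User(id: Int)
object User {
  // "semi-auto": one explicit, cached instance per type,
  // instead of implicit re-derivation at every call site
  implicit val userShow: Show[User] =
    (u: User) => "User(" + Show[Int].show(u.id) + ")"
}
```

With this in scope, `Show[User].show(User(1))` yields `"User(1)"`; the point is that only the types you opt into get instances, not that every call site re-runs derivation.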

Glorious Shapeless Derivation Master Race vs Dirty Macro Peasant

"Words are cheap

Show me the code"

"The code" (& numbers)

Compilation time
Runtime performance
implicit not found

 // vs

Failed to derive showing for value:
  example.ShowSanely.User:
No built-in support nor implicit
for type scala.Nothing
Debugging experience

The downside

  • pain to implement for library authors

  • unless we make it easier, nobody will actually use this

  • Chimney’s tools, though a huge improvement, have rough edges

  • not suitable for generic type class derivation as-is

The horror

Why macros suck

Oh noez

Learning resources

  • virtually no sources

  • official Scala documentation is basically…​

How to draw an owl

Some examples

Quoting and Splicing

Scala 2 (Quasiquotes)

Scala 3 (Quotes)

val expr1 =
  c.Expr[Int](q"21")
val expr2 =
  c.Expr[Int](q"37")

val expr3 = c.Expr[Int](
  q"""
  ${ expr1 } + ${ expr2 }
  """
)
val expr1 = Expr(21)
val expr2 = Expr(37)

val expr3 = '{
  ${ expr1 } + ${ expr2 }
}

Matching Types

Scala 2 (Quasiquotes)

Scala 3 (Quotes)

def whenOptionOf[A: c.WeakTypeTag] = ...

weakTypeOf[A]
 .dealias
 .widen
 .baseType(
  c.mirror.staticClass("scala.Option")
 ) match {
  case TypeRef(_, _, List(t)) =>
    whenOptionOf(
      c.WeakTypeTag(t.dealias.widen)
    )
  case _ => ...
}
def whenOptionOf[A: Type] = ...

Type.of[A] match {
  case '[Option[t]] =>
    whenOptionOf[t]
  case _ => ...
}

Instantiating an Arbitrary Type

Scala 2 (Quasiquotes)

Scala 3 (Quotes)

val args: List[List[c.Tree]] =
  ...

c.Expr[A](
  q"""
  new ${weakTypeOf[A]}(
    ...${args}
  )
  """
)
val ctor = TypeRepr.of[A]
  .typeSymbol
  .primaryConstructor

val args: List[List[Term]] =
  ...

New(TypeTree.of[A])
  .select(ctor)
  .appliedToArgss(args)

Constructing a Pattern Match

Scala 2 (Quasiquotes)

Scala 3 (Quotes)

def handleCase[
  A: c.WeakTypeTag
](name: c.Expr[A]) = ...
/* for each case: */
val name = c.internal
  .reificationSupport
  .freshTermName("a")
cq"""
$name: ${weakTypeOf[A]} =>
  ${handleCase(c.Expr[A](q"$name"))}
"""
/* then create the match: */
c.Expr[Result](
  q"""
  $expr match { ...${cases} }
  """
)
def handleCase[
  A: Type
](name: Expr[A]) = ...
/* for each case: */
val name = Symbol.newBind(
  Symbol.spliceOwner,
  Symbol.freshName("a"),
  Flags.Empty,
  TypeRepr.of[A]
)
CaseDef(
  Bind(
    name,
    Typed(Wildcard(),TypeTree.of[A])),
  None,
  handleCase(Ref(name).asExprOf[A]))
Match(expr.asTerm, cases)
  .asExprOf[Result]

Sealed Trait’s Children

Scala 2 (Quasiquotes)

Scala 3 (Quotes)

val symbol = c.weakTypeOf[A]
  .typeSymbol
if (symbol.isSealed) {
  // force Symbol initialization
  symbol.typeSignature
  val children = symbol.asClass
    .knownDirectSubclasses.map{sym =>
      val sEta = sym.asType
        .toType.etaExpand
      sEta.finalResultType
          .substituteTypes(
        sEta.baseType(symbol)
          .typeArgs.map(_.typeSymbol),
        c.weakTypeOf[A].typeArgs
      )
    }
  ...
} else {
  ...
}
val A = TypeRepr.of[A]
val sym = A.typeSymbol
if (sym.flags.is(Flags.Sealed)) {
  val c = sym.children.map: sub =>
    sub.primaryConstructor
        .paramSymss match:
      // manually reapply type params
      case syms :: _
      if syms.exists(_.isType) =>
        val param2tpe = sub.typeRef
          .baseType(sym).typeArgs
          .map(_.typeSymbol.name)
          .zip(A.typeArgs).toMap
        val types = syms.map(_.name)
          .map(param2tpe)
        sub.typeRef.appliedTo(types)
      // subtype is monomorphic
      case _ => sub.typeRef
  ...
} else { ... }
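For the simple monomorphic case, Scala 3's built-in Mirror can also enumerate a sealed trait's children at the library level, with no macro at all - though unlike the reflection code above it does not reapply type parameters for polymorphic hierarchies. A minimal Scala 3-only sketch:

```scala
import scala.deriving.Mirror
import scala.compiletime.constValueTuple

sealed trait Color
case object Red extends Color
case object Blue extends Color

// The labels of the direct children are available as a tuple of
// literal string types; materialize them as values at inline expansion.
inline def childNames[A](using m: Mirror.SumOf[A]): List[String] =
  constValueTuple[m.MirroredElemLabels].toList.map(_.toString)
```

`childNames[Color]` yields `List("Red", "Blue")` - enough for an enum of case objects, but not for the generic-subtype substitution the macro versions above perform.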

Jackson Pollock

Jackson Pollock’s image

(Not a macro, but looks very similar)

The dream

Let’s imagine such an API

The dream

Macro IO

val a = MIO {
  21
}
val b = MIO {
  37
}

a.map2(b)(_ + _) // applicative syntax
for {
  i <- MIO(1)
  j <- MIO(2)
} yield i + j // monadic syntax
List("1", "2", "3", "a", "b").parTraverse { a =>
  MIO(a.toInt)
} // .par* aggregates errors
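The semantics imagined above - flatMap short-circuiting while the applicative/par operations aggregate errors - can be sketched with a toy effect. This is a deliberately minimal stand-in, not Hearth's actual MIO:

```scala
// Toy MIO-like effect: flatMap stops at the first error,
// while map2 evaluates both sides and aggregates their errors.
final class MIO[A](thunk: () => Either[List[String], A]) {
  def run(): Either[List[String], A] = thunk()
  def map[B](f: A => B): MIO[B] = new MIO(() => run().map(f))
  def flatMap[B](f: A => MIO[B]): MIO[B] =
    new MIO(() => run().flatMap(a => f(a).run()))
  def map2[B, C](that: MIO[B])(f: (A, B) => C): MIO[C] = new MIO(() =>
    (run(), that.run()) match {
      case (Right(a), Right(b)) => Right(f(a, b))
      case (Left(e1), Left(e2)) => Left(e1 ++ e2) // both errors kept
      case (Left(e), _)         => Left(e)
      case (_, Left(e))         => Left(e)
    }
  )
}
object MIO {
  def apply[A](a: => A): MIO[A] =
    new MIO(() => try Right(a) catch { case e: Exception => Left(List(e.toString)) })
}

// An error-aggregating traversal in the spirit of parTraverse:
// built on map2, so it collects every failure, not just the first.
def parTraverse[A, B](as: List[A])(f: A => MIO[B]): MIO[List[B]] =
  as.foldRight(MIO(List.empty[B]))((a, acc) => f(a).map2(acc)(_ :: _))
```

Here `MIO(21).map2(MIO(37))(_ + _).run()` yields `Right(58)`, while traversing `List("1", "2", "a", "b")` with `s => MIO(s.toInt)` yields a `Left` carrying both parse failures.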

Logging

Log.namedScope("All logs here will share the same span") {
  Log.info("Some operation starting") >> // standalone log

    MIO("some operation")
      .log.info("Some operation ended") >> // log after IO

    Log.namedScope("Spans can be nested") {
      Log.info("Nested log") // we can nest as much as we want
    }
}
All logs here will share the same span:
├ [Info]  Some operation starting
├ [Info]  Some operation ended
└ Spans can be nested:
  └ [Info]  Nested log

Let’s assume that it can be a thing

// Yet another utility, because .map/.flatMap
// cannot handle this:
MIO.scoped { runSafe =>

  Expr.quote { // <- instead of '{} / q"..."
    new Show[A] {

      def show(a: A): String = Expr.splice { // <- instead of ${}
        runSafe {
          deriveShowBody(Expr.quote { a }) // : MIO[Expr[String]]
        } // : Expr[String]
      }
    }
  } // : Expr[Show[A]]
} // : MIO[Expr[Show[A]]]

And this as well:

val OptionType = Type.Ctor1.of[Option]
val EitherType = Type.Ctor2.of[Either]

Type[A] match {
  case OptionType(a) =>
    ... // a is A in Option[A]
  case EitherType(l, r) =>
    ... // l is L and r is R in Either[L, R]
  case _ =>
    ... // A is not an Option or Either
}

Imagine you created instances like this:

CaseClass.parse[A] match {
  case Some(caseClass) =>
   // A(summon[Arg1], summon[Arg2], ...)
   caseClass.construct { parameter =>
      import parameter.tpe.Underlying as Param // <- giving existential type a name!

      Expr.summonImplicit[Param] match {
        case Some(expr) => MIO.pure(expr)

        case None => MIO.fail(
          new Exception(s"No implicit for ${Type.prettyPrint[Param]}")
        )
      }
   }

  case None => MIO.fail(
    new Exception(s"Not a case class: ${Type.prettyPrint[A]}")
  )
} // : MIO[Expr[A]]

And pattern-matched like this:

Enum.parse[A] match {
  case Some(enumm) =>
    // expr match {
    //   case b: B => "B" + " : " + b.toString
    //   ...
    // }
    enumm.matchOn(expr) { matchedSubtype =>
      import matchedSubtype.{Underlying as B, value as b}
      // B <- named existential type
      // b: Expr[B]
      val bName = Expr(B.simpleName) // Expr[String]
      MIO {
        Expr.quote {
          Expr.splice { bName } + " : " + Expr.splice { b }.toString
        }
      }
    }

  case None => MIO.pure(Expr("not an enum"))
} // : MIO[Expr[String]]

And handled collections like this:

val expr: Expr[A] = ...

Type[A] match {
  case IsCollection(isCollection) =>
    import isCollection.{ Underlying as Item, value as isCollectionOfItem }
    // Now we can use `Item` as the type of the collection elements

    val iterable: Expr[Iterable[Item]] = isCollectionOfItem.asIterable(expr)
    Expr.quote {
      Expr.splice(iterable).map { (item: Item) => ... } // Type[Item] is handled
    }
  case _ => ...
}

And value classes like this:

val outer: Expr[A] = ...

Type[A] match {
  case IsValueType(isValueType) =>
    import isValueType.{ Underlying as Inner, value as outerIsValueType }
    // Now we can use Inner as the value class's underlying type

    val unwrapped: Expr[Inner] = outerIsValueType.unwrap(outer)
    outerIsValueType.wrap match {
      case CtorLikeOf.PlainValue(ctor, _) =>
        ctor(unwrapped) // <-- wrapped again, not really useful but possible
      case _ => // other constructors
    }
  case _ => ...
}

And integrated external libraries like this:

import cats.data.NonEmptyList
import eu.timepit.refined.Refined
import eu.timepit.refined.collection.NonEmpty
import eu.timepit.refined.numeric.Positive

case class WithNEL(values: NonEmptyList[Int])
case class RefinedPerson(
  name: String Refined NonEmpty,
  age: Int Refined Positive
)
import hearth.kindling.circederivation._
// No imports for derivation needed! Just add the integration as dependency.

// These just work — encoding, decoding, validation:
KindlingsEncoder.encode(WithNEL(NonEmptyList.of(1, 2, 3)))
// => {"values": [1, 2, 3]}

KindlingsDecoder.decode[WithNEL](Json.obj("values" -> Json.arr()))
// => Left(...) — rejects empty NonEmptyList!

KindlingsDecoder.decode[Int Refined Positive](Json.fromInt(-1))
// => Left(...) — validates the predicate!

And debugged it like this:

// Put outside of companion to prevent auto-summoning!
implicit val logDerivation: Show.LogDerivation = Show.LogDerivation
Log in terminal

And traced it like this:

Test / scalacOptions ++= Seq(
  "-Xmacro-settings:hearth.mioBenchmarkScopes=true",
  s"-Xmacro-settings:hearth.mioBenchmarkFlameGraphDir=${crossTarget.value / "flame-graphs"}"
)
Flame graph

Actually, it’s already possible

with Hearth

Hearth

  • cross-compilable macros: Scala 2.13 & 3

  • including Cross-Quotes: Expr.quote / Expr.splice working on both Scala versions

  • high-level APIs like: CaseClass, Enum, IsCollection, IsValueType, …​

  • MIO monad for laziness, structured logging and error aggregation

  • and flame graphs to give your macros observability and performance insights

Kindlings

The incubator for Hearth-based libraries

  • my ShowPretty

  • Circe, Jsoniter Scala

  • scala-xml, scala-yaml

  • my UBJson

  • my Avro4s port

  • Tapir Schema

  • Cats' Kittens

Circe derivation

  • Encoder[A], Decoder[A], Codec.AsObject[A]

  • all circe-generic-extras features on Scala 2 and Scala 3

  • unified API across Scala versions

  • value class unwrapping (broken in upstream Scala 3)

  • recursive types without Lazy wrappers

  • @fieldName annotation on both Scala 2 and 3 (upstream: Scala 2 only)

  • JVM + JS + Native

Jsoniter-Scala derivation

  • JsonValueCodec[A], JsonCodec[A], JsonKeyCodec[A]

  • virtually all JsonCodecMaker configuration options

  • unified API across Scala versions

  • recursive types without special flags

  • JVM + JS + Native

Tapir Schema derivation

  • reuses your Circe or jsoniter-scala configuration

  • schema always matches your codecs

  • correct generic type parameter names

// Your Tapir schema automatically matches your JSON codec config
def schema[A: Schema]: Schema[Foo[A]] = Schema.derived
// schema[String] will be named Foo[String] instead of Foo[A]!
  • no more config drift between schema and codec

Cats' Kittens port

And if you think it’s still too limited to prove anything useful beyond encoders and decoders…​

Monomorphic (kind *)

Polymorphic (kind * → *)

  • Show, Eq, Order

  • PartialOrder, Hash

  • Semigroup, Monoid

  • CommutativeSemigroup

  • CommutativeMonoid

  • alleycats.Empty

  • Functor, Contravariant, Invariant

  • Apply, Applicative

  • Foldable, Traverse

  • Reducible, NonEmptyTraverse

  • SemigroupK, MonoidK

  • alleycats.Pure, alleycats.EmptyK

  • NonEmptyAlternative (new!)

  • Alternative (new!)

  • alleycats.ConsK (fixed on Scala 3!)

All of this…​

  • sharing the type class derivation logic

  • sharing the unit tests

  • exact same API on Scala 2.13 and Scala 3

  • use them today on Scala 2.13 — migration to Scala 3 is not a blocker

The secret weapon

AI-assisted development

  • virtually every single one of these libraries was completely vibe-coded

  • over one week

  • using Claude Code to implement all of it

  • a single person can do it in one week, if really dedicated

  • libraries that took years to develop — ported in days

  • a lot of tokens burned, then bam — everyone benefits

  • the API is sane enough that AI can use it effectively

  • this is empowering for the community

Summary

  • macros can be better for users than Shapeless/Mirrors

  • Hearth makes macros approachable for library authors

  • Kindlings proves it works across many real-world libraries

  • AI makes the development fast

  • we can give ourselves better tools — today

Thank you