Advent of OCaml, part 1: setup and first puzzle
This is the first part in a series about learning a bit of OCaml by solving Advent of Code puzzles. The zeroth part is my previous post. Life got a bit in the way since I published that one so it took a while for me to sit down and write this, but I figured better late than never.
In this part, the goal is to get a development environment set up, with enough tooling and structure to lay the basis for building, testing, formatting, and benchmarking Advent of Code solutions in OCaml -- and to exercise it all by writing a solution to an actual puzzle! :)
Note: as mentioned in the zeroth part, I was planning to write down my notes and thoughts of my very first exposure to OCaml and its tools. Eventually I decided against it, because when I read over my notes I realized it was not that interesting to read. So this blog post is going to focus more on what I eventually ended up with.
Resources I'm using to learn
Using online resources is all well and good, but for this little adventure I prefer something more comprehensive and cohesive. I want a book! From my cursory browsing of the Internet, I've found two books that are often recommended for those that want to learn OCaml:
- OCaml Programming: Correct + Efficient + Beautiful by Michael R. Clarkson et al. It's the textbook for the course CS 3110 Data Structures and Functional Programming at Cornell University (and therefore the book is sometimes referred to as the "CS 3110" book, or by its old title Functional Programming in OCaml).
- Real World OCaml by Yaron Minsky and Anil Madhavapeddy.
I'm primarily going to use the latter as it seems more geared towards, well, real-world use of OCaml. It covers more than the core language, including chapters on testing, parsing, serialization, memory layout, etc. I might still refer to the CS 3110 book if I feel like I need more detail. Also, Real World OCaml is available as a PDF that I can read on my tablet/e-reader, which I much prefer over reading on a website, which is what CS 3110 seems tailored for.
Using Jane Street tools
I'll be upfront: most of this setup will be based on Jane Street technology. I'm not trying to flatter them in the hopes that senpai will notice me; it's simply the case that their name is all over the OCaml world and they put out high quality resources for everyone to use.
Jane Street contributed a great many tools to open source, such as the widely used OCaml build system Dune, as well as several useful libraries such as the standard library replacement Base. The Real World OCaml book I'm reading (see above) is authored by Yaron Minsky who is one of their long-time engineers. The book often refers to tools that Jane Street has put out, so I might as well use those tools.
The shopping list
Okay! So let's write a little list of all the things I need to get at the store -- er, want to set up -- for my Advent of Code solutions.
- A basic Dune project with a few dependencies.
- Unit tests that exercise my solutions, for both example inputs and real inputs.
- Good enough editor integration that I feel productive in iterating on writing code and running tests.
- Basic microbenchmarks that can measure the runtime and memory usage of my solutions, and do comparative benchmarking to figure out if I made my code faster or not.
One thing I'd love to have is detailed profiling of CPU and memory usage. I've taken a cursory glance at the tooling that's available and found the excellent-looking magic-trace tool. However, the only machines available to me are a Linux laptop with an AMD CPU and a macOS laptop with an M4 processor, so I can't use magic-trace (it relies on Intel Processor Trace). I might look into Linux's perf tool instead -- it would be a very good excuse to learn a useful tool! All else failing, and if I'm feeling like spending some money, I might rent a bare metal machine with an Intel CPU.
Step 0: setting up an OPAM switch
Before I can set up a Dune project, I need to install Dune. Oh, and OCaml. It seems that the recommended way of doing so is via OPAM.
OPAM is a package manager for OCaml. As far as I can tell, it is often used as a basis for installing OCaml, as opposed to installing it directly as a system package. OPAM can also be used to install OCaml packages, which may be various tools needed for OCaml development, or just regular applications written in OCaml.
OPAM lets you define "switches". In a switch you can install your packages, including the OCaml compiler itself, and isolate them so that they don't pollute the entire system or other switches. Since OPAM switches define both the version of the OCaml compiler and various tools and libraries, I view them as something like a mix between Python's virtual environments[^1] and Rust's rustup tool.
I installed opam using my system's package manager[^arch], and then followed the installation instructions for Real World OCaml.
$ sudo pacman -S opam # I use Arch btw
$ opam init
$ opam switch create default 5.3.0 # The latest available OCaml version at the time of writing.
I then set up my shell to invoke eval $(opam env) every time it starts, so that tools installed through OPAM are available.
Side note: the opam switch create command took a minute or two. It seemed to download, compile, and install the OCaml compiler from source -- and I'm quite impressed that a compiler for a language as advanced as OCaml can be compiled in such a short amount of time!
In addition to installing Dune and utop (a REPL for OCaml), I need to install a few tools so that my chosen editor can work nicely with OCaml code.
$ opam install merlin utop ocp-indent ocamlformat dune
We're ready to set up a Dune project!
Step 1: a basic Dune project
Following the Dune quickstart, I set up an executable project, which is the default. You can also set up library projects, and the quickstart page explains what the difference is. While I'm mostly intending to rely on unit tests for running my solutions, I found that having a binary executable was a convenient way to run microbenchmarks; more about that later.
$ dune init proj adventofocaml
A good sanity check at this point would be to see what happens if I just try to build the project.
$ dune build
The command produced no output, other than showing a build progress indicator for a split second. That's a good sign.
Looking around in the generated files a bit, the only skeleton code I can find is a simple Hello World program in bin/main.ml. The lib directory only holds a dune file (defining the build) but no source code. The test directory contains the empty file test_adventofocaml.ml along with a dune file. I find that a little disappointing, because as someone new to OCaml, it would be great to get an idea of what a "library" is in OCaml and what a "test" is.
Let's run the Hello World program:
$ dune exec bin/main.exe
Hello, World!
And the tests, even though there are none:
$ dune runtest
OK, so basic sanity checks are done.
Dune can manage dependencies, and technically we have a dependency on the OCaml compiler version[^compiler-version], so we should create a lockfile to set the versions of all our transitive dependencies, for reproducibility.
$ dune pkg lock
Solution for dune.lock:
- ocaml.5.3.0
- ocaml-base-compiler.5.3.0
- ocaml-compiler.5.3.0
- ocaml-config.3
The only change I made from all the defaults was to set implicit_transitive_deps to false in dune-project, since I think it's very good practice to declare your full set of dependencies in your build, and not rely on stuff implicitly being there. I'm surprised this isn't the default for new Dune projects, since it seems like the sane choice to me.
I know that I'm going to use at least base as a dependency (as a replacement for the standard library) and expect tests through ppx_expect, so let's add those.
diff --git a/dune-project b/dune-project
index 002a28a..6e053d1 100644
--- a/dune-project
+++ b/dune-project
@@ -21,7 +21,7 @@
(name adventofocaml)
(synopsis "Advent of Code solutions in OCaml")
(description "A set of Advent of Code solutions, created as a way to learn OCaml and some of its tooling.")
- (depends ocaml)
+ (depends base ppx_expect ocaml)
(tags
(adventofcode)))
followed by dune pkg lock:
$ dune pkg lock
Solution for dune.lock:
- base.v0.17.3
- base-unix.base
- csexp.1.5.2
- dune-configurator.3.20.2
- jane-street-headers.v0.17.0
- jst-config.v0.17.0
- ocaml.5.3.0
- ocaml-base-compiler.5.3.0
- ocaml-compiler.5.3.0
- ocaml-compiler-libs.v0.17.0
- ocaml-config.3
- ocaml_intrinsics_kernel.v0.17.1
- ppx_assert.v0.17.0
- ppx_base.v0.17.0
- ppx_cold.v0.17.0
- ppx_compare.v0.17.0
- ppx_derivers.1.2.1
- ppx_enumerate.v0.17.0
- ppx_expect.v0.17.3
- ppx_globalize.v0.17.2
- ppx_hash.v0.17.0
- ppx_here.v0.17.0
- ppx_inline_test.v0.17.1
- ppx_optcomp.v0.17.1
- ppx_sexp_conv.v0.17.1
- ppxlib.0.36.0
- ppxlib_jane.v0.17.4
- sexplib0.v0.17.0
- stdio.v0.17.0
- stdlib-shims.0.3.0
- time_now.v0.17.0
Notably, you also need to run dune build or a similar command to update the generated .opam file. I keep forgetting to do this.
diff --git a/adventofocaml.opam b/adventofocaml.opam
index e5f7e65..7a25981 100644
--- a/adventofocaml.opam
+++ b/adventofocaml.opam
@@ -12,6 +12,8 @@ doc: "https://github.com/saser/adventofocaml"
bug-reports: "https://github.com/saser/adventofocaml/issues"
depends: [
"dune" {>= "3.20"}
+ "base"
+ "ppx_expect"
"ocaml"
"odoc" {with-doc}
]
Checking the shopping list:
- A basic Dune project with a few dependencies.
- Unit tests that exercise my solutions, for both example inputs and real inputs.
- Good enough editor integration that I feel productive in iterating on writing code and running tests.
- Basic microbenchmarks that can measure the runtime and memory usage of my solutions, and do comparative benchmarking to figure out if I made my code faster or not.
Step 2: unit tests for running my solutions
One thing I'm really excited about with learning OCaml is to play around with "expect tests". They are described in a blog post called What if writing tests was a joyful experience? on the Jane Street blog[^2]. Helpfully, Dune has good support for expect tests, and Jane Street have open sourced their ppx_expect syntax extension that enables them, so let's go ahead and add them.
To run expect tests, we need three things, as described in Dune's documentation for Inline Expectation Tests:
- A dependency on ppx_expect.
- Enabling inline tests for our library.
- Enabling the ppx_expect preprocessor for our library.
Since we already declared a dependency on ppx_expect above, let's modify lib/dune to enable inline tests and the ppx_expect preprocessor:
diff --git a/lib/dune b/lib/dune
index ba8644b..52de447 100644
--- a/lib/dune
+++ b/lib/dune
@@ -1,2 +1,5 @@
(library
- (name adventofocaml))
+ (name adventofocaml)
+ (inline_tests)
+ (preprocess
+ (pps ppx_expect)))
I ran dune build here, which took a minute or so because it needed to compile everything that ppx_expect depends on.
Now we can create a basic expect test in lib/hello_expect.ml:
let%expect_test "greet" =
print_endline "Hello, expect tests!";
[%expect]
;;
and run it:
$ dune runtest
File "lib/hello_expect.ml", line 1, characters 0-0:
diff --git a/_build/default/lib/hello_expect.ml b/_build/.sandbox/b4e25f22f5ef55695e8a02d3a8b7f24d/default/lib/hello_expect.ml.corrected
index 93e17ef..2c91a0f 100644
--- a/_build/default/lib/hello_expect.ml
+++ b/_build/.sandbox/b4e25f22f5ef55695e8a02d3a8b7f24d/default/lib/hello_expect.ml.corrected
@@ -1,4 +1,4 @@
let%expect_test "greet" =
print_endline "Hello, expect tests!";
- [%expect]
+ [%expect {| Hello, expect tests! |}]
;;
Sweet, seems to work as expected! Now we can update the test with this diff, called "promoting":
$ dune promote
Promoting _build/default/lib/hello_expect.ml.corrected to
lib/hello_expect.ml.
It's also possible to combine these two steps into one with dune runtest --auto-promote.
Let's tick the second item off our shopping list:
- A basic Dune project with a few dependencies.
- Unit tests that exercise my solutions, for both example inputs and real inputs.
- Good enough editor integration that I feel productive in iterating on writing code and running tests.
- Basic microbenchmarks that can measure the runtime and memory usage of my solutions, and do comparative benchmarking to figure out if I made my code faster or not.
Step 3: editor integration
I've recently started using Spacemacs as my main editor, and it's quite enjoyable. It comes with an OCaml layer that I simply enabled.
Generally it all works out of the box. However, I found out the hard way that autoformatting doesn't work until you've created a file called .ocamlformat. You can create an empty file to get the default formatting config (called a "profile"), or you can give it contents like profile=foo. I decided to use profile=janestreet, just because it was available and I figure they probably have pretty good experience with maintaining readable OCaml code.
I ran into one snag, however. The Dune integration for Emacs comes with a command called dune-runtest-and-promote (bound to SPC m t p in Spacemacs by default), which runs dune runtest, then dune promote, and then refreshes the Emacs buffer. This command sounds perfect for getting the near-REPL-like experience that expect tests can offer. However, whenever I ran dune-runtest-and-promote, the compilation would seemingly just hang forever unless I killed it (SPC c k).
Debugging this took me a good long time, and I'm not entirely sure what's going on, but I think it's this: dune-runtest-and-promote uses the Emacs function compile, which is the standard way to run a compilation command. compile executes the command asynchronously, and as far as I can tell there's no way to wait for it to finish. dune-runtest-and-promote would run dune runtest using compile and, without waiting for it to finish, would very soon thereafter run dune promote. My hypothesis is that this triggers a deadlock in Dune where both invocations are waiting to acquire a lock on a file. I'm going to file an issue in https://github.com/ocaml/dune, which is where dune.el -- the Dune integration for Emacs -- lives.
My initial workaround for this was to make a local edit to dune-runtest-and-promote to use shell-command instead of compile, as the former is synchronous. It came at the expense of having Emacs freeze while dune runtest is running, which is a non-trivial amount of time, but I can live with that. However, using shell-command tended to mask compilation errors since the buffer holding the output of the shell command would quickly disappear or otherwise be unhelpful[^3].
The workaround I ended up choosing was to remove the part where dune-runtest-and-promote would run the promote. Now I press SPC m t p to run the tests, and SPC m t P (note: capital P), which is bound to dune-promote. This makes running the tests and accepting the output a two-command operation instead of one. I can live with that too.
Our shopping list is looking good now!
- A basic Dune project with a few dependencies.
- Unit tests that exercise my solutions, for both example inputs and real inputs.
- Good enough editor integration that I feel productive in iterating on writing code and running tests.
- Basic microbenchmarks that can measure the runtime and memory usage of my solutions, and do comparative benchmarking to figure out if I made my code faster or not.
Step 4: basic microbenchmarks
Real World OCaml doesn't have any chapters dedicated specifically to running benchmarks, but it does contain pointers to the Core_bench package, first seen in the subsection Benchmarking Pattern Matching of Chapter 26, The Compiler Backend: Bytecode and Native Code. That's enough for me to go on!
There's a Jane Street blog post called Core_bench: better micro-benchmarks through linear regression. It is very interesting in its own right, since it goes into detail about how to create accurate benchmarks, but it doesn't say much about how to use Core_bench. Besides, the blog post is quite old: it was published in 2014, 11 years ago at the time of writing, and a lot can happen in 11 years in the software world!
Fortunately for us, Core_bench is open source. Unfortunately for us, the documentation on how to use it is quite scarce, especially on how to use it together with the ppx_bench syntax extension referred to from the README. I tried it for a while, going on a not-that-wild-but-still-a-bit-wild goose chase across old GitHub issues and poking through the implementation of Core_bench to figure out how to make it work, but to no avail. It seems that ppx_bench is tailored towards Jane Street's internal environment and not really intended to be used in the outside world. Fair.
Instead, I used the more verbose API, inspired by the blog post mentioned above, to set up regular "commands" to run benchmarks.
First, we need a function to benchmark. So let's create a dummy one, in lib/hello_benchmarks.ml:
let sum_10k =
let i = ref 0 in
for j = 1 to 10_000 do
i := !i + j
done;
!i
;;
Then we create the benchmark program. In bin/benchmark.ml, I added this, pattern matching off of the (old) blog post:
open Adventofocaml
open Core_bench
let () =
Command_unix.run
(Bench.make_command
[ Bench.Test.create ~name:"sum" (fun () -> ignore Hello_benchmarks.sum_10k) ])
;;
together with its dune file:
(executable
(public_name adventofocaml)
(name benchmark)
(libraries adventofocaml core_bench core_unix.command_unix))
This part illustrates something that often trips me up when looking at OCaml code. Based on the two source files above, how can I tell where names like Command_unix and Bench.make_command are coming from? In this case, Command_unix is a module in the library core_unix.command_unix, but there's nothing that explicitly ties them together! The library could just as well have been called core_unix.whatever_else_idc_lolol and it would have compiled just fine. This irks me a lot because I'm coming from a background dominated by Go, which is delightfully explicit in how you refer to packages and names in those packages.
Then I can run the benchmark like so:
$ dune exec bin/benchmark.exe
Estimated testing time 10s (1 benchmarks x 10s). Change using '-quota'.
┌──────┬──────────┬────────────┐
│ Name │ Time/Run │ Percentage │
├──────┼──────────┼────────────┤
│ sum  │   1.14ns │    100.00% │
└──────┴──────────┴────────────┘
It is suspicious that the time/run is just over a nanosecond. I don't think the benchmark is measuring any real work: since sum_10k is a value rather than a function, it is computed once at module initialization, so the closure only ignores an already-computed int -- and the compiler may even have folded the whole thing into a constant at compile time. Either way, running benchmarks works, and that's what I care about.
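For what it's worth, a common fix (a sketch, not something from my actual setup) is to make the work a function and wrap its result in the stdlib's Sys.opaque_identity, which prevents the compiler from optimizing the computation away:

```ocaml
(* sum_10k as a function: the loop now runs on every call instead of
   once at module initialization. *)
let sum_10k () =
  let i = ref 0 in
  for j = 1 to 10_000 do
    i := !i + j
  done;
  !i

let () =
  (* Sys.opaque_identity hides the value from the optimizer, so the
     loop can't be constant-folded away. *)
  ignore (Sys.opaque_identity (sum_10k ()))
```

In the benchmark itself, the closure would then be (fun () -> ignore (Sys.opaque_identity (sum_10k ()))).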
- A basic Dune project with a few dependencies.
- Unit tests that exercise my solutions, for both example inputs and real inputs.
- Good enough editor integration that I feel productive in iterating on writing code and running tests.
- Basic microbenchmarks that can measure the runtime and memory usage of my solutions, and do comparative benchmarking to figure out if I made my code faster or not.
Looks like we're done!
Wait. I feel like I'm missing something.
Step 5: solving a puzzle
Much like when I go grocery shopping, I realized only after I was done that there was something I forgot to get. In this case, it's the most important part that ties everything together: an actual solution to a puzzle.
To make this easy, let's pick year 2024, day 1. The description of the problem is on that page; go read it if you're not familiar with it.
The first thing I want to do is to implement some basic parsing. I want to take the input, given as one big string, and produce two lists, left and right, representing the two columns in the input. Since we're converting text into integers, parsing could technically fail, so I'm going to make use of Or_error.t to represent the error case here. Let's define the signature of the parse function:
open Base
open Stdio
let parse (_input : string) : (int list * int list) Or_error.t =
Error (Error.of_string "unimplemented")
;;
(I know that generally you can let type inference do its thing, but I wanted to be explicit here since there's no implementation to run type inference on!)
I also defined something to hold the example input from the problem description:
let example_input =
String.strip
{|
3 4
4 3
2 5
1 3
3 9
3 3
|}
;;
And a simple expect test to exercise the parse function:
let%expect_test "parse" =
let test input =
match parse input with
| Ok (left, right) ->
print_endline ("left: " ^ Sexp.to_string_hum (List.sexp_of_t Int.sexp_of_t left));
print_endline ("right: " ^ Sexp.to_string_hum (List.sexp_of_t Int.sexp_of_t right))
| Error err -> print_endline ("error: " ^ Error.to_string_hum err)
in
test example_input;
[%expect {| error: unimplemented |}]
;;
Writing this made me think "how difficult can it be to print a list of ints?!". I assume I'm just doing it incorrectly. This is at least enough to get started.
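(As an aside: with just the stdlib, and if you don't need sexps, an int list can be printed a lot more directly. A sketch -- the helper name is mine:)

```ocaml
(* Render an int list in the same "(3 4 2)" shape that sexps produce:
   convert each element to a string and join with spaces. *)
let string_of_int_list l =
  "(" ^ String.concat " " (List.map string_of_int l) ^ ")"

let () = print_endline (string_of_int_list [ 3; 4; 2 ]) (* prints (3 4 2) *)
```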
OK! Now we can write the parsing. I started out with literally just this:
(* Removed the type annotations for now; it's inferred to be [string -> 'a Or_error.t]. *)
let parse input =
let lines = String.split_lines input in
List.iter lines ~f:(fun line -> print_endline ("line: " ^ line));
Error (Error.of_string "unimplemented")
;;
Then, when I ran the expect test, I got:
[%expect {|
line: 3 4
line: 4 3
line: 2 5
line: 1 3
line: 3 9
line: 3 3
error: unimplemented
|}]
which is what I expected. And it's great that the test is showing me that! I'm going to continue sprinkling print statements in the implementation of parse as I'm figuring it out, and then re-running the expect test to see that I'm heading in the right direction.
Eventually I ended up with:
let parse input =
let numbers =
String.split_lines input
|> List.concat_map ~f:(String.split ~on:' ')
|> List.filter_map ~f:Int.of_string_opt
in
let rec loop (left, right) l =
match l with
| [] -> Ok (left, right)
| a :: b :: tl -> loop (a :: left, b :: right) tl
| _ ->
Or_error.errorf
"input contains %d numbers; expected an even number"
(List.length numbers)
in
loop ([], []) numbers
;;
and the test shows it working:
let%expect_test "parse" =
let test input =
match parse input with
| Ok (left, right) ->
print_endline ("left: " ^ Sexp.to_string_hum (List.sexp_of_t Int.sexp_of_t left));
print_endline ("right: " ^ Sexp.to_string_hum (List.sexp_of_t Int.sexp_of_t right))
| Error err -> print_endline ("error: " ^ Error.to_string_hum err)
in
test example_input;
[%expect
{|
left: (3 3 1 2 4 3)
right: (3 9 3 5 3 4)
|}]
;;
One neat thing about this test is that it surfaced to me something I hadn't considered: that the lists returned from parse are in reverse order! In this case it's fine because the order doesn't matter, but it's something I otherwise probably wouldn't have seen as quickly.
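If the order ever did matter, the usual fix is to List.rev the accumulators in the base case. A sketch with the same recursion shape, using plain stdlib lists (the function name is mine):

```ocaml
(* Split a flat list of pairs into two columns, reversing the
   accumulators at the end to restore input order. *)
let unzip_pairs numbers =
  let rec loop (left, right) l =
    match l with
    | [] -> (List.rev left, List.rev right)
    | a :: b :: tl -> loop (a :: left, b :: right) tl
    | _ -> invalid_arg "odd number of elements"
  in
  loop ([], []) numbers

let () =
  assert (unzip_pairs [ 3; 4; 4; 3; 2; 5 ] = ([ 3; 4; 2 ], [ 4; 3; 5 ]))
```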
Now that we have a parsing function, let's define a part1 function that solves the whole problem:
let part1 input =
let open Or_error.Let_syntax in
let%bind left, right = parse input in
let left = List.sort ~compare:Int.compare left in
let right = List.sort ~compare:Int.compare right in
let rec loop left right sum =
match left, right with
| [], [] -> Ok sum
| a :: tl_left, b :: tl_right -> loop tl_left tl_right (sum + Int.abs (a - b))
| _ -> failwith "unreachable"
in
loop left right 0
;;
And its expect test:
let%expect_test "part1" =
let test input =
match part1 input with
| Ok sum -> printf "%d\n" sum
| Error err -> print_endline ("error: " ^ Error.to_string_hum err)
in
test example_input;
[%expect {| 11 |}]
;;
which is the correct answer!
Note: the let%bind syntax is another syntax extension developed by Jane Street and available as open source. It's called ppx_let. I added it to my dune-project file and then as another preprocessor in lib/dune.
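For the curious: let%bind x = e in body desugars to roughly bind e ~f:(fun x -> body), short-circuiting on the first error. The same pattern can be written by hand with the stdlib's Result.bind; a sketch (the function is mine, not part of the solution):

```ocaml
(* Parse two integers and add them, stopping at the first failure --
   the plumbing that let%bind hides behind nicer syntax. *)
let add_parsed a b =
  Result.bind
    (Option.to_result ~none:"bad int" (int_of_string_opt a))
    (fun x ->
      Result.bind
        (Option.to_result ~none:"bad int" (int_of_string_opt b))
        (fun y -> Ok (x + y)))

let () =
  assert (add_parsed "1" "2" = Ok 3);
  assert (add_parsed "1" "oops" = Error "bad int")
```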
Now let's run it against the full input as well. I was too lazy to figure out how to read files in the test, so I just dumped the full input as an enormous inline string literal:
let full_input =
String.strip
{|
88159 51481
66127 31794
71500 84893
...
|}
and then expanded the test to exercise the full input:
test full_input;
[%expect {| 1579939 |}]
which is the right answer!
Now that we have a solution for part 1, let's solve part 2. Here's what I came up with. I'm not proud of it, but it works. It's probably just a case of me not having any real experience with OCaml, but I find this code pretty difficult to read, especially because it uses a very "functional" style of programming that I'm not used to (and I don't even know if it's idiomatic).
let part2 input =
let open Or_error.Let_syntax in
let%bind left, right = parse input in
let freq =
List.fold right ~init:[] ~f:(fun alist n ->
let k = Option.value (List.Assoc.find alist ~equal:Int.equal n) ~default:0 in
List.Assoc.add alist ~equal:Int.equal n (k + 1))
in
let scores =
List.map left ~f:(fun n ->
let k = Option.value (List.Assoc.find freq ~equal:Int.equal n) ~default:0 in
n * k)
in
let total_score = List.sum (module Int) scores ~f:Fn.id in
Ok total_score
;;
Together with its test, which gives us the right answers:
let%expect_test "part2" =
let test input =
match part2 input with
| Ok sum -> printf "%d\n" sum
| Error err -> print_endline ("error: " ^ Error.to_string_hum err)
in
test example_input;
[%expect {| 31 |}];
test full_input;
[%expect {| 20351745 |}]
;;
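As an aside, the assoc-list frequency count above makes part2 quadratic, since every lookup scans the list. The same idea with a hash table is roughly linear; here's a sketch using the stdlib's Hashtbl (Base's Hashtbl would look similar but with a different API), with the function name being mine:

```ocaml
(* Part 2 as a frequency count: tally occurrences of each value in
   [right], then sum n * count(n) over [left]. *)
let similarity_score left right =
  let freq = Hashtbl.create (List.length right) in
  List.iter
    (fun n ->
      let k = Option.value (Hashtbl.find_opt freq n) ~default:0 in
      Hashtbl.replace freq n (k + 1))
    right;
  List.fold_left
    (fun acc n ->
      acc + (n * Option.value (Hashtbl.find_opt freq n) ~default:0))
    0 left

let () =
  (* The example input's columns give the expected answer, 31. *)
  assert (similarity_score [ 3; 4; 2; 1; 3; 3 ] [ 4; 3; 5; 3; 9; 3 ] = 31)
```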
The only thing that remains now is to benchmark it, so I modified bin/benchmark.ml:
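I didn't save the diff, but the modified file would look roughly like this. The module name Year2024_day01 and the full_input value are my guesses at the layout, chosen to match the benchmark names in the output below:

```ocaml
open Adventofocaml
open Core_bench

(* One Bench.Test per part; each closure runs the solution end to end,
   parsing included, against the full puzzle input. *)
let () =
  Command_unix.run
    (Bench.make_command
       [ Bench.Test.create ~name:"year2024_day01_part1" (fun () ->
           ignore (Year2024_day01.part1 Year2024_day01.full_input))
       ; Bench.Test.create ~name:"year2024_day01_part2" (fun () ->
           ignore (Year2024_day01.part2 Year2024_day01.full_input))
       ])
```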
$ dune exec bin/benchmark.exe -- -quota 2s
Estimated testing time 4s (2 benchmarks x 2s). Change using '-quota'.
┌──────────────────────┬────────────┬────────────┬──────────┬──────────┬────────────┐
│ Name                 │   Time/Run │    mWd/Run │ mjWd/Run │ Prom/Run │ Percentage │
├──────────────────────┼────────────┼────────────┼──────────┼──────────┼────────────┤
│ year2024_day01_part1 │   339.01us │   140.48kw │   6.15kw │   6.15kw │      5.93% │
│ year2024_day01_part2 │ 5_712.39us │ 1_154.22kw │  20.11kw │  20.11kw │    100.00% │
└──────────────────────┴────────────┴────────────┴──────────┴──────────┴────────────┘
Awesome! Who doesn't like benchmarking numbers? :)
I'm vaguely aware that "mWd" and "mjWd" mean "minor words" and "major words", which are fundamental concepts in how OCaml manages memory, and I think "Prom" refers to promotions from the minor heap to the major heap, but I don't know anything more than that. I'll dive into that at some other point. I'm also not quite sure what the percentage means, since the sum of the percentages is more than 100%. I'm sure this is described in dune exec bin/benchmark.exe -- -help; I haven't looked.
Next steps
The next thing I'm going to do is to try to profile and optimize my solutions, especially for part 2. My solutions use regular OCaml lists, which are essentially implemented as linked lists in typical functional programming fashion, but there are also arrays in Base[^4] that I might be able to use. I'm going to do that in the next post.
[^1]: It should be noted that I have written very little Python in my life, and only slightly more Rust, so I'm not well versed in their tooling either. Take what I say with a grain of salt.
[^2]: The blog post is a short and great read, but if you really want the TL;DR, it is that you can write tests by treating your test code as a kind of REPL where you run your code and capture its printed output. If the output looks good, then the next time you run your code it passes the test if it produces the same output. If you change your tests or your code under test, you can easily just run it again and capture the new output. If you've ever worked with "snapshot" or "golden file" tests, the idea is very much the same, but the outputs are inlined with the test code instead of in separate files -- this is a crucial difference that changes the workflow significantly!
[^3]: At least that's my recollection of it. It was a while ago so I don't remember exactly.
[^4]: As soon as I wrote this sentence, Motörhead's Ace of Spades started playing in my head (but as "arrays of Base") and I was immediately transported back to being like 9 years old and playing the first level in Tony Hawk's Pro Skater 3 over and over again in the demo provided on a CD that came with some PC magazine. Good times.