diff --git a/blogware/symtab.go b/blogware/symtab.go
index 2bdf828..def6e29 100644
--- a/blogware/symtab.go
+++ b/blogware/symtab.go
@@ -50,7 +50,7 @@ var (
 	SymUnderline = BuiltinCmd("u", ArgTypeSeq)
 	SymNormal = BuiltinCmd("normal", ArgTypeSeq)
 	SymEmphasis = BuiltinCmd("emph", ArgTypeSeq)
-	SymSmallCaps = BuiltinCmd("sc", ArgTypeSeq)
+	SymSmallCaps = BuiltinCmd("textsc", ArgTypeSeq)
 	SymCircled = BuiltinCmd("circled", ArgTypeNum)
 	SymCode = BuiltinCmd("code", ArgTypeSeq)
 	SymCenter = BuiltinCmd("center", ArgTypeSeq)
diff --git a/posts/01-effective-rust-canisters.tex b/posts/01-effective-rust-canisters.tex
index 6f7d817..85950e4 100644
--- a/posts/01-effective-rust-canisters.tex
+++ b/posts/01-effective-rust-canisters.tex
@@ -719,7 +719,7 @@ \section{infra}{Infrastructure}
 \subsection{builds}{Builds}
 People using your canister might want to verify that it does what it claims to do (especially if the canister moves people's tokens around).
-The Internet Computer allows anyone to inspect the \sc{sha256} hash sum of the canister WebAssembly module.
+The Internet Computer allows anyone to inspect the \textsc{sha256} hash sum of the canister WebAssembly module.
 However, there are no good tools yet to review the canister's source code.
 The developer is responsible for providing a reproducible way of building a WebAssembly module from the published source code.
@@ -830,7 +830,7 @@ \subsection{upgrades}{Upgrades}
 \item
   Changing the layout of your data might be infeasible.
   It will simply be too much work for a canister to complete the data migration in one go.
-  Imagine writing a program that reformats an eight-gigabyte disk from \sc{fat32} to \sc{ntfs} without losing any data.
+  Imagine writing a program that reformats an eight-gigabyte disk from \textsc{fat32} to \textsc{ntfs} without losing any data.
   By the way, that program must complete in under 5 seconds.
 \item
   You must think carefully about the backward compatibility of your data structures.
@@ -843,7 +843,7 @@ \subsection{upgrades}{Upgrades}
 \subsection{observability}{Observability}
-At \href{https://dfinity.org/}{\sc{dfinity}}, we use metrics extensively and monitor all our production services.
+At \href{https://dfinity.org/}{\textsc{dfinity}}, we use metrics extensively and monitor all our production services.
 Metrics are indispensable for understanding the behaviors of a complex distributed system.
 Canisters are not unique in this regard.
diff --git a/posts/03-rust-packages-crates-modules.tex b/posts/03-rust-packages-crates-modules.tex
index 73b8bde..e163fb7 100644
--- a/posts/03-rust-packages-crates-modules.tex
+++ b/posts/03-rust-packages-crates-modules.tex
@@ -261,7 +261,7 @@ \section{code-organization-advice}{Advice on code organization}
   Especially libraries should expose full flexibility to their users, and not use implicit logging behaviour.
   Usually \code{Logger} instances fit pretty neatly into data structures in your code representing resources, so it's not that hard to pass them in constructors, and use \code{info!(self.log, ...)} everywhere.
-}{\href{https://github.com/slog-rs/slog/wiki/FAQ#do-i-have-to-pass-logger-around}{\code{slog} \sc{faq}}}
+}{\href{https://github.com/slog-rs/slog/wiki/FAQ#do-i-have-to-pass-logger-around}{\code{slog} \textsc{faq}}}
 By passing state implicitly, you gain temporary convenience but make your code less clear, less testable, and more error-prone.
 Every type of resource we passed implicitly\sidenote{sn-resource-types}{\href{https://crates.io/crates/slog-scope}{scoped} loggers, \href{https://crates.io/crates/prometheus}{Prometheus} metrics registries, \href{https://crates.io/crates/rayon}{Rayon} thread pools, \href{https://crates.io/crates/tokio}{Tokio} runtimes, to name a few.} caused hard-to-diagnose issues and wasted a lot of engineering time.
diff --git a/posts/04-square-joy-trapped-rain-water.tex b/posts/04-square-joy-trapped-rain-water.tex
index 52e7413..94dd7e7 100644
--- a/posts/04-square-joy-trapped-rain-water.tex
+++ b/posts/04-square-joy-trapped-rain-water.tex
@@ -23,12 +23,12 @@ \section{the-why}{But why?}
 Idiomatic solutions require knowledge of many little tricks array-wrangling wizards developed in their chambers over the last fifty years.
 I would love to learn or rediscover these tricks, and I hope you might derive some pleasure and insights from reading about my little discoveries.
-In this article, we'll use the \href{https://www.jsoftware.com/#/}{\sc{j} programming language}.
-Why \sc{j} and not, say, \href{https://dyalog.com/}{\sc{apl}}?
-\sc{j} is easier to type on most keyboards, and it's \href{https://github.com/jsoftware/jsource}{open source}.
+In this article, we'll use the \href{https://www.jsoftware.com/#/}{\textsc{j} programming language}.
+Why \textsc{j} and not, say, \href{https://dyalog.com/}{\textsc{apl}}?
+\textsc{j} is easier to type on most keyboards, and it's \href{https://github.com/jsoftware/jsource}{open source}.
-The code in this article will look like line noise if you aren't familiar with \sc{j}.
-Don't be discouraged; this reaction is typical when working with \sc{j}.
+The code in this article will look like line noise if you aren't familiar with \textsc{j}.
+Don't be discouraged; this reaction is typical when working with \textsc{j}.
 I'll explain most of the steps, but the steps might still look like black magic sometimes.
 My goal is not to explain every aspect of the language but to demonstrate the problem-solving approach.
@@ -79,9 +79,9 @@ \section{a-solution}{A solution}
 \section{translating-to-j}{Translating our idea to J}
-\sc{j} is an interpreted language and it has a \href{https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop}{\sc{repl}}.
-It's pretty standard for \sc{j} programmers to build solutions incrementally by trying snippets of code in the \sc{repl} and observing the effects.
-The code in this article is also an interactive \sc{repl} session that you can replicate locally.
+\textsc{j} is an interpreted language and it has a \href{https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop}{\textsc{repl}}.
+It's pretty standard for \textsc{j} programmers to build solutions incrementally by trying snippets of code in the \textsc{repl} and observing the effects.
+The code in this article is also an interactive \textsc{repl} session that you can replicate locally.
 Let us get some data to play with.
 \begin{code}[j]
@@ -107,10 +107,10 @@ \section{translating-to-j}{Translating our idea to J}
 Wait, where is all the code?
 Let me break it down.
-In \sc{j}, \href{https://code.jsoftware.com/wiki/Vocabulary/gtdot#dyadic}{\code{>.} (max)} is a verb (\sc{j} word for "function") that, when you use it dyadically (\sc{J} word for ``with two arguments''), computes the maximum of the arguments.
+In \textsc{j}, \href{https://code.jsoftware.com/wiki/Vocabulary/gtdot#dyadic}{\code{>.} (max)} is a verb (\textsc{j} word for ``function'') that, when you use it dyadically (\textsc{j} word for ``with two arguments''), computes the maximum of the arguments.
 It's easy to guess that \href{https://code.jsoftware.com/wiki/Vocabulary/ltdot#dyadic}{\code{<.} (min)} is an analogous verb that computes the minimum.
-The single character \href{https://code.jsoftware.com/wiki/Vocabulary/slash}{\code{/} (insert)} is an adverb (\sc{j} word for ``function modifier'') that takes a dyadic verb to the left and turns it into a verb that folds an entire array.
+The single character \href{https://code.jsoftware.com/wiki/Vocabulary/slash}{\code{/} (insert)} is an adverb (\textsc{j} word for ``function modifier'') that takes a dyadic verb to the left and turns it into a verb that folds an entire array.
 Why is it called ``insert''?
 Because it inserts the verb between elements of the array it operates on.
 For example, summing up an array is just \code{+/}.
@@ -179,7 +179,7 @@ \section{translating-to-j}{Translating our idea to J}
 6
 \end{verbatim}
-The full implementation of our idea now fits into 12 \sc{ascii} characters.
+The full implementation of our idea now fits into 12 \textsc{ascii} characters.
 One surprising property of array languages is that it's often not worth naming functions.
 Their entire body is shorter and more expressive than any name we could come up with.
@@ -297,13 +297,13 @@ \section{drawing-solutions}{Drawing solutions}
 █░██░██████
 \end{verbatim}
-\code{ucp} is a built-in verb that constructs an array of Unicode codepoints from a \sc{utf}-8 encoded string.
+\code{ucp} is a built-in verb that constructs an array of Unicode codepoints from a \textsc{utf}-8 encoded string.
 \href{https://code.jsoftware.com/wiki/Vocabulary/curlylf#dyadic}{\code{\{} (from)} is at the heart of our drawing trick.
 This verb selects items from the right argument according to the indices from the left argument.
 The effect is that all zeros got replaced with a space, ones---with a watery-looking glyph, and twos---with a solid rectangle.
 Our pseudo-graphics look quite impressive already, but we can do even better.
-\sc{j} comes with a convenient \href{https://code.jsoftware.com/wiki/Studio/Viewmat}{viewmat} library that can visualize arrays.
+\textsc{j} comes with a convenient \href{https://code.jsoftware.com/wiki/Studio/Viewmat}{viewmat} library that can visualize arrays.
 \begin{code}[j]
 require 'viewmat'
@@ -410,7 +410,7 @@ \subsection{solution-3d}{Solution}
   \item Iterate the previous step until the distance grid converges.
 \end{itemize}
-Let's put this idea in \sc{j} now.
+Let's put this idea in \textsc{j} now.
 We start by constructing the initial matrix of distances.
 We'll use the \href{https://code.jsoftware.com/wiki/Vocabulary/dollar}{\code{\$} (shape of, reshape)} verb for that.
@@ -510,7 +510,7 @@ \subsection{solution-3d}{Solution}
 step =. >. ((_1 & (|.!.0)) <. (1 & (|.!.0)) <. (_1 & (|.!.0)"1) <. (1 & (|.!.0)"1))
 \end{verbatim}
-To apply this function iteratively, we'll use the \href{https://code.jsoftware.com/wiki/Vocabulary/hatco}{\code{^:} (power of verb)} conjunction (another \sc{j} word for ``verb modifier'').
+To apply this function iteratively, we'll use the \href{https://code.jsoftware.com/wiki/Vocabulary/hatco}{\code{^:} (power of verb)} conjunction (another \textsc{j} word for ``verb modifier'').
 If we raise a verb to power \math{N}, we get a verb that applies the original verb \math{N} times in a row.
 If we raise a verb to infinite power \href{https://code.jsoftware.com/wiki/Vocabulary/under}{\code{_} (infinity)}, the original verb gets applied until the computation reaches a fixed point.
@@ -636,13 +636,13 @@ \section{back-to-2d}{Looking back at Flatland}
 \end{figure}
 \section{where-to-go-next}{Where to go next}
-That was all the \sc{j} magic for today!
+That was all the \textsc{j} magic for today!
 If you are confused and intrigued, I can recommend the following resources:
 \begin{itemize}
   \item Solve this problem on \href{https://leetcode.com/problems/trapping-rain-water/}{Leetcode}.
   \item Watch \href{https://youtu.be/ftcIcn8AmSY}{Four Solutions to a Trivial Problem}, a talk by Guy Steele where he explores the problem from different angles.
-  \item Read some \href{https://code.jsoftware.com/wiki/Books}{Books on \sc{j}}.
+  \item Read some \href{https://code.jsoftware.com/wiki/Books}{Books on \textsc{j}}.
   \item Listen to the \href{https://arraycast.com/}{Arraycast podcast}.
 \end{itemize}
 \end{document}
diff --git a/posts/05-debug-like-feynman.tex b/posts/05-debug-like-feynman.tex
index 9d0eac0..75be7b5 100644
--- a/posts/05-debug-like-feynman.tex
+++ b/posts/05-debug-like-feynman.tex
@@ -135,10 +135,10 @@ \section{know-your-data}{Know your data}
 About 40 seconds after the explosion the air blast reached me.
 I tried to estimate its strength by dropping from about six feet small pieces of paper before, during and after the passage of the blast wave.
 Since at the time, there was no wind I could observe very distinctly and actually measure the displacement of the pieces of paper that were in the process of falling while the blast was passing.
-  The shift was about 2½ meters, which, at the time, I estimated to correspond to the blast that would be produced by ten thousand tons of \sc{t.n.t.}
+  The shift was about 2½ meters, which, at the time, I estimated to correspond to the blast that would be produced by ten thousand tons of \textsc{t.n.t.}
 }{Enrico Fermi, \href{https://www.atomicarchive.com/resources/documents/trinity/fermi.html}{My Observations During the Explosion at Trinity on July 16, 1945}}
-A guy from the \sc{ops} team stops by your desk.
+A guy from the \textsc{ops} team stops by your desk.
 You feel uncomfortable because you forgot his name again.
 ``Hi! I need to buy machines for that new project you are working on.
@@ -182,7 +182,7 @@ \section{know-your-data}{Know your data}
 The \href{https://carlos.bueno.org/optimization/}{Mature Optimization Handbook} will help you to get started.
 Having all these data at your disposal will enable you to perform \href{https://en.wikipedia.org/wiki/Back-of-the-envelope_calculation}{back-of-envelope calculations}, of which Enrico Fermi was an absolute master.
-Next time that \sc{ops} guy\sidenote{sn-josh}{Josh. His name is Josh.} comes to your desk, you will have an answer for him.
+Next time that \textsc{ops} guy\sidenote{sn-josh}{Josh. His name is Josh.} comes to your desk, you will have an answer for him.
 \section{debug-mental-models}{Debug mental models}
@@ -287,20 +287,20 @@ \section{formalize-your-models}{Formalize your models}
 But it gets better.
 Once you have a formal specification, you can feed it to a computer and start asking interesting questions.
-\href{https://lamport.azurewebsites.net/video/videos.html}{\sc{tla}+} is a powerful and accessible toolbox that can help you write and check formal specifications.
+\href{https://lamport.azurewebsites.net/video/videos.html}{\textsc{tla}+} is a powerful and accessible toolbox that can help you write and check formal specifications.
 This is the system that the Amazon Web Services team used to build their critical systems (see \href{https://lamport.azurewebsites.net/tla/formal-methods-amazon.pdf}{How Amazon Web Services Uses Formal Methods}, Communications of the ACM, Vol. 58 No. 4, Pages 66-73).
 \href{https://www.linkedin.com/in/chenghuang/}{Cheng Huang}, a principal engineering manager at Microsoft, \href{https://lamport.azurewebsites.net/tla/industrial-use.html}{wrote}:
 \blockquote{
-  \sc{tla}+ uncovered a safety violation even in our most confident implementation.
+  \textsc{tla}+ uncovered a safety violation even in our most confident implementation.
   We had a lock-free data structure implementation which was carefully design \& implemented, went through thorough code review, and was tested under stress for many days.
   As a result, we had high confidence about the implementation.
-  We eventually decided to write a \sc{tla}+ spec, not to verify correctness, but to allow team members to learn and practice PlusCal.
+  We eventually decided to write a \textsc{tla}+ spec, not to verify correctness, but to allow team members to learn and practice PlusCal.
   So, when the model checker reported a safety violation, it really caught us by surprise.
-  This experience has become the aha moment for many team members and their de facto testimonial about \sc{tla}+.
-}{Leslie Lamport, \href{https://lamport.azurewebsites.net/tla/industrial-use.html}{Industrial Use of \sc{tla}+}}
+  This experience has become the aha moment for many team members and their de facto testimonial about \textsc{tla}+.
+}{Leslie Lamport, \href{https://lamport.azurewebsites.net/tla/industrial-use.html}{Industrial Use of \textsc{tla}+}}
-I wish I have learned about \sc{tla}+ much earlier in my career.
+I wish I had learned about \textsc{tla}+ much earlier in my career.
 Unlike other formal methods I tried, this tool is easy to pick up.
 \section{question-method}{Question your method}
@@ -324,7 +324,7 @@ \section{question-method}{Question your method}
 How many pictures have computers on them?
 The amount of material on software development methodology is overwhelming.
-\href{https://agilemanifesto.org/}{Agile}, \href{https://en.wikipedia.org/wiki/Extreme_programming}{\sc{xp}}, \href{https://scrum.org}{Scrum}, \href{https://en.wikipedia.org/wiki/Kanban_(development)}{Kanban}, \href{https://www.lean.org/}{Lean}, \href{https://en.wikipedia.org/wiki/Test-driven_development}{\sc{tdd}}, \href{https://en.wikipedia.org/wiki/Behavior-driven_development}{\sc{bdd}}, and the list goes on.
+\href{https://agilemanifesto.org/}{Agile}, \href{https://en.wikipedia.org/wiki/Extreme_programming}{\textsc{xp}}, \href{https://scrum.org}{Scrum}, \href{https://en.wikipedia.org/wiki/Kanban_(development)}{Kanban}, \href{https://www.lean.org/}{Lean}, \href{https://en.wikipedia.org/wiki/Test-driven_development}{\textsc{tdd}}, \href{https://en.wikipedia.org/wiki/Behavior-driven_development}{\textsc{bdd}}, and the list goes on.
 It is easy to get the impression that everyone knows the best way to develop software but you.
 This impression is false.
 No one has a clue.
diff --git a/posts/06-ic-orthogonal-persistence.tex b/posts/06-ic-orthogonal-persistence.tex
index 8835cc4..0f18473 100644
--- a/posts/06-ic-orthogonal-persistence.tex
+++ b/posts/06-ic-orthogonal-persistence.tex
@@ -20,7 +20,7 @@ \section{op}{Orthogonal persistence}
 \begin{figure}
 \marginnote{mn-ps-algol}{
-  A slightly modified snippet of code in \href{https://en.wikipedia.org/wiki/PS-algol}{\sc{ps}-algol} from the paper ``An Approach to Persistent Programming'' by Atkinson et al.
+  A slightly modified snippet of code in \href{https://en.wikipedia.org/wiki/PS-algol}{\textsc{ps}-algol} from the paper ``An Approach to Persistent Programming'' by Atkinson et al.
   Note explicit calls to \code{commit} and \code{abort} procedures.
 }
 \begin{code}[algol]
@@ -109,7 +109,7 @@ \section{snapshots-deltas}{Snapshots and deltas}
 The implementation divides the contents of each memory into 4096-byte chunks called \emph{pages}.
 When the actor executes a message, the system automatically detects the memory pages that the actor modifies or \emph{dirties}.
-The system uses low-level \sc{unix api}s to detect page modifications.
+The system uses low-level \textsc{unix api}s to detect page modifications.
 Since most operating systems operate on 4096-byte memory pages, using the same page size in the replica is the most natural choice.
 The memory snapshot of an actor is a map from a page index to the page contents.
@@ -160,12 +160,12 @@ \section{detecting-memory-writes}{Detecting memory writes}
   This approach works on all platforms but can slow down execution by an order of magnitude.
 \item
   Use memory protection and signal handlers to detect memory reads and writes at runtime.
-  This approach works well on \sc{unix} platforms and is reasonably efficient.
+  This approach works well on \textsc{unix} platforms and is reasonably efficient.
   We shall discuss this approach in more detail shortly.
 \item
-  Implement a custom filesystem backend as a \href{https://en.wikipedia.org/wiki/Filesystem_in_Userspace}{\sc{fuse}} library, then \href{https://man7.org/linux/man-pages/man2/mmap.2.html}{memory-map} actor memory as a virtual file.
+  Implement a custom filesystem backend as a \href{https://en.wikipedia.org/wiki/Filesystem_in_Userspace}{\textsc{fuse}} library, then \href{https://man7.org/linux/man-pages/man2/mmap.2.html}{memory-map} actor memory as a virtual file.
   The operating system will call into our library when the actor code reads from or writes to the memory-mapped file, allowing us to perform the required bookkeeping.
-  This approach works great on platforms that support \sc{fuse} (Linux and macOS), but it has some administrative disadvantages.
+  This approach works great on platforms that support \textsc{fuse} (Linux and macOS), but it has some administrative disadvantages.
   For example, the replica will need privileged access to be able to mount a virtual filesystem.
 \item
   Use the Linux \href{https://man7.org/linux/man-pages/man1/pmap.1.html}{pmap} utility to extract \href{https://techtalk.intersec.com/2013/07/memory-part-2-understanding-process-memory/#pmap-detailed-mapping}{memory access statistics} right after the message execution.
@@ -174,10 +174,10 @@ \section{detecting-memory-writes}{Detecting memory writes}
 \subsection{signal-handler}{The signal handler}
-The \href{https://man7.org/linux/man-pages/man2/mprotect.2.html}{memory protection \sc{api}} allows us to set read and write permissions on page ranges.
+The \href{https://man7.org/linux/man-pages/man2/mprotect.2.html}{memory protection \textsc{api}} allows us to set read and write permissions on page ranges.
 If the process violates the permissions by reading a read-protected region or writing to a write-protected region, the operating system sends a signal (usually \code{SIGSEGV}) to the thread that triggered the violation.
-The \href{https://man7.org/linux/man-pages/man7/signal.7.html}{signal handling \sc{api}} allows us to intercept these signals and inspect the address that caused them.
-We can build a robust and efficient memory access detection mechanism by combining these \sc{api}s.
+The \href{https://man7.org/linux/man-pages/man7/signal.7.html}{signal handling \textsc{api}} allows us to intercept these signals and inspect the address that caused them.
+We can build a robust and efficient memory access detection mechanism by combining these \textsc{api}s.
 \begin{figure}
 \marginnote{mn-mem-access-pseudocode}{Pseudocode of the memory access detection state machine.}
@@ -308,8 +308,8 @@ \section{upgrades}{Surviving upgrades}
   Such memories open up attractive optimization opportunities, but we did not have enough pressure to implement them.
 } storage.
-Since the IC needed to support upgrades before the multi-memory proposal was implemented, the runtime emulates one extra memory via the \href{https://smartcontracts.org/docs/interface-spec/index.html#system-api-stable-memory}{stable memory \sc{api}}.
-This \sc{api} intentionally mimics WebAssembly \href{https://webassembly.github.io/bulk-memory-operations/core/exec/instructions.html#memory-instructions}{memory instructions} to facilitate future migration to multi-memory modules.
+Since the IC needed to support upgrades before the multi-memory proposal was implemented, the runtime emulates one extra memory via the \href{https://smartcontracts.org/docs/interface-spec/index.html#system-api-stable-memory}{stable memory \textsc{api}}.
+This \textsc{api} intentionally mimics WebAssembly \href{https://webassembly.github.io/bulk-memory-operations/core/exec/instructions.html#memory-instructions}{memory instructions} to facilitate future migration to multi-memory modules.
 Internally, the main memory and the stable memory share the \href{#snapshots-deltas}{representation}.
 Currently, most actors serialize their entire state to stable memory in the pre-upgrade hook and read it back in the post-upgrade hook.
@@ -318,7 +318,7 @@ \section{upgrades}{Surviving upgrades}
 \section{conclusion}{Conclusion}
 We examined the classical approach to orthogonal persistence and its \href{#actors}{synergy} with the actor model.
-We learned about the \href{#snapshots-deltas}{page map}, the data structure that IC replicas use to store multiple versions of an actor's memory efficiently, and the \href{#signal-handler}{\sc{sigsegv}-based memory access detection system}.
+We learned about the \href{#snapshots-deltas}{page map}, the data structure that IC replicas use to store multiple versions of an actor's memory efficiently, and the \href{#signal-handler}{\textsc{sigsegv}-based memory access detection system}.
 Lastly, we saw that orthogonal persistence is not the final solution to state persistence and why we need better tools to handle \href{#upgrades}{program upgrades}.
 \section{references}{References}
diff --git a/posts/08-ic-xnet.tex b/posts/08-ic-xnet.tex
index 46840e8..d7f79e0 100644
--- a/posts/08-ic-xnet.tex
+++ b/posts/08-ic-xnet.tex
@@ -116,8 +116,8 @@ \section{garbage-collection}{Garbage collection}
 \item
   The full range of message indices in the forward stream \math{X → Y}.
 \item
-  The signals for the \emph{reverse} stream (\math{Y → X}): for each message index in the reverse stream, \math{Y} tells whether \math{X} can garbage collect the message (an \sc{ack} signal) or should reroute the message (a \sc{reject} signal).
-  A \sc{reject} signal indicates that the destination canister moved, so \math{X} should route the message into another stream.
+  The signals for the \emph{reverse} stream (\math{Y → X}): for each message index in the reverse stream, \math{Y} tells whether \math{X} can garbage collect the message (an \textsc{ack} signal) or should reroute the message (a \textsc{reject} signal).
+  A \textsc{reject} signal indicates that the destination canister moved, so \math{X} should route the message into another stream.
 \end{itemize}
 Signals solve the issue of collecting obsolete messages, but they introduce another problem: now we also need to garbage-collect signals!
diff --git a/posts/09-fungible-tokens-101.tex b/posts/09-fungible-tokens-101.tex
index 89aa1ac..3831ce0 100644
--- a/posts/09-fungible-tokens-101.tex
+++ b/posts/09-fungible-tokens-101.tex
@@ -198,7 +198,7 @@ \subsection{approvals}{Approvals}
 Allen paid for Alex, and Alex transferred some of Geneviève's tokens to Allen in return.
 This arrangement resulted in two updates to the ledger.
 The first update is the new transaction in the log.
-Note that we need a new column in the table, \sc{on behalf of}, to indicate that Alex initiated the transaction, but Geneviève is the effective payer.
+Note that we need a new column in the table, \textsc{on behalf of}, to indicate that Alex initiated the transaction, but Geneviève is the effective payer.
 \begin{tabular}{l l l r}
 From & On behalf of & To & Amount \\
diff --git a/posts/10-payment-flows.tex b/posts/10-payment-flows.tex
index 328879b..f570e19 100644
--- a/posts/10-payment-flows.tex
+++ b/posts/10-payment-flows.tex
@@ -18,10 +18,10 @@ \section{introduction}{Introduction}
 \section{prerequisites}{Prerequisites}
 \subsection{the-payment-scenario}{The payment scenario}
-Abstract protocols can be dull and hard to comprehend, so let us model a specific payment scenario: me buying a new laptop online and paying for it in \sc{wxdr} (wrapped \href{https://en.wikipedia.org/wiki/Special_drawing_rights}{\sc{sdr}}) tokens locked in a ledger hosted on the Internet Computer.
+Abstract protocols can be dull and hard to comprehend, so let us model a specific payment scenario: me buying a new laptop online and paying for it in \textsc{wxdr} (wrapped \href{https://en.wikipedia.org/wiki/Special_drawing_rights}{\textsc{sdr}}) tokens locked in a ledger hosted on the Internet Computer.
 I open the website of the hardware vendor I trust, select the configuration (the memory capacity, the number of cores, etc.) that suits my needs, fill in the shipment details, and go to the payment page.
-I choose an option to pay in \sc{wxdr}.
+I choose an option to pay in \textsc{wxdr}.
 In the rest of the article, we will fantasize about what the payment page can look like and how it can interact with the shop.
@@ -44,7 +44,7 @@ \subsection{payment-phases}{Payment phases}
 \item
   \emph{The negotiation phase}.
   After I place my order and fill in the shipment details, the shop creates a unique order identifier, \emph{Invoice ID}.
-  The \emph{web page} displays the payment details (e.g., as a \sc{qr} code of the request I need to sign) and instructions on how to proceed with the order.
+  The \emph{web page} displays the payment details (e.g., as a \textsc{qr} code of the request I need to sign) and instructions on how to proceed with the order.
 \item
   \emph{The payment phase}.
   I use my \emph{wallet} to execute the transaction as instructed on the \emph{web page}.
@@ -112,7 +112,7 @@ \section{invoice-account}{Invoice account}
 The ledger implementation is straightforward: the subaccounts feature is the only requirement for the flow.
 \end{itemize}
-What happens if I transfer my \sc{wxdr}s but never click the ``Done'' button?
+What happens if I transfer my \textsc{wxdr}s but never click the ``Done'' button?
 Or what if my browser loses network connection right before it sends the shop notification?
 The shop will not receive any notifications, likely never making progress with my order.
 One strategy that the shop could use to improve the user experience in such cases is to monitor balances for unpaid invoices and complete transactions automatically if the notification does not arrive in a reasonable amount of time.
diff --git a/posts/12-rust-error-handling.tex b/posts/12-rust-error-handling.tex
index 41048e6..2987a8b 100644
--- a/posts/12-rust-error-handling.tex
+++ b/posts/12-rust-error-handling.tex
@@ -86,8 +86,8 @@ \subsection{prefer-specific-enums}{Prefer specific enums}
   However, I often use the \code{anyhow} approach to simplify structuring errors in command-line tools and daemons.
 } in the long run: they facilitate \emph{propagating} errors (often with little context about the operation that caused the error), not \emph{handling} errors.
-When it comes to interface clarity and simplicity, nothing beats \href{https://en.wikipedia.org/wiki/Algebraic_data_type}{algebraic data types} (\sc{adt}s).
-Let us use the power of \sc{adt}s to fix the \code{frobnicate} function interface.
+When it comes to interface clarity and simplicity, nothing beats \href{https://en.wikipedia.org/wiki/Algebraic_data_type}{algebraic data types} (\textsc{adt}s).
+Let us use the power of \textsc{adt}s to fix the \code{frobnicate} function interface.
 \begin{figure}
 \marginnote{mn-adt-frobnicate}{
diff --git a/posts/13-icp-ledger.tex b/posts/13-icp-ledger.tex
index 8ba952a..92511b9 100644
--- a/posts/13-icp-ledger.tex
+++ b/posts/13-icp-ledger.tex
@@ -69,7 +69,7 @@ \section{account-id}{Account identifiers}
 \begin{itemize}
 \item
   Account identifiers are yet another concept that people need to understand.
-  The \sc{ic} is already heavy on new ideas and terminology; unnecessary complication does not help adoption.
+  The \textsc{ic} is already heavy on new ideas and terminology; unnecessary complication does not help adoption.
   For example, a few confused developers tried to pass principal bytes as an account identifier.
 \item
   The account identifier is a one-way function of the principal and the subaccount.
diff --git a/posts/14-stable-structures.tex b/posts/14-stable-structures.tex
index 6c4b9dc..f29a485 100644
--- a/posts/14-stable-structures.tex
+++ b/posts/14-stable-structures.tex
@@ -32,7 +32,7 @@ \section{introduction}{Introduction}
 \section{design-principles}{Design principles}
 \epigraph{
   The point is that you must decide, in advance, what the coding priorities and quality bars will be; otherwise, the team will have to waste time rewriting misconceived or substandard code.
-}{Steve Maguire, ``Debugging the Development Process'', \sc{the groundwork}, p. 19}
+}{Steve Maguire, ``Debugging the Development Process'', \textsc{the groundwork}, p. 19}
 Software designs reflect their creators' values.
 The following principles shaped the \code{stable-structures} library design.
@@ -75,7 +75,7 @@ \section{abstractions}{Abstractions}
 \epigraph{
   In solving a problem with or without a computer it is necessary to choose an abstraction of reality, i.e., to define a set of data that is to represent the real situation.
   The choice must be guided by the problem to be solved.
-}{Niklaus Wirth, ``Algorithms + Data Structures = Programs'', \sc{fundamental data structures}, p. 1}
+}{Niklaus Wirth, ``Algorithms + Data Structures = Programs'', \textsc{fundamental data structures}, p. 1}
 \subsection{memory}{Memory}
@@ -203,7 +203,7 @@ \subsection{storable-types}{Storable types}
 \section{data-structures}{Data structures}
 \epigraph{
   One has an intuitive feeling that data precede algorithms: you must have some objects before you can perform operations on them.
-}{Niklaus Wirth, ``Algorithms + Data Structures = Programs'', \sc{preface}, p. xiii}
+}{Niklaus Wirth, ``Algorithms + Data Structures = Programs'', \textsc{preface}, p. xiii}
 The heart of the \code{stable-structures} library is a collection of data structures, each spanning one or more \href{#memory}{memories}.
@@ -380,7 +380,7 @@ \subsection{stable-log}{Stable log} \subsection{stable-btree}{Stable B-tree} \epigraph{ \emph{Deletion} of items from a B-tree is fairly straightforward in principle, but it is complicated in the details. -}{Niklaus Wirth, ``Algorithms + Data Structures = Programs'', \sc{dynamic information structures}, p. 250} +}{Niklaus Wirth, ``Algorithms + Data Structures = Programs'', \textsc{dynamic information structures}, p. 250} The \href{https://docs.rs/ic-stable-structures/0.4.0/ic_stable_structures/btreemap/struct.BTreeMap.html}{\code{BTreeMap}} stable structure is an associative container that can hold any \href{#storable-types}{bounded storable type}. The map must know the sizes of the keys and values because it allocates nodes from a pool of fixed-size tree nodes\sidenote{sn-}{ @@ -481,7 +481,7 @@ \section{constructing-ss}{Constructing stable structures} \begin{itemize} \item\code{T::new} allocates a new copy of \code{T} in the given \href{#memory}{memory}, potentially overwriting previous memory content. \item\code{T::load} recovers a previously constructed \code{T} from the given \href{#memory}{memory}. - \item\code{T::init} is a \href{https://en.wikipedia.org/wiki/DWIM}{\sc{dwim}} constructor acting as \code{T::new} if the given \href{#memory}{memory} is empty and as \code{T::load} otherwise. + \item\code{T::init} is a \href{https://en.wikipedia.org/wiki/DWIM}{\textsc{dwim}} constructor acting as \code{T::new} if the given \href{#memory}{memory} is empty and as \code{T::load} otherwise. \end{itemize} In practice, most canisters need only \code{T::init}. @@ -570,7 +570,7 @@ \section{tying-together}{Tying it all together} The next step is to decide on the data serialization format. I use \href{https://cbor.io/}{Concise Binary Object Representation} in the example because this encoding served me well in production. 
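The `T::new`/`T::load`/`T::init` trio from the posts/14 hunk above can be sketched as follows. The `Memory` and `Cell` types and the magic-header check are illustrative stand-ins of my own, not the real `ic-stable-structures` API.

```rust
// A sketch of the "do what I mean" `init` constructor pattern.
#[derive(Default)]
struct Memory {
    bytes: Vec<u8>,
}

impl Memory {
    fn is_empty(&self) -> bool {
        self.bytes.is_empty()
    }
}

struct Cell {
    memory: Memory,
}

impl Cell {
    /// Allocates a fresh structure, overwriting previous memory content.
    fn new(mut memory: Memory) -> Self {
        memory.bytes = b"CEL\x01".to_vec(); // write a magic header
        Cell { memory }
    }

    /// Recovers a previously constructed structure from the memory.
    fn load(memory: Memory) -> Self {
        assert_eq!(&memory.bytes[..4], &b"CEL\x01"[..], "corrupted memory");
        Cell { memory }
    }

    /// DWIM constructor: acts as `new` on empty memory, as `load` otherwise.
    fn init(memory: Memory) -> Self {
        if memory.is_empty() {
            Cell::new(memory)
        } else {
            Cell::load(memory)
        }
    }
}

fn main() {
    let first = Cell::init(Memory::default()); // empty memory: behaves as `new`
    let second = Cell::init(first.memory); // non-empty memory: behaves as `load`
    assert_eq!(&second.memory.bytes[..4], &b"CEL\x01"[..]);
}
```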
-Instead of implementing the \href{#storable-trait}{\code{Storable}} trait for \code{Metadata} and \code{Tx}, I define a generic wrapper type \code{Cbor} that I use for all types I want to encode as \sc{cbor} and implement \href{#storable-trait}{\code{Storable}} only for the wrapper. +Instead of implementing the \href{#storable-trait}{\code{Storable}} trait for \code{Metadata} and \code{Tx}, I define a generic wrapper type \code{Cbor} that I use for all types I want to encode as \textsc{cbor} and implement \href{#storable-trait}{\code{Storable}} only for the wrapper. I also implement \href{https://doc.rust-lang.org/std/ops/trait.Deref.html}{\code{std::ops::Deref}} to improve the ergonomics of the wrapper type. \begin{code}[rust] diff --git a/posts/16-building-a-second-brain.tex b/posts/16-building-a-second-brain.tex index eedee4b..4f9adcf 100644 --- a/posts/16-building-a-second-brain.tex +++ b/posts/16-building-a-second-brain.tex @@ -11,11 +11,11 @@ \section{introduction}{Introduction} This article summarizes the \href{https://www.buildingasecondbrain.com/}{Building a Second Brain} book by Tiago Forte. -The book describes the advantages of \href{https://en.wikipedia.org/wiki/Personal_knowledge_management}{personal knowledge management} (\sc{pkm}) systems and offers many tips on using these systems efficiently. +The book describes the advantages of \href{https://en.wikipedia.org/wiki/Personal_knowledge_management}{personal knowledge management} (\textsc{pkm}) systems and offers many tips on using these systems efficiently. 
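The `Cbor` wrapper pattern described in the posts/14 hunk above might look roughly like this. Only the `Deref` ergonomics are shown; the actual CBOR serialization (and the `Storable` implementation delegating to it) is elided, and `Metadata` with its field is a made-up example.

```rust
use std::ops::Deref;

// `Cbor<T>` marks values that should be CBOR-encoded. A real implementation
// would implement the storage trait once for this wrapper, delegating to a
// CBOR library; that part is omitted here.
struct Cbor<T>(pub T);

impl<T> Deref for Cbor<T> {
    type Target = T;

    // Deref lets callers use a `Cbor<T>` wherever a `&T` is needed,
    // e.g. `meta.decimals` instead of `meta.0.decimals`.
    fn deref(&self) -> &T {
        &self.0
    }
}

struct Metadata {
    decimals: u8,
}

fn main() {
    let meta = Cbor(Metadata { decimals: 8 });
    // Thanks to Deref, field access goes through the wrapper transparently.
    assert_eq!(meta.decimals, 8);
}
```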
\section{code}{CODE} -Tiago organized the book around the four activities required for maintaining a \sc{pkm} system that he describes using the CODE acronym, which stands for ``Capture, Organize, Distill, Express.'' +Tiago organized the book around the four activities required for maintaining a \textsc{pkm} system that he describes using the CODE acronym, which stands for ``Capture, Organize, Distill, Express.'' Expression is the most crucial part of the process; all other steps exist to facilitate it. \begin{figure}[grayscale-diagram] @@ -24,7 +24,7 @@ \section{code}{CODE} \subsection{capture}{Capture} -Capturing is the process of adding new notes to your \sc{pkm}. +Capturing is the process of adding new notes to your \textsc{pkm}. Tiago's primary recommendation is not to capture too much: \begin{itemize} @@ -40,7 +40,7 @@ \subsection{capture}{Capture} \subsection{organize}{Organize} -Tiago proposes an outcome-oriented organization system that he abbreviates as \href{https://fortelabs.com/blog/para/}{\sc{para}}. +Tiago proposes an outcome-oriented organization system that he abbreviates as \href{https://fortelabs.com/blog/para/}{\textsc{para}}. The crux of the method is classifying your notes into four top-level categories: \begin{enumerate} \item @@ -55,7 +55,7 @@ \subsection{organize}{Organize} \emph{Archives} contain inactive items from other categories. \end{enumerate} -One helpful analogy for the \sc{para} system the book mentions is cooking in a kitchen: +One helpful analogy for the \textsc{para} system the book mentions is cooking in a kitchen: \begin{itemize} \item Archives are like the freezer. These items wait in cold storage until you need them. \item Resources are like the pantry. These items are available for use in any meal you make but tucked away in the meantime. @@ -100,7 +100,7 @@ \subsection{express}{Express} The purpose of the previous three activities is to help us stay focused when we enter the creative mode. 
Tiago argues that most projects consist of smaller units or increments that he calls ``\href{https://fortelabs.com/blog/just-in-time-pm-4-intermediate-packets/}{intermediate packets}.'' -Use your \sc{pkm} to track these knowledge pieces so you can find them quickly when you work on a project where they could be helpful. +Use your \textsc{pkm} to track these knowledge pieces so you can find them quickly when you work on a project where they could be helpful. Strive to split your project into chunks and deliver them separately, receiving feedback as soon as possible. According to Tiago, the creative process usually goes through two stages: \href{https://fortelabs.com/blog/divergence-and-convergence-the-two-fundamental-stages-of-the-creative-process/}{\emph{divergence} and \emph{convergence}}. @@ -126,7 +126,7 @@ \subsection{express}{Express} \section{habits}{PKM habits} -Tiago compares maintaining your \sc{pkm} to \href{https://en.wikipedia.org/wiki/Mise_en_place}{mise en place}, a set of habits that cooks use to keep their workplace clean and organized. +Tiago compares maintaining your \textsc{pkm} to \href{https://en.wikipedia.org/wiki/Mise_en_place}{mise en place}, a set of habits that cooks use to keep their workplace clean and organized. He mentions a few helpful routines for managing a second brain: \begin{itemize} \item diff --git a/posts/19-eventlog.tex b/posts/19-eventlog.tex index 6d13ee1..bc00344 100644 --- a/posts/19-eventlog.tex +++ b/posts/19-eventlog.tex @@ -21,7 +21,7 @@ \section{motivation}{Motivation} \begin{itemize} \item - \href{https://en.wikipedia.org/wiki/Unspent_transaction_output}{Unspent Transaction Outputs} (\sc{utxo}s) the minter owns on the Bitcoin network, indexed and sliced in various ways (by account, state, etc.). + \href{https://en.wikipedia.org/wiki/Unspent_transaction_output}{Unspent Transaction Outputs} (\textsc{utxo}s) the minter owns on the Bitcoin network, indexed and sliced in various ways (by account, state, etc.). 
\item ckBTC to Bitcoin conversion requests, indexed by state and the arrival time. \item @@ -136,16 +136,16 @@ \subsection{what-is-an-event}{What is an event?} \item The user calls the \code{update_balance} endpoint on the minter. \item - The minter fetches the list of its \sc{utxo}s matching the user account from the \href{https://github.com/dfinity/bitcoin-canister}{Bitcoin canister} and checks whether there are new items in the list. + The minter fetches the list of its \textsc{utxo}s matching the user account from the \href{https://github.com/dfinity/bitcoin-canister}{Bitcoin canister} and checks whether there are new items in the list. \item - The minter mints ckBTC tokens on the ledger smart contract for each new \sc{utxo} and reports the results to the user. + The minter mints ckBTC tokens on the ledger smart contract for each new \textsc{utxo} and reports the results to the user. \end{enumerate} That's a lot of interactions! -Luckily, the only significant outcome of these actions is the minter acquiring a new \sc{utxo}. +Luckily, the only significant outcome of these actions is the minter acquiring a new \textsc{utxo}. That's our event type: \code{minted(utxo, account)}. -If any of the intermediate steps fails or there are no new \sc{utxo}s in the list, the original request doesn't affect the minter state. +If any of the intermediate steps fails or there are no new \textsc{utxo}s in the list, the original request doesn't affect the minter state. Thus a malicious user cannot fill the minter's memory with unproductive events. Creating an event requires sending funds on the Bitcoin network, and that's a slow and expensive operation. 
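The `minted(utxo, account)` event from the posts/19 hunks above could be modeled as a Rust enum whose replay reconstructs the minter state. The field types below are simplified stand-ins of my own, not the minter's real definitions.

```rust
// A sketch of the event-log idea: the only durable outcome of the
// `update_balance` flow is a `Minted` event.
#[derive(Debug, Clone, PartialEq)]
struct Utxo {
    txid: [u8; 32],
    vout: u32,
    value: u64, // satoshi
}

#[derive(Debug, Clone, PartialEq)]
enum Event {
    /// The minter discovered a new UTXO and minted ckBTC for `account`.
    Minted { utxo: Utxo, account: String },
}

/// Replaying the log reconstructs the state: here, the set of owned UTXOs.
fn replay(events: &[Event]) -> Vec<Utxo> {
    let mut owned = Vec::new();
    for event in events {
        match event {
            Event::Minted { utxo, .. } => owned.push(utxo.clone()),
        }
    }
    owned
}

fn main() {
    let utxo = Utxo { txid: [0xab; 32], vout: 0, value: 100_000 };
    let log = vec![Event::Minted { utxo: utxo.clone(), account: "alice".into() }];
    assert_eq!(replay(&log), vec![utxo]);
}
```

A failed `update_balance` call appends nothing, which is exactly why unproductive requests cannot grow the log.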
diff --git a/posts/20-candid-for-engineers.tex b/posts/20-candid-for-engineers.tex index 31b8909..a40f029 100644 --- a/posts/20-candid-for-engineers.tex +++ b/posts/20-candid-for-engineers.tex @@ -426,7 +426,7 @@ \subsection{encoding-an-empty-tuple}{Example: encoding an empty tuple} \begin{itemize} \item All Candid messages begin with the \code{DIDL} byte string. - Most likely, \code{DIDL} stands for ``\sc{dfinity} Interface Definition Language''. + Most likely, \code{DIDL} stands for ``\textsc{dfinity} Interface Definition Language''. \item The value section of this message is empty because the tuple size is zero. \end{itemize} diff --git a/posts/22-flat-in-order-trees.tex b/posts/22-flat-in-order-trees.tex index 398b576..1346aa2 100644 --- a/posts/22-flat-in-order-trees.tex +++ b/posts/22-flat-in-order-trees.tex @@ -87,7 +87,7 @@ \section{sec-addressing}{Addressing tree nodes} We'll first examine addressing nodes within a flat in-order perfect binary tree and then extend the approach to left-perfect binary trees. -\sc{theorem}: in a flat in-order perfect binary tree of depth \math{D}, node's index \math{i} has the following structure: +\textsc{theorem}: in a flat in-order perfect binary tree of depth \math{D}, a node's index \math{i} has the following structure: \begin{itemize} \item The longest run of least-significant set bits in \math{i}'s binary representation indicates the depth of subtrees rooted at \math{i}. @@ -104,7 +104,7 @@ \section{sec-addressing}{Addressing tree nodes} \includegraphics{/images/22-addressing-scheme.svg} \end{figure} -\sc{proof}: by induction on the tree depth \math{D}. +\textsc{proof}: by induction on the tree depth \math{D}. Base case: the theorem is vacuously true for trees of depth one (a single node with index zero).
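The theorem in the posts/22 hunk above maps directly onto a bit trick: `u64::trailing_ones` counts exactly the longest run of least-significant set bits. The helper name and the depth convention (a single node has depth one, matching the base case) are my own framing of the statement.

```rust
// In a flat in-order perfect binary tree, a run of k least-significant set
// bits in a node's index means the node roots a subtree of depth k + 1.
fn subtree_depth(i: u64) -> u32 {
    i.trailing_ones() + 1
}

fn main() {
    // Depth-3 tree laid out in order: indices 0..=6, root at 3 (0b011).
    assert_eq!(subtree_depth(3), 3); // root of the whole tree
    assert_eq!(subtree_depth(1), 2); // 0b001: root of the left depth-2 subtree
    assert_eq!(subtree_depth(5), 2); // 0b101: root of the right depth-2 subtree
    for leaf in [0u64, 2, 4, 6] {
        assert_eq!(subtree_depth(leaf), 1); // even indices are leaves
    }
}
```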
The case of a tree of depth two is more instructive: diff --git a/posts/24-ocr.tex b/posts/24-ocr.tex index 60671c4..65bc0a1 100644 --- a/posts/24-ocr.tex +++ b/posts/24-ocr.tex @@ -10,12 +10,12 @@ \begin{document} \section{abstract}{Abstract} -This article is a high-level overview of the Off-Chain Reporting protocol (\sc{ocr}) powering most of \href{https://chain.link/}{Chainlink} products. +This article is a high-level overview of the Off-Chain Reporting protocol (\textsc{ocr}) powering most of \href{https://chain.link/}{Chainlink} products. The protocol allows a group of \math{n} nodes called \emph{oracles}, up to \math{f} of which could be \href{https://en.wikipedia.org/wiki/Byzantine_fault}{byzantine} (\math{f < n⁄3}), to agree on a data point and record it on a blockchain supporting smart contracts (e.g., Ethereum). \section{components}{Protocol components} -All \sc{ocr} deployments have two parts with different execution models: the on-chain part, implemented as a smart contract called the \emph{aggregator}, and the off-chain part, implemented as a peer-to-peer network of oracles. +All \textsc{ocr} deployments have two parts with different execution models: the on-chain part, implemented as a smart contract called the \emph{aggregator}, and the off-chain part, implemented as a peer-to-peer network of oracles. The off-chain communication protocol, in turn, consists of three sub-protocols layered on top of one another: the \href{#pacemaker}{pacemaker}, the \href{#report-generation}{report generation}, and the \href{#transmission}{transmission} protocols. \subsection{aggregator-contract}{The aggregator contract} @@ -31,7 +31,7 @@ \subsection{aggregator-contract}{The aggregator contract} \subsection{pacemaker}{Pacemaker} -The \emph{pacemaker} algorithm of the \sc{ocr} protocol periodically assigns a node to be a \emph{leader} coordinating the rest of the protocol functions. 
+The \emph{pacemaker} algorithm of the \textsc{ocr} protocol periodically assigns a node to be a \emph{leader} coordinating the rest of the protocol functions. The period between two consecutive leader assignments is called an \emph{epoch}. Within each epoch, the leader initiates \emph{rounds} of the \href{#report-generation}{report generation} algorithm. The tuple \math{(e, r)}, where \math{e} is the epoch number and \math{r} is the round number, serves as a logical clock for the protocol. @@ -56,7 +56,7 @@ \subsection{pacemaker}{Pacemaker} \subsection{report-generation}{Report generation} The report generation algorithm produces a data point for the \href{#aggregator-contract}{aggregator contract}. -For example, if the aggregator contract records an asset price in \sc{usd}, the algorithm produces a price that faulty oracles can't manipulate. +For example, if the aggregator contract records an asset price in \textsc{usd}, the algorithm produces a price that faulty oracles can't manipulate. First, the leader initiates a new round by picking a \emph{query} describing the task the followers need to execute and sending it to all the followers. \begin{figure}[grayscale-diagram,medium-size] @@ -111,18 +111,18 @@ \subsection{transmission}{Transmission} \section{iterations}{Protocol version iterations} \begin{itemize} - \item \sc{ocr 1.0} specifically targeted data feeds for \sc{evm} blockchains. - \item \sc{ocr 2.0} is a major iteration that introduced a plugin architecture, significantly extending the protocol's capabilities. - This version also features reduced gas costs, a more secure \sc{p2p} networking stack based on \sc{tls1.3}, and better performance characteristics (lower latency, higher throughput). - \item \sc{ocr 3.0} made the plugin interface more flexible, introduced the observation history chain, reduced the latency, and improved throughput using the report batching feature. 
+ \item \textsc{ocr 1.0} specifically targeted data feeds for \textsc{evm} blockchains. + \item \textsc{ocr 2.0} is a major iteration that introduced a plugin architecture, significantly extending the protocol's capabilities. + This version also features reduced gas costs, a more secure \textsc{p2p} networking stack based on \textsc{tls1.3}, and better performance characteristics (lower latency, higher throughput). + \item \textsc{ocr 3.0} made the plugin interface more flexible, introduced the observation history chain, reduced the latency, and improved throughput using the report batching feature. \end{itemize} \section{resources}{Resources} \begin{itemize} \item The \href{https://research.chain.link/ocr.pdf}{Chainlink Off-chain Reporting Protocol} whitepaper by Lorenz Breidenbach et al. describes the first protocol version in great detail and contains proofs of its security properties. - \item The \href{https://github.com/smartcontractkit/libocr/blob/6359502d2ff1165c7e6b77b9eff2c5a46a7a4fbb/offchainreporting2plus/ocr3types/plugin.go#L94}{\sc{ocr3} plugin interface} documents the general flow and extention points. - \item In the \href{https://youtu.be/XKiLkmwVaYA}{Looking under the hood of \sc{ocr} 2.0} video, Lorenz Breidenbach explains the protocol evolution and the plugin architecture. - \item In the \href{https://youtu.be/VPVH3QCwc0U}{\sc{ocr3} protocol overview} video, Chrysa Stathakopoulou outlines the protocol structure and mentions features added in its third iteration. + \item The \href{https://github.com/smartcontractkit/libocr/blob/6359502d2ff1165c7e6b77b9eff2c5a46a7a4fbb/offchainreporting2plus/ocr3types/plugin.go#L94}{\textsc{ocr3} plugin interface} documents the general flow and extension points. + \item In the \href{https://youtu.be/XKiLkmwVaYA}{Looking under the hood of \textsc{ocr} 2.0} video, Lorenz Breidenbach explains the protocol evolution and the plugin architecture.
+ \item In the \href{https://youtu.be/VPVH3QCwc0U}{\textsc{ocr3} protocol overview} video, Chrysa Stathakopoulou outlines the protocol structure and mentions features added in its third iteration. \end{itemize} \end{document} diff --git a/posts/25-domain-types.tex b/posts/25-domain-types.tex index 98c1c55..d678d52 100644 --- a/posts/25-domain-types.tex +++ b/posts/25-domain-types.tex @@ -11,7 +11,7 @@ \begin{document} \epigraph{ - \sc{in strong typing we trust} + \textsc{in strong typing we trust} }{Inscription on the \href{https://people.cs.kuleuven.be/~dirk.craeynest/ada-belgium/pictures/ada-strong.html}{Ada coin}} \section{intro}{Introduction} @@ -160,7 +160,7 @@ \subsection{identifiers}{Identifiers} \subsection{amounts}{Amounts} -Another typical use of domain types is representing quantities, such as the amount of money in \sc{usd} on a bank account or the file size in bytes. +Another typical use of domain types is representing quantities, such as the amount of money in \textsc{usd} on a bank account or the file size in bytes. Being able to compare, add, and subtract amounts is essential. Generally, we cannot multiply or divide two compatible amounts and expect to get the amount of the same type back\sidenote{sn-amount-probability}{ @@ -258,15 +258,15 @@ \subsection{Loci}{Loci} \end{figure} Timestamps offer an excellent demonstration of the ``distance type + the origin'' concept. -Go and Rust represent timestamps as a number of \emph{nanoseconds} passed from the \sc{unix} epoch (midnight of January 1st, 1970), -The C programming language defines the \href{https://en.cppreference.com/w/c/chrono/time_t}{\code{time\_t}} type, which is almost always the number of \emph{seconds} from the \sc{unix} epoch. 
+Go and Rust represent timestamps as the number of \emph{nanoseconds} elapsed since the \textsc{unix} epoch (midnight of January 1st, 1970). +The C programming language defines the \href{https://en.cppreference.com/w/c/chrono/time_t}{\code{time\_t}} type, which is almost always the number of \emph{seconds} from the \textsc{unix} epoch. The \href{https://en.wikipedia.org/wiki/Q_(programming_language_from_Kx_Systems)}{q programming language} also uses nanoseconds, but \href{https://code.kx.com/q4m3/2_Basic_Data_Types_Atoms/#253-date-time-types}{chose the \emph{millennium}} (midnight of January 1st, 2000) as its origin point. -Changing the distance type (e.g., seconds to nanoseconds) or the origin (e.g., \sc{unix} epoch to the millennium) calls for a different timestamp type. +Changing the distance type (e.g., seconds to nanoseconds) or the origin (e.g., \textsc{unix} epoch to the millennium) calls for a different timestamp type. The Go standard library employs the locus type design for its \href{https://pkg.go.dev/time}{\code{time}} package, differentiating the time instant (\href{https://pkg.go.dev/time#Time}{\code{time.Time}}) and time duration (\href{https://pkg.go.dev/time#Duration}{\code{time.Duration}}). The Rust standard module \href{https://doc.rust-lang.org/std/time/index.html}{\code{std::time}} is a more evolved example.
+It defines the \href{https://doc.rust-lang.org/std/time/struct.SystemTime.html}{\code{SystemTime}} type for wall clock time (the origin is the \href{https://doc.rust-lang.org/std/time/struct.SystemTime.html#associatedconstant.UNIX_EPOCH}{\textsc{unix} epoch}), \href{https://doc.rust-lang.org/std/time/struct.Instant.html}{\code{Instant}} for monotonic clocks (the origin is ``some unspecified point in the past'', usually the system boot time), and the \href{https://doc.rust-lang.org/std/time/struct.Duration.html}{\code{Duration}} type for distances between two clock measurements. \subsection{quantities}{Quantities} @@ -277,7 +277,7 @@ \subsection{quantities}{Quantities} We can model complex type interactions using methods of \href{https://en.wikipedia.org/wiki/Dimensional_analysis}{dimensional analysis}. If we view \href{#amounts}{amounts} as values with an attached label identifying their unit, then our new types are a natural extension demanding a more structured label equivalent to a vector of base units raised to rational powers. -For example, acceleration would have label \math{(distance \times time\sup{-2})}, and the \sc{usd}/\sc{eur} \href{https://en.wikipedia.org/wiki/Currency_pair}{pair} exchange rate would have label \math{(eur \times usd\sup{-1})}. +For example, acceleration would have label \math{(distance \times time\sup{-2})}, and the \textsc{usd}/\textsc{eur} \href{https://en.wikipedia.org/wiki/Currency_pair}{pair} exchange rate would have label \math{(eur \times usd\sup{-1})}. I call types with such rich label structure \emph{quantities}. Quantities are a proper extension of amounts: addition, subtraction, and scalar multiplication work the same way, leaving the label structure untouched. 
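The `std::time` design described in the posts/25 hunk above can be exercised directly; this snippet uses the real standard-library API, so nothing here is hypothetical except the five-millisecond offset chosen for illustration.

```rust
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};

fn main() {
    // Wall-clock time: a locus whose origin is the UNIX epoch.
    let now: SystemTime = SystemTime::now();
    let since_epoch: Duration = now
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before 1970");
    assert!(since_epoch.as_secs() > 0);

    // Monotonic time: a locus with an unspecified origin, so the only
    // meaningful operation is measuring the distance between two instants.
    let start: Instant = Instant::now();

    // The type system permits instant + duration = instant and
    // instant - instant = duration, but not instant + instant.
    let later: Instant = start + Duration::from_millis(5);
    assert!(later.duration_since(start) >= Duration::from_millis(5));
}
```

Mixing up the two loci is a compile error, which is precisely the point of the locus-type design.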
diff --git a/posts/26-good-names-form-galois-connections.tex b/posts/26-good-names-form-galois-connections.tex index 12ec189..8919570 100644 --- a/posts/26-good-names-form-galois-connections.tex +++ b/posts/26-good-names-form-galois-connections.tex @@ -174,7 +174,7 @@ \section{names-as-galois-connections}{Names as Galois connections} If the expressions are equally expressive, we define the shorter element as smaller. The reader context is a crucial component of the \math{\leq} operator. For example, ``cat'' is more expressive than ``felis catus'' in a regular conversation, but it might be the other way around in a scientific paper. -This paragraph doesn't do a good job at defining expression ordering, but it's \sc{ok}: the whole point of the Galois connection is to make expression comparison precise through its connection with better-defined objects. +This paragraph doesn't do a good job at defining expression ordering, but it's \textsc{ok}: the whole point of the Galois connection is to make expression comparison precise through its connection with better-defined objects. Our second set is the set of concepts \math{C}. If we have concepts \math{c\sub{1}, c\sub{2} \in C}, we say that \math{c\sub{1} \leq c\sub{2}} if \math{c\sub{1}} is at least as specific as \math{c\sub{2}}. @@ -186,7 +186,7 @@ \section{names-as-galois-connections}{Names as Galois connections} The connection becomes \math{decode(e) \leq c \iff e \leq encode(c)}. Even the specialized equivalence looks unintuitive, so let's see how it holds with an example. -Imagine we want to name a variable holding the amount of \sc{usd} in a bank account (that's our concept \math{c}). +Imagine we want to name a variable holding the amount of \textsc{usd} in a bank account (that's our concept \math{c}). Would \math{value} be an appropriate variable name? Let's see what our connection tells us: \math{decode(value)} is a vague concept, less specific than the bank account balance. 
diff --git a/posts/27-extending-https-outcalls.tex b/posts/27-extending-https-outcalls.tex index f42e1a0..c52b522 100644 --- a/posts/27-extending-https-outcalls.tex +++ b/posts/27-extending-https-outcalls.tex @@ -9,7 +9,7 @@ \begin{document} \section* -The more I learn about the \href{https://chain.link/}{Chainlink platform}, the more parallels I see between Chainlink's systems and the \href{https://internetcomputer.org/}{Internet Computer} (\sc{ic}) network I helped design and implement. +The more I learn about the \href{https://chain.link/}{Chainlink platform}, the more parallels I see between Chainlink's systems and the \href{https://internetcomputer.org/}{Internet Computer} (\textsc{ic}) network I helped design and implement. Both projects aim to provide a solid platform for \href{https://blog.chain.link/what-is-trust-minimization/}{trust-minimized computation}, but they take different paths toward that goal. One of the limitations of blockchains is their self-contained nature. @@ -17,20 +17,20 @@ This problem is commonly called the \href{https://chain.link/education-hub/oracle-problem}{oracle problem}. Oracles are services that bring external data, such as price feeds and weather conditions, into a blockchain. -The Chainlink network and \sc{ic} solve the Oracle problem by providing byzantine fault-tolerant protocols. +The Chainlink network and the \textsc{ic} solve the oracle problem by providing byzantine fault-tolerant protocols.
+Chainlink relies on the \href{/posts/24-ocr.html}{Off-chain reporting protocol} (\textsc{ocr}), while the \textsc{ic} provides the \href{https://internetcomputer.org/docs/current/references/https-outcalls-how-it-works}{\textsc{https} outcalls} feature. +\textsc{ocr} is more general, while \textsc{https} outcalls are readily available to all developers and are easier to use. This article explores how to bridge the gap between the two protocols. -We will start with an \href{#https-outcalls-overview}{overview} of the \sc{https} outcalls feature. -Then, we will design an \href{#multi-https-outcalls}{extension} to support cases when \sc{http} responses are not deterministic. +We will start with an \href{#https-outcalls-overview}{overview} of the \textsc{https} outcalls feature. +Then, we will design an \href{#multi-https-outcalls}{extension} to support cases when \textsc{http} responses are not deterministic. Finally, we will see how to use this extension to implement a robust \href{#price-feeds}{price feed} canister. \section{https-outcalls-overview}{HTTPS outcalls in a nutshell} -Smart contracts on the \sc{ic} network can initiate \sc{https} requests to external services. +Smart contracts on the \textsc{ic} network can initiate \textsc{https} requests to external services. -First, the canister sends a message to the management canister that includes the \sc{https} request payload and the \href{https://internetcomputer.org/docs/current/references/https-outcalls-how-it-works#transformation-function}{transform callback function}. +First, the canister sends a message to the management canister that includes the \textsc{https} request payload and the \href{https://internetcomputer.org/docs/current/references/https-outcalls-how-it-works#transformation-function}{transform callback function}. The management canister includes this request in a dedicated queue in the node's replicated state.
A background process called the \emph{adapter}, independent of the replicated state machine, periodically inspects the request queue and executes the requests it contains. @@ -41,7 +41,7 @@ \section{https-outcalls-overview}{HTTPS outcalls in a nutshell} \end{figure} If the original canister specified the transform callback, the adapter invokes the callback on the canister as a query. -The callback accepts the raw \sc{http} response and returns its canonicalized version. +The callback accepts the raw \textsc{http} response and returns its canonicalized version. One universal use case for transform callbacks is stripping the response headers since they can contain information unique to the response, such as timestamps, that can make it impossible to reach a consensus. \begin{figure}[grayscale-diagram,p75] @@ -64,26 +64,26 @@ \section{https-outcalls-overview}{HTTPS outcalls in a nutshell} \section{extending-https-outcalls}{Extending HTTPS outcalls} -It turns out, \sc{https} outcalls implement a special case of the \sc{ocr}'s \href{/posts/24-ocr.html#report-generation}{report generation protocol}, where participants are \sc{ic} nodes. -The \sc{ocr} protocol defines three stages: +It turns out that \textsc{https} outcalls implement a special case of \textsc{ocr}'s \href{/posts/24-ocr.html#report-generation}{report generation protocol}, where participants are \textsc{ic} nodes. +The \textsc{ocr} protocol defines three stages: \begin{enumerate} \item In the \emph{query} stage, the participants receive a task to observe an external data source. - This stage is implicit in \sc{https} outcalls: instead of the protocol leader initiating the query, a canister triggers a query using the system interface. + This stage is implicit in \textsc{https} outcalls: instead of the protocol leader initiating the query, a canister triggers a query using the system interface.
\item In the \emph{observation} stage, each node observes the data source, signs its observation, and sends it over the network. - The \sc{ic} implements this step through the adapter process discussed in the previous section and the consensus algorithm. - The adapter executes an \sc{https} request and filters it through the calling canister's transformation function. + The \textsc{ic} implements this step through the adapter process discussed in the previous section and the consensus algorithm. + The adapter executes an \textsc{https} request and filters it through the calling canister's transformation function. The transformation result is the observation. \item In the \emph{report} stage, the network aggregates participant observations into the final report. - This stage is hard-coded in the \sc{ic} consensus protocol. - If \math{2f + 1} nodes observed the same \sc{http} response, its value becomes the report. + This stage is hard-coded in the \textsc{ic} consensus protocol. + If \math{2f + 1} nodes observed the same \textsc{http} response, its value becomes the report. \end{enumerate} \subsection{multi-https-outcalls}{Multi-HTTP outcalls} -To make \sc{https} outcalls as general as the full report generation protocol, we must make the report stage customizable. -The \sc{ic} consensus algorithm must allow the canister to observe all response versions and distill them into a report. +To make \textsc{https} outcalls as general as the full report generation protocol, we must make the report stage customizable. +The \textsc{ic} consensus algorithm must allow the canister to observe all response versions and distill them into a report. The most straightforward way to achieve this goal is to include all response versions in the block and deliver this batch to the canister. 
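A transform callback such as the header-stripping one discussed in the posts/27 hunks above might be sketched like this. The `HttpResponse` type here is a simplified local stand-in of my own, not the management canister's actual interface.

```rust
// The idea: drop everything non-deterministic (headers such as Date) so that
// all nodes canonicalize their raw responses to the same bytes.
#[derive(Debug, Clone, PartialEq)]
struct HttpResponse {
    status: u16,
    headers: Vec<(String, String)>,
    body: Vec<u8>,
}

fn transform(raw: HttpResponse) -> HttpResponse {
    HttpResponse {
        status: raw.status,
        headers: Vec::new(), // headers often differ per node; strip them all
        body: raw.body,
    }
}

fn main() {
    // Two nodes receive the same body but different Date headers...
    let a = HttpResponse {
        status: 200,
        headers: vec![("date".into(), "Mon, 01 Jan 2024 00:00:00 GMT".into())],
        body: b"{\"price\": 42}".to_vec(),
    };
    let b = HttpResponse {
        headers: vec![("date".into(), "Mon, 01 Jan 2024 00:00:01 GMT".into())],
        ..a.clone()
    };
    // ...yet after the transform both observations agree, so the network can
    // find 2f + 1 equal responses.
    assert_eq!(transform(a), transform(b));
}
```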
\begin{figure}[grayscale-diagram,p75] @@ -100,7 +100,7 @@ \subsection{multi-https-outcalls}{Multi-HTTP outcalls} \begin{figure} \marginnote{mn-multi-http-request}{ - A hypothetical extension to the \sc{ic} management interface allowing the canister to inspect \sc{http} responses from multiple nodes. + A hypothetical extension to the \textsc{ic} management interface allowing the canister to inspect \textsc{http} responses from multiple nodes. } \begin{code}[candid] service ic : { @@ -115,20 +115,20 @@ \subsection{multi-https-outcalls}{Multi-HTTP outcalls} Since there are at most \math{f} Byzantine nodes, and the \code{responses} vector contains at least \math{2f + 1} elements, only the top and bottom \math{f} responses can skew the observation significantly. If there are more than \math{2f + 1} responses, the aggregation function can make a better choice if it knows the value of \math{f}. -Thus, the \href{https://internetcomputer.org/docs/current/references/ic-interface-spec/#system-api}{system \sc{api}} might provide a function to obtain the maximum number of faulty nodes in a subnet: +Thus, the \href{https://internetcomputer.org/docs/current/references/ic-interface-spec/#system-api}{system \textsc{api}} might provide a function to obtain the maximum number of faulty nodes in a subnet: \begin{code} ic0.max_faulty_nodes : () -> (i32); \end{code} -This design significantly restricts the \sc{http} response size. +This design significantly restricts the \textsc{http} response size. Since the vector of responses might contain 34 entries on large subnets, and all the responses must fit in the block size and the two-megabyte response size limits, each response must not exceed 58 kilobytes. Luckily, that's enough for many essential use cases, such as observing a frequently updating price feed or the latest Ethereum block number. 
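The observation that only the top and bottom f responses can skew the result suggests a median-based aggregation. Here is a sketch: `max_faulty_nodes` mirrors the hypothetical `ic0.max_faulty_nodes` call proposed above, and `aggregate_price` is my own illustrative name.

```rust
// With at least 2f + 1 observations of which at most f are faulty, the sorted
// median always lies between two honest values, so no coalition of f nodes
// can move it outside the honest range.
fn aggregate_price(mut observations: Vec<u64>, max_faulty_nodes: usize) -> Option<u64> {
    if observations.len() < 2 * max_faulty_nodes + 1 {
        return None; // not enough honest observations to outvote the faulty ones
    }
    observations.sort_unstable();
    Some(observations[observations.len() / 2])
}

fn main() {
    // f = 1: one faulty node reports an absurd price; the median ignores it.
    assert_eq!(aggregate_price(vec![100, 102, u64::MAX], 1), Some(102));
    // Too few observations: the aggregation refuses to produce a report.
    assert_eq!(aggregate_price(vec![100, 102], 1), None);
}
```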
 \subsection{aggregation-callbacks}{Faulty design: aggregation callbacks}

-My first attempt at extending \sc{https} outcalls relied on allowing the canister to specify an additional callback to aggregate multiple observations.
-\href{https://manu.drijve.rs/}{Manu Drijvers} pointed out a fatal flaw in this design, and I think it's helpful to outline it here because it highlights differences and parallels between \sc{ic}'s and \sc{ocr}'s approach to consensus.
+My first attempt at extending \textsc{https} outcalls relied on allowing the canister to specify an additional callback to aggregate multiple observations.
+\href{https://manu.drijve.rs/}{Manu Drijvers} pointed out a fatal flaw in this design, and I think it's helpful to outline it here because it highlights differences and parallels between \textsc{ic}'s and \textsc{ocr}'s approaches to consensus.

 The faulty protocol extension would kick in after the consensus algorithm \href{#fig-consensus-shares-transformed}{distributes transformed observations} through the peer-to-peer network.
 Instead of checking whether there are \math{2f + 1} equal observations, the consensus would invoke the aggregation callback on the canister to obtain a report.
@@ -156,23 +156,23 @@ \subsection{aggregation-callbacks}{Faulty design: aggregation callbacks}
 Each healthy node in the network of \math{3f + 1} nodes will see responses from \emph{some} other nodes (at least \math{2f + 1}), but the exact subset might differ for each node.
 Different observation subsets will lead to unequal aggregated reports, and the system might fail to reach consensus.

-The \sc{ocr} protocol solves this issue by electing a leader node that picks the subset of observations and distributes it to the followers.
+The \textsc{ocr} protocol solves this issue by electing a leader node that picks the subset of observations and distributes it to the followers.
 Thus, all honest nodes must derive the same report from these observations.
-There is no leader in the \sc{ic} consensus protocol; \href{https://internetcomputer.org/how-it-works/consensus/#block-making}{blockmaker rank} governs node priority in each round.
-\sc{ic} nodes must agree on the observation subset using block proposals, so including all observations in the block is inevitable.
-However, that requirement doesn't mean that \sc{ic} consensus protocol is less efficient: We can view \sc{ocr} leader as the sole zero-rank block maker that sends the ``block'' with observations to all participants.
+There is no leader in the \textsc{ic} consensus protocol; \href{https://internetcomputer.org/how-it-works/consensus/#block-making}{blockmaker rank} governs node priority in each round.
+\textsc{ic} nodes must agree on the observation subset using block proposals, so including all observations in the block is inevitable.
+However, that requirement doesn't mean that the \textsc{ic} consensus protocol is less efficient: we can view the \textsc{ocr} leader as the sole zero-rank block maker that sends the ``block'' with observations to all participants.

 \section{price-feeds}{Use-case: price feeds}

 One of the most popular use cases for oracles is delivering price feeds to power DeFi applications.
-Unsurprisingly, the \href{https://internetcomputer.org/docs/current/developer-docs/defi/exchange-rate-canister}{exchange rate canister} was one of the first users of the \sc{https} outcalls feature.
-This section is a walk through an implementation of a simplistic price feed canister using the \sc{ocr}-inspired extension of the \sc{https} outcalls feature discussed in the previous section.
+Unsurprisingly, the \href{https://internetcomputer.org/docs/current/developer-docs/defi/exchange-rate-canister}{exchange rate canister} was one of the first users of the \textsc{https} outcalls feature.
+This section walks through an implementation of a simplistic price feed canister using the \textsc{ocr}-inspired extension of the \textsc{https} outcalls feature discussed in the previous section.

-The canister queries a hypothetical price feed \sc{api} and returns the observed price and the timestamp.
+The canister queries a hypothetical price feed \textsc{api} and returns the observed price and the timestamp.
 Treat the code as pseudo-code: it has never been tested or compiled.

-First, we import the necessary \sc{api} to make \sc{https} requests.
+First, we import the necessary \textsc{api} to make \textsc{https} requests.
 Imports marked in bold do not exist yet.

 \begin{code}[rust]
@@ -184,7 +184,7 @@ \section{price-feeds}{Use-case: price feeds}
 };
 \end{code}

-Next, we define the data structures specifying the format of the \sc{api} response and the price report the canister produces.
+Next, we define the data structures specifying the format of the \textsc{api} response and the price report the canister produces.
 Since the block space is precious, the \code{ExamplePriceResponse} structure restricts the response contents to the fields we need to construct the report.

 \begin{code}[rust]
@@ -202,7 +202,7 @@ \section{price-feeds}{Use-case: price feeds}
 }
 \end{code}

-We then define the transformation function for the \sc{api} response.
+We then define the transformation function for the \textsc{api} response.
 The function removes the response headers and replaces the response body with its restricted canonical version.

 \begin{code}[rust]
@@ -218,7 +218,7 @@ \section{price-feeds}{Use-case: price feeds}
 \end{code}

-It's time to send a multi-\sc{http} request to the example price feed \sc{api}.
+It's time to send a multi-\textsc{http} request to the example price feed \textsc{api}.
 \begin{code}[rust]
 #[ic_cdk::update]
@@ -235,7 +235,7 @@ \section{price-feeds}{Use-case: price feeds}
     assert!(http_responses.len() >= 2 * f + 1, "not enough responses for consensus");
 \end{code}

-In the next snipped, we parse the \sc{http} responses into a vector of \code{ExamplePriceResponse} objects.
+In the next snippet, we parse the \textsc{http} responses into a vector of \code{ExamplePriceResponse} objects.
 Note that we cannot assume that all responses are parseable since malicious nodes can intentionally reply with garbage.

 \begin{code}[rust]
@@ -277,8 +277,8 @@ \section{price-feeds}{Use-case: price feeds}
 \section{conclusion}{Conclusion}

-\sc{https} outcalls feature allows anyone to deploy an oracle service on the \sc{ic} network with minimal effort.
-Unfortunately, the current implementation is limited to use cases of deterministic \sc{http} responses.
-This article explored how to lift this limitation by taking inspiration from the \sc{ocr} protocol and including all the \sc{http} request versions to the requesting canister.
+The \textsc{https} outcalls feature allows anyone to deploy an oracle service on the \textsc{ic} network with minimal effort.
+Unfortunately, the current implementation is limited to use cases of deterministic \textsc{http} responses.
+This article explored how to lift this limitation by taking inspiration from the \textsc{ocr} protocol and delivering all the \textsc{http} response versions to the requesting canister.

 \end{document}
diff --git a/posts/28-enlightenmentware.tex b/posts/28-enlightenmentware.tex
index a34dde5..2d05209 100644
--- a/posts/28-enlightenmentware.tex
+++ b/posts/28-enlightenmentware.tex
@@ -19,7 +19,7 @@
 I call such software \emph{enlightenmentware}.

 The most common source of enlightenment for programmers is the programming language they use at work or learn as a hobby.
-I experienced many jolts of enlightenment from fiddling with programming languages, from \href{https://en.wikipedia.org/wiki/Microsoft_Macro_Assembler}{\sc{masm}} and \href{https://en.wikipedia.org/wiki/C_(programming_language)}{C} to \href{https://en.wikipedia.org/wiki/Prolog}{Prolog} and \href{https://www.idris-lang.org/}{Idris}.
+I experienced many jolts of enlightenment from fiddling with programming languages, from \href{https://en.wikipedia.org/wiki/Microsoft_Macro_Assembler}{\textsc{masm}} and \href{https://en.wikipedia.org/wiki/C_(programming_language)}{C} to \href{https://en.wikipedia.org/wiki/Prolog}{Prolog} and \href{https://www.idris-lang.org/}{Idris}.
 I won't focus on languages, however, since the effects of language learning on mind expansion are old news\sidenote{sn-norvig}{
 See, for example, Peter Norvig's ``\href{https://norvig.com/21-days.html}{Teach Yourself Programming in Ten Years}''.
 }.
@@ -29,36 +29,36 @@
 \section{unix}{UNIX}

 \epigraph{
- \sc{unix} is user-friendly---it's just choosy about who its friends are.
+ \textsc{unix} is user-friendly---it's just choosy about who its friends are.
 }{
- Anonymous, in the ``Art of \sc{unix} Programming'' by Eric S. Raymond
+ Anonymous, in the ``Art of \textsc{unix} Programming'' by Eric S. Raymond
 }

 I started looking for my first real programming job around 2008, while studying at university in my hometown of Nizhny Novgorod.
-Almost all the open positions required knowledge of mysterious things called \sc{unix} and \emph{sockets}.
-My curriculum didn't offer a course on \sc{unix} or operating systems in general, so I decided to get a textbook and master the topic myself.
+Almost all the open positions required knowledge of mysterious things called \textsc{unix} and \emph{sockets}.
+My curriculum didn't offer a course on \textsc{unix} or operating systems in general, so I decided to get a textbook and master the topic myself.
-``\href{https://www.goodreads.com/book/show/22066650-unix}{The \sc{unix} Operating System}'' by Andrey Robachevsky et al., also known as the \emph{turtle book} in Russia because of its cover, introduced me to the magical world of \sc{unix}-like operating systems.
-\sc{unix} became something I could understand, explore, and programmatically interact with.
+``\href{https://www.goodreads.com/book/show/22066650-unix}{The \textsc{unix} Operating System}'' by Andrey Robachevsky et al., also known as the \emph{turtle book} in Russia because of its cover, introduced me to the magical world of \textsc{unix}-like operating systems.
+\textsc{unix} became something I could understand, explore, and programmatically interact with.
 All pieces of the puzzle---the filesystem interface, the process model with environments and permissions, forking, sockets, and signals---fell into place and revealed a coherent, beautiful picture.

-A search for a working \sc{unix} installation led me to \href{https://en.wikipedia.org/wiki/Mandriva_Linux}{Mandriva Linux}.
-It was like discovering a parallel universe where you don't have to pirate software or spend forty minutes installing an \sc{ide} to compile a \sc{c} program.
+A search for a working \textsc{unix} installation led me to \href{https://en.wikipedia.org/wiki/Mandriva_Linux}{Mandriva Linux}.
+It was like discovering a parallel universe where you don't have to pirate software or spend forty minutes installing an \textsc{ide} to compile a \textsc{c} program.
 Here, people developed software for fun and shared it freely.
 I couldn't fathom why anyone would use Windows\sidenote{sn-windows}{
 I became significantly more tolerant since my early university years.
 Windows (specifically the \href{https://en.wikipedia.org/wiki/Windows_NT}{NT family}) is a great operating system.
- I even have it installed on my gaming \sc{pc} so that I can buy games I never play.
+ I even have it installed on my gaming \textsc{pc} so that I can buy games I never play.
 }.

-From that moment on, \sc{unix} followed me through all stages of my life:
+From that moment on, \textsc{unix} followed me through all stages of my life:
 the toddler phase of keeping up with the cutting-edge \href{https://ubuntu.com}{Ubuntu} releases,
 the rebellious teens of compiling custom kernels for my \href{https://www.thinkwiki.org/wiki/Category:T61p}{Thinkpad T61p} and \href{https://wiki.gentoo.org/wiki/Emerge}{emerging} the \href{https://wiki.gentoo.org/wiki/World_set_(Portage)}{\code{@world}} on \href{https://www.gentoo.org/}{Gentoo},
-the maturity of returning to Ubuntu \sc{lts} and delaying upgrades until the first dot one release,
+the maturity of returning to Ubuntu \textsc{lts} and delaying upgrades until the first dot one release,
 and to the overwhelmed parent stage of becoming a happy macOS user.

-\sc{unix} also became an essential building block in my profession.
-Most of the software I wrote operates in a \sc{unix} environment, and I still occasionally consult my copy of \href{https://www.goodreads.com/book/show/603263.Advanced_Programming_in_the_UNIX_Environment}{Advanced Programming in the \sc{unix} Environment}.
+\textsc{unix} also became an essential building block in my profession.
+Most of the software I wrote operates in a \textsc{unix} environment, and I still occasionally consult my copy of \href{https://www.goodreads.com/book/show/603263.Advanced_Programming_in_the_UNIX_Environment}{Advanced Programming in the \textsc{unix} Environment}.

 \section{git}{Git}
@@ -120,7 +120,7 @@ \section{emacs}{Emacs}
 My university used Pascal for introductory programming classes, so I also used Turbo Pascal for my assignments.
 Later courses introduced C++ and Java, for which we used \href{https://en.wikipedia.org/wiki/Visual_Studio#6.0_(1998)}{Visual Studio 6.0} and \href{https://en.wikipedia.org/wiki/JBuilder}{JBuilder}.
-Although we learned to invoke compilers from the command line, \sc{ide}s dominated my early code-editing experience.
+Although we learned to invoke compilers from the command line, \textsc{ide}s dominated my early code-editing experience.

 At my first programming job, I worked on a remote Solaris workstation over a \href{https://en.wikipedia.org/wiki/Citrix_Systems}{Citrix} connection.
 Almost everyone in our group used \href{https://en.wikipedia.org/wiki/NEdit}{NEdit} to edit the code.
@@ -185,7 +185,7 @@ \section{boost-graph}{Boost.Graph}
 Even though I never had a chance to use the library in practice\sidenote{sn-boost-graph-use}{
 Given that I share Donald Knuth's attitude opening this section, I would probably not use the library even if I had a chance.
 I'd rather write one page of interesting code traversing a graph than two pages of boring adapters required to invoke the algorithm.
-}, its design helped me deepen my understanding of \href{https://en.wikipedia.org/wiki/Standard_Template_Library}{\sc{stl}} design and generic programming in general.
+}, its design helped me deepen my understanding of \href{https://en.wikipedia.org/wiki/Standard_Template_Library}{\textsc{stl}} design and generic programming in general.
 It also helped me understand the motivation for advanced type-level programming features in other programming languages, such as \href{https://wiki.haskell.org/GHC/Type_families}{type families} in Haskell.

 Overall, Boost.Graph is one of the most enlightening pieces of software that I've never used.
@@ -199,7 +199,7 @@ \section{bazel}{Bazel}
 I wrote my first \code{Makefile} around 2009 while working on a research project in computational mathematics for my degree.
 I already used \href{https://en.wikipedia.org/wiki/Make_(software)}{\code{make}} at work, but I didn't need to understand how it worked.
-This time, I had to compile a \sc{fortran} program mixing sources adhering to different language standards: from venerable \sc{fortran} 77 to hip Fortran 2003.
+This time, I had to compile a \textsc{fortran} program mixing sources adhering to different language standards: from venerable \textsc{fortran} 77 to hip Fortran 2003.
 To get a deeper understanding of the tool, I referred to \href{https://www.oreilly.com/library/view/managing-projects-with/0596006101/}{Managing Projects with GNU Make} by Robert Mecklenburg.

 Most books on technology excite me: I become enthusiastic about the subject and want to try it out in practice.
@@ -212,7 +212,7 @@ \section{bazel}{Bazel}
 After my deep dive into \code{make}, I often fiddled with build systems at work:
 I introduced \href{https://cmake.org/}{CMake} to my first C++ project to replace complex and scarily incorrect \code{Makefile} files and
-replaced an inflexible \href{https://ant.apache.org/}{Ant}-based build system in a 500 \sc{kloc} Java project with \href{https://gradle.org/}{Gradle} scripts that everyone on the team could contribute to.
+replaced an inflexible \href{https://ant.apache.org/}{Ant}-based build system in a 500 \textsc{kloc} Java project with \href{https://gradle.org/}{Gradle} scripts that everyone on the team could contribute to.
 But all of the tools I tried, including \href{https://cmake.org/}{CMake}, \href{https://ant.apache.org/}{Ant}, \href{https://maven.apache.org/}{Maven}, \href{https://gradle.org/}{Gradle}, \href{https://www.scons.org/}{SCons}, and \href{https://www.gnu.org/software/automake/manual/html_node/Autotools-Introduction.html}{autotools} left me deeply unsatisfied.
 They were clunky, awkward, and hard to extend and compose.
@@ -230,7 +230,7 @@ \section{bazel}{Bazel}
 A Bazel build file is a program that constructs a slice of the build artifact graph.
 Bazel rules don't \emph{run} the build commands; they \emph{declare} how to transform inputs into outputs, and the Bazel engine figures out the rest.

-My relationship with the tool reached true intimacy when I helped \href{/posts/17-scaling-rust-builds-with-bazel.html}{transition \sc{dfinity}'s build system to Bazel}.
+My relationship with the tool reached true intimacy when I helped \href{/posts/17-scaling-rust-builds-with-bazel.html}{transition \textsc{dfinity}'s build system to Bazel}.
 Despite all the challenges I faced on the way, Bazel is still my favorite build system.
 It's fast, correct, easy to use, and language-agnostic.
@@ -248,7 +248,7 @@ \section{conclusion}{Conclusion}
 All these tools address a deep problem, and a kind of problem that I face every day, such as making programs on my computer cooperate, managing concurrent work streams, or generalizing a piece of code.
 \item They are ``round'': they pack the most volume in the smallest surface area.
- \sc{unix} surface area is tiny, but it unlocks much power.
+ \textsc{unix} surface area is tiny, but it unlocks much power.
 Emacs and Git are all over the place, but their \emph{core} is small, sweet, and easy to appreciate.
 \item They invite and encourage you to explore their internals.