Diffstat (limited to 'reviews/blog-roll.md')
 reviews/blog-roll.md | 315
 1 file changed, 159 insertions(+), 156 deletions(-)
diff --git a/reviews/blog-roll.md b/reviews/blog-roll.md
index 97e2d74..92981c2 100644
--- a/reviews/blog-roll.md
+++ b/reviews/blog-roll.md
@@ -6,31 +6,33 @@ that I would like not to forget.
Eric Normand's musings on programming paradigms and their application,
with a soft spot for functional programming.
-[When in doubt, refactor at the bottom]
-: Quoting Sandi Metz:
+## [When in doubt, refactor at the bottom]
- > Duplication is far cheaper than the wrong abstraction.
+Quoting Sandi Metz:
- The point being that blindly following the letter of the DRY law
- can lead developers to add complexity to extracted functions
- because "it almost does what I want; if I could add just one more
- parameter to it…".
+> Duplication is far cheaper than the wrong abstraction.
- Normand and Metz encourage developers to "mechanically" extract
- small pieces of logic; even if they are not re-usable, bundling
- things together and naming them helps make the potential
- abstractions more visible.
+The point being that blindly following the letter of the DRY law can
+lead developers to add complexity to extracted functions because "it
+almost does what I want; if I could add just one more parameter to
+it…".
-[Programming Paradigms and the Procedural Paradox]
-: A discussion on our tendency to conflate *paradigms* with their
- *features*; for example, when trying to answer "can this language
- express that paradigm?", we often reduce the question to "does
- this language possess those features?".
+Normand and Metz encourage developers to "mechanically" extract small
+pieces of logic; even if they are not re-usable, bundling things
+together and naming them helps make the potential abstractions more
+visible.
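+
+A minimal before/after of what "mechanically" extracting might look
+like; the sketch (names included) is mine, in Rust, not an example
+from Normand or Metz:
+
+    // Before: the pricing rule is an anonymous branch in the middle
+    // of the arithmetic.
+    fn shipping_cost(weight_kg: f64, distance_km: f64) -> f64 {
+        let mut cost = 1.5 + 0.8 * weight_kg;
+        if distance_km > 100.0 {
+            cost += 4.0;
+        }
+        cost
+    }
+
+    // After: the rule is extracted and named, even though nothing
+    // else reuses it (yet); the potential abstraction is now visible.
+    fn shipping_cost_refactored(weight_kg: f64, distance_km: f64) -> f64 {
+        fn long_haul_surcharge(distance_km: f64) -> f64 {
+            if distance_km > 100.0 { 4.0 } else { 0.0 }
+        }
+        1.5 + 0.8 * weight_kg + long_haul_surcharge(distance_km)
+    }
+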
- Normand wonders whether we do this because the procedural
- paradigm's metaphor (a series of steps that each may contain any
- number of sub-tasks) maps so well to its features (sequential
- statements, subroutines) that it trained us to mix those up.
+## [Programming Paradigms and the Procedural Paradox]
+
+A discussion on our tendency to conflate *paradigms* with their
+*features*; for example, when trying to answer "can this language
+express that paradigm?", we often reduce the question to "does this
+language possess those features?".
+
+Normand wonders whether we do this because the procedural paradigm's
+metaphor (a series of steps that each may contain any number of
+sub-tasks) maps so well to its features (sequential statements,
+subroutines) that it trained us to mix those up.
[LispCast]: https://lispcast.com/category/writing/
[When in doubt, refactor at the bottom]: https://lispcast.com/refactor-bottom/
@@ -81,11 +83,12 @@ Some recurring topics I enjoy reading about:
# [Et tu, Cthulhu]
-[A hash table re-hash]
-: A benchmark of hash tables that manages to succinctly explain
- common performance issues and tradeoffs with this data structure,
- to show results across a wide range of implementations, and to
- provide very understandable interepretations for those results.
+## [A hash table re-hash]
+
+A benchmark of hash tables that manages to succinctly explain common
+performance issues and tradeoffs with this data structure, to show
+results across a wide range of implementations, and to provide very
+understandable interpretations for those results.
[Et tu, Cthulhu]: https://hpjansson.org/blag/
[A hash table re-hash]: https://hpjansson.org/blag/2018/07/24/a-hash-table-re-hash/
@@ -97,44 +100,45 @@ The down-to-earth commentary made me feel like the author both
appreciates the thought process that went into the design, and has
enough hindsight to find where that thought process fell short.
-[A Taste of Rust]
-: An overview of some of the language's features. Some comments
- resonated particularly well with me, e.g. on nested functions:
+## [A Taste of Rust]
- > With other languages, I’m never quite sure where to put
- > helper functions. I’m usually wary of factoring code into
- > small, “beautiful” functions because I’m afraid they’ll end
- > up under the couch cushions, or behind the radiator next to
- > my car keys. With Rust, I can build up a kind of organic
- > tree of function definitions, each scoped to the place where
- > they’re actually going to be used, and promote them up the
- > tree as they take on the Platonic form of Reusable Code.
+An overview of some of the language's features. Some comments
+resonated particularly well with me, e.g. on nested functions:
+
+> With other languages, I’m never quite sure where to put helper
+> functions. I’m usually wary of factoring code into small,
+> “beautiful” functions because I’m afraid they’ll end up under the
+> couch cushions, or behind the radiator next to my car keys. With
+> Rust, I can build up a kind of organic tree of function definitions,
+> each scoped to the place where they’re actually going to be used,
+> and promote them up the tree as they take on the Platonic form of
+> Reusable Code.
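+
+A tiny sketch of the style the quote describes; the example is mine,
+not code from the article:
+
+    use std::collections::HashMap;
+
+    fn word_frequencies(text: &str) -> HashMap<String, usize> {
+        // This helper only matters here today, so it lives here; if
+        // another function ever needs it, it can be "promoted up the
+        // tree" to module level.
+        fn normalize(word: &str) -> String {
+            word.trim_matches(|c: char| !c.is_alphanumeric())
+                .to_lowercase()
+        }
+
+        let mut counts = HashMap::new();
+        for word in text.split_whitespace() {
+            let w = normalize(word);
+            if !w.is_empty() {
+                *counts.entry(w).or_insert(0) += 1;
+            }
+        }
+        counts
+    }
+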
[Evanmiller.org]: https://www.evanmiller.org/
[A Taste of Rust]: https://www.evanmiller.org/a-taste-of-rust.html
# [Bartosz Ciechanowski]
-[Alpha Compositing]
-: The good, bad and ugly of how we discretize colors, and
- color-blending. With helpful interactive simulations.
+## [Alpha Compositing]
+
+The good, bad and ugly of how we discretize colors, and
+color-blending. With helpful interactive simulations.
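+
+For reference, the core "over" operator the article builds up to, as a
+Rust sketch of my own (straight, i.e. non-premultiplied, alpha):
+
+    /// Composite `src` over `dst`; both are RGBA with channels in [0.0, 1.0].
+    fn over(src: [f32; 4], dst: [f32; 4]) -> [f32; 4] {
+        let (sa, da) = (src[3], dst[3]);
+        let out_a = sa + da * (1.0 - sa);
+        if out_a == 0.0 {
+            return [0.0; 4];
+        }
+        let mut out = [0.0; 4];
+        for i in 0..3 {
+            // Each color channel is weighted by its own alpha, then
+            // divided back out ("un-premultiplied") at the end.
+            out[i] = (src[i] * sa + dst[i] * da * (1.0 - sa)) / out_a;
+        }
+        out[3] = out_a;
+        out
+    }
+
+Storing channels premultiplied by alpha simplifies each channel to
+`src + dst * (1.0 - sa)` and removes the division entirely.
+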
[Bartosz Ciechanowski]: https://ciechanow.ski/
[Alpha Compositing]: https://ciechanow.ski/alpha-compositing/
# [Red Hat Developer]
-[10 tips for reviewing code you don't like]
-: The article could basically be included as-is in a [nonviolent
- communication] textbook and renamed "application to code reviews".
+## [10 tips for reviewing code you don't like]
- AFAICT the underlying principle to all these tips is: scrub
- judgmental statements out of your responses, and state your
- concerns openly. Nobody should expect you to hold all the
- answers; express your uncertainty, and let the submitter do the
- work of convincing you (e.g. checking for performance regressions,
- splitting patch series).
+The article could basically be included as-is in a [nonviolent
+communication] textbook and renamed "application to code reviews".
+AFAICT the underlying principle to all these tips is: scrub judgmental
+statements out of your responses, and state your concerns openly.
+Nobody should expect you to hold all the answers; express your
+uncertainty, and let the submitter do the work of convincing you
+(e.g. checking for performance regressions, splitting patch series).
[Red Hat Developer]: https://developers.redhat.com/blog/
[10 tips for reviewing code you don't like]: https://developers.redhat.com/blog/2019/07/08/10-tips-for-reviewing-code-you-dont-like/
@@ -174,114 +178,113 @@ Satirical websites fighting [web bloat] with minimalist designs.
# [Joe Duffy's Blog]
-[The Error Model]
-: An in-depth look at what "errors" are in the context of software,
- how some languages choose to deal with them, and what model the
- Midori team implemented.
-
- > Our overall solution was to offer a two-pronged error model. On
- > one hand, you had fail-fast – we called it abandonment – for
- > programming bugs. And on the other hand, you had statically
- > checked exceptions for recoverable errors.
-
- Starts by outlining the "performance metrics" for a good error
- model, then goes over unsatisfactory models:
-
- - **Error codes** clutter a function's signature with an extra
- return value, and the resulting branches degrade performance;
- they do not automatically interrupt execution, thus when bugs
- finally show up, it can be hard to track their origin down.
- - Though they did provide their developers with an escape
- hatch that lets them ignore return values, their `ignore`
- keyword is at least auditable.
-
- - **Unchecked exceptions** make it hard to reason about a
- program's flow. They persist because in the grand scheme of
- things, they stay out of the way When Things Work™.
-
- - **Exceptions in general** tend to come with some performance
- baggage (e.g. symbols for stack traces), and encourage coarse
- error-handling (i.e. throwing a `try` blanket spanning several
- statements instead of focusing on individual calls).
-
- All of these models conflate *recoverable errors* (e.g. invalid
- program inputs) that the application can act on (by telling users
- about their mistakes, assuming a transient environment failure and
- re-trying, or ignoring the error) with *bugs*, i.e. unexpected
- conditions that, when unhandled, create bogus states and
- transitions in the program, and may only have visible impacts
- further down the line.
-
- As these bugs are "unrecoverable", the team chose the
- "**abandonment**" strategy (aka "fail-fast") to deal with them.
-
- > My impression is that, largely because of the continued success
- > of monolithic kernels, the world at large hasn’t yet made the
- > leap to “operating system as a distributed system” insight.
- > Once you do, however, a lot of design principles become
- > apparent.
-
- > As with most distributed systems, our architecture assumed
- > process failure was inevitable.
-
- The micro-kernel architecture, where basic kernel features such as
- "the scheduler, memory manager, filesystem, networking stack, and
- even device drivers" are all run as isolated user-mode processes,
- encourages "wholesale abandonment" as an error-handling stragegy,
- since the failure remains contained and does not bring down the
- whole system.
-
- They implemented **contracts** using dedicated syntax and made
- them part of a function's interface. Delegating contract-checking
- to a library would have buried pre- and post-conditions as regular
- calls inside a function's definition, whereas they wanted them to
- become part of the function's metadata, where they can be analyzed
- by optimizers, IDEs, etc.
-
- > Contracts begin where the type system leaves off.
-
- Contract violations trigger (at worst) program abandonment;
- **type-system** violations plainly prevent the program from
- existing.
-
- Since "90%" of their contracts were either null or range checks,
- they found a way to encode nullability and ranges in the
- type-system, reducing this:
-
- public virtual int Read(char[] buffer, int index, int count)
- requires buffer != null
- requires index >= 0
- requires count >= 0
- requires buffer.Length - index < count {
- ...
- }
-
- To this:
-
- public virtual int Read(char[] buffer) {
- ...
- }
-
- While preserving the same guarantees, checked at compile-time.
-
- **Nullable** types were designated with `T?`; `T` implicitly
- converted to `T?`, but conversion from `T?` to `T` required noisy
- (i.e. auditable) operators which would trigger abandonment when
- needed.
-
- *Recoverable errors* were handled with checked exceptions, which
- were part of a function's signature. Since most bugs were dealt
- with through abandonment, most of their APIs didn't throw.
-
- To make it easier to reason about control flow, callers had to
- explicitly say `try` before calling a function that might throw,
- which did what Rust's deprecated `try!()` did: yield the return
- value on the happy path, else re-throw.
-
- Covers performance concerns, composition with concurrency; muddies
- the waters somewhat with "aborts" which kind of look like
- `longjmp`s to me? Except it runs the code in `catch` blocks it
- finds while crawling back the stack?
+## [The Error Model]
+
+An in-depth look at what "errors" are in the context of software, how
+some languages choose to deal with them, and what model the Midori
+team implemented.
+
+> Our overall solution was to offer a two-pronged error model. On one
+> hand, you had fail-fast – we called it abandonment – for programming
+> bugs. And on the other hand, you had statically checked exceptions
+> for recoverable errors.
+
+Starts by outlining the "performance metrics" for a good error model,
+then goes over unsatisfactory models:
+
+- **Error codes** clutter a function's signature with an extra return
+ value, and the resulting branches degrade performance; they do not
+ automatically interrupt execution, thus when bugs finally show up,
+ it can be hard to track their origin down.
+ - Though they did provide their developers with an escape hatch
+ that lets them ignore return values, their `ignore` keyword is
+ at least auditable.
+
+- **Unchecked exceptions** make it hard to reason about a program's
+ flow. They persist because in the grand scheme of things, they stay
+ out of the way When Things Work™.
+
+- **Exceptions in general** tend to come with some performance baggage
+ (e.g. symbols for stack traces), and encourage coarse error-handling
+ (i.e. throwing a `try` blanket spanning several statements instead
+ of focusing on individual calls).
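+
+To make the error-codes item concrete, a Rust sketch of my own (the
+post's examples are not Rust): a bare status code puts no obligation
+on the caller, so nothing interrupts execution when it is dropped:
+
+    // C-style error code: 0 on success, non-zero on failure.
+    fn parse_port(s: &str, out: &mut u16) -> i32 {
+        match s.parse::<u16>() {
+            Ok(p) => {
+                *out = p;
+                0
+            }
+            Err(_) => 1,
+        }
+    }
+
+    fn main() {
+        let mut port = 0u16;
+        // The status is silently dropped; `port` keeps its stale value
+        // and the failure only surfaces later, somewhere else.
+        parse_port("not-a-number", &mut port);
+        println!("connecting to port {port}");
+    }
+
+Midori's `ignore` keyword made such dropping explicit and auditable;
+Rust's `#[must_use]` plays a similar role by warning unless the drop
+is spelled out.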
+
+All of these models conflate *recoverable errors* (e.g. invalid
+program inputs) that the application can act on (by telling users
+about their mistakes, assuming a transient environment failure and
+re-trying, or ignoring the error) with *bugs*, i.e. unexpected
+conditions that, when unhandled, create bogus states and transitions
+in the program, and may only have visible impacts further down the
+line.
+
+As these bugs are "unrecoverable", the team chose the
+"**abandonment**" strategy (aka "fail-fast") to deal with them.
+
+> My impression is that, largely because of the continued success of
+> monolithic kernels, the world at large hasn’t yet made the leap to
+> “operating system as a distributed system” insight. Once you do,
+> however, a lot of design principles become apparent.
+
+> As with most distributed systems, our architecture assumed process
+> failure was inevitable.
+
+The micro-kernel architecture, where basic kernel features such as
+"the scheduler, memory manager, filesystem, networking stack, and even
+device drivers" are all run as isolated user-mode processes,
+encourages "wholesale abandonment" as an error-handling strategy,
+since the failure remains contained and does not bring down the whole
+system.
+
+They implemented **contracts** using dedicated syntax and made them
+part of a function's interface. Delegating contract-checking to a
+library would have buried pre- and post-conditions as regular calls
+inside a function's definition, whereas they wanted them to become
+part of the function's metadata, where they can be analyzed by
+optimizers, IDEs, etc.
+
+> Contracts begin where the type system leaves off.
+
+Contract violations trigger (at worst) program abandonment;
+**type-system** violations plainly prevent the program from existing.
+
+Since "90%" of their contracts were either null or range checks, they
+found a way to encode nullability and ranges in the type-system,
+reducing this:
+
+ public virtual int Read(char[] buffer, int index, int count)
+ requires buffer != null
+ requires index >= 0
+ requires count >= 0
+ requires buffer.Length - index < count {
+ ...
+ }
+
+To this:
+
+ public virtual int Read(char[] buffer) {
+ ...
+ }
+
+While preserving the same guarantees, checked at compile-time.
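+
+In Rust terms (my transliteration, not Duffy's code), the same trick
+is what slices give you for free: a `&mut [char]` cannot be null and
+carries its own length, so the null and range contracts dissolve into
+the parameter type:
+
+    // The buffer/index/count triple collapses into a single slice.
+    fn read(buffer: &mut [char]) -> usize {
+        // Stub body: fill with blanks; a real reader would pull from
+        // some source.
+        for slot in buffer.iter_mut() {
+            *slot = ' ';
+        }
+        buffer.len()
+    }
+
+    fn caller(data: &mut Vec<char>) {
+        // Any index/count arithmetic moves to the call site, where
+        // slicing is bounds-checked.
+        let written = read(&mut data[4..16]);
+        assert!(written <= 12);
+    }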
+
+**Nullable** types were designated with `T?`; `T` implicitly converted
+to `T?`, but conversion from `T?` to `T` required noisy
+(i.e. auditable) operators which would trigger abandonment when
+needed.
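+
+The Rust analogue (again mine) is `Option<T>`: wrapping is effortless,
+while unwrapping is a visible, greppable operation that fails fast
+when the value is absent:
+
+    fn lookup(id: u32) -> Option<String> {
+        // The `T` to `T?` direction is the effortless one.
+        Some(format!("user-{id}"))
+    }
+
+    fn main() {
+        // Going back from `Option<String>` to `String` is deliberately
+        // noisy, and it panics if the value is missing.
+        let name = lookup(42).expect("lookup returned nothing");
+        println!("{name}");
+    }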
+
+*Recoverable errors* were handled with checked exceptions, which were
+part of a function's signature. Since most bugs were dealt with
+through abandonment, most of their APIs didn't throw.
+
+To make it easier to reason about control flow, callers had to
+explicitly say `try` before calling a function that might throw, which
+behaved much like Rust's deprecated `try!()` macro (today's `?`
+operator): yield the return value on the happy path, otherwise
+propagate the error to the caller.
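+
+A sketch of that callsite marker in today's Rust spelling (my example,
+not from the post):
+
+    use std::fs;
+    use std::io;
+
+    fn first_line(path: &str) -> Result<String, io::Error> {
+        // `?` plays the role of Midori's `try`: it yields the value on
+        // the happy path, otherwise it returns the error to the caller.
+        let contents = fs::read_to_string(path)?;
+        Ok(contents.lines().next().unwrap_or("").to_string())
+    }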
+
+Covers performance concerns and composition with concurrency; muddies
+the waters somewhat with "aborts", which look a bit like `longjmp`s to
+me, except that they seem to run the code in any `catch` blocks they
+find while crawling back up the stack.
[Joe Duffy's Blog]: http://joeduffyblog.com/
[The Error Model]: http://joeduffyblog.com/2016/02/07/the-error-model/