Jamie on Software

This is the blog of Jamie Rumbelow: a software engineer, writer and philosophy graduate who lives in London.


Links, May 2022

May was a fine month with lots of social engagements, less writing than I wanted, but quite a lot of reading. I ran the Edinburgh Marathon, my first marathon ever, in four hours and thirty-three minutes. I wrote a piece on Decentralisation as a trade-off space.

To prepare for a debate with my friend David, I read two books on the history of housing development: All That is Solid and Municipal Dreams. The latter was very good. The former descended into Tory-bashing in the key of Owen Jones, which might be righteous but is also a little tiresome. I also read bits of Order Without Design, which was truly excellent; it’s good to see urban theory that grounds itself in, and has respect for, economics.

I also read the second volume of Alastair Campbell’s diaries, covering 1997–1999 and the first few salvos of a triumphant New Labour. The diary format is excellent, since you get an obviously singular perspective as it unfolds. I hadn’t realised quite how little communication mattered in the civil service prior to Blair; lots of Campbell’s agonies involved getting various government departments to coordinate messaging, routing comms through Number 10. I also had very little idea of how much work went into the Good Friday Agreement, or how tenuous it was. There were many, many chances for it to fall apart: had Paisley or Trimble or Adams woken up on the wrong side of bed on the wrong morning, the whole thing would have been doomed. From inside, government policy seems much more chaotic and stochastic than I had suspected (which might be a reason to be less worried, at least on the margin, about Moloch tendencies).

I listened to In the Shadow of the Moon while falling asleep most nights, a very thorough set of biographies and history of the Gemini missions, up to Apollo 11.

As for other links, and continuing on the theme of housing: a few good papers, worth reading if the subject appeals to you. Anthony Breach’s Capital Cities: How the Planning System Creates Housing Shortages and Drives Wealth Inequality was extremely clear and thorough, UK-specific, and perfect for preparing for an argument with David. The Housing Theory of Everything helps drive home why this matters so much. YIMBY is a moral argument as much as an economic one.

Campbell’s diaries got me on a bit of a New Labour kick, so I watched last year’s excellent series on Blair and Brown and the 13 years of New Labour government. I’ve also been enjoying The Rest is Politics podcast, hosted by Campbell and Rory Stewart.

Dwarkesh Patel wrote a good post on applying the ‘Barbell Strategy’ to everyday life: reframing habit formation and intellectual projects in terms of oscillating between intense focus on one thing and the simplest, lowest-effort thing possible –– which is often nothing at all.

Ken Shirriff is writing some truly excellent, deep work on the technical substrate of the Apollo missions. This is a post on the premodulation processor, the signal combinator and splitter in the command module.

We can now make clocks so sensitive that they can detect the relativistic difference caused by being one millimetre deeper in the Earth’s gravity well.

Facebook open-sourced a logbook documented while building and deploying one of their NLP models. More companies and people should do this sort of stuff.

Finally, I signed a contract with Apress this week to publish a book on Product Engineering on Ethereum. My aim is to raise the relative status of product engineers – those of us who build everything around smart contracts, UIs, tooling, infrastructure – and explore how the unique processing model of Ethereum puts important constraints on the way we build software. I’ll be posting some pieces here as I work through the first draft, so keep an eye out if you’re interested.

3:26pm. June 2, 2022.

Decentralisation is a trade-off space

When you ask a crypto-sceptic what decentralisation means with respect to crypto, they offer these sorts of properties:

  1. Many nodes are run across many jurisdictions; computation happens in a way that can’t ever be shut down or censored
  2. Individuals can access the system for whatever nefarious purposes they wish, without the need for inter alia KYC checks
  3. Power is completely diffused rather than concentrated in any one party; therefore, no individual party is responsible – whether legally, operationally, morally, or all of the above.

These sorts of conditions actually refer to what I’ll call ‘maximal decentralisation’ – a system which is decentralised in every way possible, up to whatever point of diminishing marginal returns seems appropriate. It is different from decentralisation, which refers to a system that can have more or less participation, more or less accessibility, more or less diffused power.

This might seem like an unimportant semantic distinction, but not all semantic distinctions are meaningless, and some are indeed important. How we define our terms matters, because it limns the shape of what we build and gives us criteria against which we evaluate our success. Moreover, if we disagree over what we mean by a term like ‘decentralisation’, we end up arguing fruitlessly. We exchange claims and both sides miss the point.

A lot of crypto people also believe, somewhat reflexively, that we should be aiming for maximal decentralisation. The worst crypto-fanatics are just as ideological as the worst crypto-sceptics. This is a problem. People shouldn’t be fanatical.

For almost every category, maximalism is bad. It ties you into a priori commitments, which reduce your optionality and cripple your ability to adjust to new facts. Strong beliefs are important, but they should generally be held weakly. It also makes you much more likely to act tribally. Humans are tribal, and ‘decentralise everything’ can become a rallying cry, a standard around which troops array for battle. It makes technical questions political, and politics is the mind-killer.1

Crypto people shouldn’t be fanatical, because computing is the art and science of trade-offs, and ‘decentralisation’ gives us a lot more room for manoeuvre than either its critics or its fanatics seem to allow. Trade-offs are useful, and understanding the trade-off space is most of the work.

I know that crypto people don’t actually want decentralisation per se, because when you ask why decentralisation matters, you get answers like:

It makes the technology more accessible and more censorship-resistant. It gives access to a broader set of people, especially those less well-served by existing financial infrastructure.

Which means that their motivations are prior to decentralisation. They want accessibility! They want censorship-resistance, and greater equality of opportunity!

We also want other things. Decentralisation means more competition, and more competition means more innovation. (Diversity is good! Let a thousand flowers bloom!) The ideas being generated and built upon by the crypto community are not commodities: they are meaningfully new contributions to a meaningfully new technology stack; we get more ideas when we have a more competitive space, and we get more competition when people are able to use the technology and gain access to the liquidity needed to prove the new ideas out.

More subtly, another important corollary of decentralisation is that it leads to implicit coordination through standards. And standards make composability possible. So decentralisation is also good because it leads to more composable technologies.

So how then can we think about decentralisation in a more yielding way? How can we make it more supportive of our broader aims – accessibility, censorship-resistance, competition, composability – without holding ourselves hostage to its demands, on the one hand, or giving our opponents a stick to beat us with, on the other?

First, we must realise that decentralisation is a spectrum. It isn’t a binary state. Systems can be more or less decentralised, and what we’re actually arguing over is how decentralised a given system should be.

Second, we must realise that there are different sorts of components in a web3 system, and decentralisation doesn’t just apply to the computation. There are the RPC nodes, the API interface into the computation layer: they can be more or less decentralised. There are frontends, which serve user-friendly interfaces to interact with the contracts via these RPC nodes: they can be more or less decentralised. There is the data itself, the state, which can be more or less decentralised. There is the liquidity, the value locked in the system, which can also be thought of as more or less decentralised. Etc etc etc.

Third, we must realise that how decentralised a given system should be depends on the purpose of that system. A platform that needs to perform a lot of complicated computation can trade off a bit of decentralisation by pulling some computation off-chain.2 Many web3 frontends use the semi-decentralised The Graph to provide indexing, trading away some decentralisation in exchange for faster data retrieval. Almost no web3 frontend uses decentralised alternatives to the substrate of internet technologies on which it sits – DNS and web hosts and friends. This is all okay.

Fourth, we must realise that often just the ability to decentralise means that the goals of decentralisation can be achieved. In other words, centralised parts of a decentralised system don’t immediately make the system totally centralised – it’s not a binary – and therefore useless vis-a-vis the goals of decentralising. This is one of the reasons why Moxie is wrong: while it’s true that there are a few dominant hosts of RPC nodes, such as Alchemy, it is very easy to run nodes, and it would not be especially capital-intensive to start competitors.3 Alchemy won’t abuse their power, because if they did it would be extremely easy to compete with them. The fact that the computation layer and its implementation – e.g. the geth codebase – is permissionlessly accessible means that there are downstream pressures on people who use those implementations not to abuse their power. ‘Counterfactually decentralised’ is a real thing!

Finally, and most importantly, we must realise that we have a choice, as builders and consumers, and that the truest expression of a decentralised system is one in which individuals are given freedom to engage (in the broadest possible sense) with the project (in the broadest possible sense) –– on their own terms. On this count, at least, crypto seems to be doing well, even though aspects of it are indeed centralised.

My point is that crypto-fanatics and -sceptics alike seem to think that the fight is over whether decentralisation is ‘necessary’, which is about as helpful as asking whether a bridge is strong enough, or a cow is fat enough, or a child is confident enough.

Necessary for what?

  1. I hope to write more about the relationship between fact-questions and values-questions in the future. I feel intuitively and reflexively annoyed by the post-modern adage that everything is political, because grounding fact-questions in values removes our one objective standard – our one ability to offer a response beyond ‘I don’t like it’ – not to mention all the other socio-psychological effects that make respectful debate impossible and seem to have poisoned the epistemic commons.

  2. Ethereum does this itself by not permitting contracts to autonomously call other contracts; mechanisms like MakerDAO’s liquidation process need to be triggered by off-chain actors incentivised through rewards. There are ways of solving the obvious trust problems, usually through incentives, if you believe they are problems. Much of the time they are not. 

  3. We know this, since there actually are a bunch of competitors.

9:36pm. May 24, 2022.

Links, April 2022

April has been busy and changeable – mirroring the weather, I suppose. I began a new role at Fei Labs and am getting settled. I wrote three pieces on this blog: something technical on HD wallets and network switching, something equally technical about Safari iOS extensions, and the aforementioned notes on the t11s episode of Solidity Fridays.

Notwithstanding my own contributions, this was a good month for interesting reading. The Star Builders is a very accessible and fast read on the why and how of nuclear fusion. It’s a contribution to the – I think underrated – genre of “here’s a plausible pathway to some major scientific discovery, and a set of good reasons to be optimistic we’re on that pathway” science writing. One for the techno-optimists’ bookshelves.

Emily St. John Mandel’s new novel, Sea of Tranquility, is out, and is splendid. I found some of the science fiction elements a little bit hamfisted, and this is definitely Mandel finding her voice and her groove. But it’s worth reading, especially if you’ve read Station Eleven (one of the best novels I’ve read in the last few years) and The Glass Hotel.

Seven Brief Lessons on Physics was enjoyable, too. Best supplemented by something more technical, I think.

Finally, I just finished London Under by Peter Ackroyd. I loved Ackroyd’s biography of Sir Thomas More, but hadn’t read any of his other books. He’s an excellent writer. There were a handful of places where I thought he made non-sequiturs – the sort of non-sequiturs quite common in a certain style of proto-academic culture writing – but I’m being really picky.

The excellent Noah Smith did an interview with the excellent David Roberts on climate change, climate tech, climate activism, writing in public. Both combative and thoughtful throughout.

A challenging post on child abuse videos and a global coordinated effort to bust a specific site and its clients. A long and difficult read but very worthwhile.

Something to keep an eye on – an economics lab imagining systems through science-fiction.

In case you’ve woken in a sweat, panicked that you’re not subscribed to enough substacks (subsstack?), here’s a list of good, subject-specific newsletters.

Very, very good and comprehensive and thorough guide to direct carbon removal. We need to do more here; Stripe and others have launched an advance market commitment to help.

Stripe have also launched crypto payments.

The beginning of the conversation, not the end of it, but some reflections on SpaceX’s technologies and warfare.

Really enjoyed this article on the urban history and development of London’s planned and, blissfully, abandoned Ringways; the persistence of political structures and the perversity of political incentives; and the pitfalls of top-down urban planning. Works in Progress is putting out a lot of excellent stuff.

On top of my normal reading, I listened to three audiobooks this month, all spacey or science-related content. Another for the techno-optimist bookshelf, The Case for Space by Zubrin was good fun, opinionated, rigorous and rather inspiring.

Forces of Nature, from the same authors as The Planets (mentioned last month), was also splendid. The narrator is truly excellent.

Into That Silent Sea: Trailblazers of the Space Era, 1961-1965 by French and Burgess is part of a broader series of histories on various topics in spaceflight. The whole series is pretty good, with only a few exceptions, and this isn’t one of them. Meticulous and narrative-driven, I learnt a lot about Gagarin, Leonov, the Mercury Seven, and the politics that made it all happen. I’ll be continuing this series.

I also finally got a Calendly set up, so if you’d like to chat, feel free to book in a call.

9:10am. May 1, 2022.

Notes on Solidity Fridays Episode with Transmissions11

December’s Solidity Fridays with transmissions11 is really excellent, and I think a very high-leverage way to learn how good Solidity actually gets written. When the guest is especially good, as with transmissions, I put Solidity Fridays in the same sort of category as Destroy All Software – teaching by communicating models and patterns of thought, rather than regurgitating tutorial content.

Anyway, here are my notes in case somebody else finds them useful:

  • When emitting events, t11s emits instances of the contract rather than the address directly; the compiler will swap it out for an address anyway, but this approach gives you greater type safety

  • Use CREATE2 so that a deployment reverts if the contract has already been deployed, by adding a salt derived from the underlying address to the deployment

  • Addresses are 20 bytes, salts have to be 32 bytes. So we call fillLast12Bytes to add the remaining bytes (provided in Solmate’s bytes32 library)

  • The gas cost of > is equivalent to != when the comparator is 0, more expensive otherwise.

  • Illustration of his opinionated approach to smart contract development: “the performance of the code for users who aren’t stupid matters” - that’s why the [[ERC20]] implementation in [[Solmate]] doesn’t stop you from transferring into the contract’s address. “I’m not raising the cost for everyone else”

  • fdiv is like division, but accounting for the bases. ‘Scale this down by the contract’s base’. Multiply the numerator by baseUnit and then divide the numerator * baseUnit by the denominator. Also checks for overflow on numerator * baseUnit, since overflow isn’t protected in assembly calls.

  • In general, keep external calls all the way at the end of the function – including after any events are emitted – to make reentrancy more difficult.

  • More important with e.g. ERC777, since safeTransferFrom might allow arbitrary code execution (https://eips.ethereum.org/EIPS/eip-777)

  • Removing things from the end of an array is significantly cheaper than from the beginning, since in the latter case you have to move everything over.

  • uint256 currentIndex = withdrawalQueue.length - 1; for (; ; currentIndex--) is better in this case than initialising uint256 i = withdrawalQueue.length - 1, since we’re only doing effects (the tx will revert by underflow automatically), so we don’t need to check the length. Saves gas.

  • Add a trusted boolean to the strategies, which is then checked on deposit and withdrawal. Makes it easier for EOAs to manage vaults without having to be wrapped in some other contract. Also makes it possible to disable withdrawal from strategies easily if they’re malicious in some way.

  • Two reads from the same struct via getStrategyData[strategy] incur no extra gas cost, since the compiler optimises them into a single SLOAD. & it makes it clearer to a dev where the data is coming from.

  • Use unchecked when you know you won’t underflow or overflow, and can therefore do without the safety. Saves gas.

  • unchecked isn’t leaky - it won’t uncheck in nested function calls

  • The implementation of Compound’s cToken is a little funky, since in a lot of places function calls return an error code rather than revert. So sometimes you need to require(cToken.blah() == 0) to ensure it succeeded.

6:38pm. April 6, 2022.

Links, March 2022

A lot of spacey content this month, and a lot of crypto, as I left my old job at Pactio and moved into crypto full-time:

Finally got round to reading Values by Mark Carney. Seesaws from economic theory to memoir in a not-uninteresting way. Carney writes well, but sets things up in such a manner as to make his premises seem more interesting than his conclusions. A very safe book. I imagine he’s going to run for public office in Canada some time soon.

Also enjoyed The Power Law by Sebastian Mallaby. Clean writing, thoroughly researched.

Curricula for self-teaching maths and physics from Susan Rigetti.

I wrote a couple of posts on being an enthusiastic amateur.

The first test image from the James Webb Telescope, of the star HD 84406, is pretty spectacular (and even more so the more you learn about it). You can clearly see the spiralling of the galaxies in the background, each one comprising on average 100 billion stars, and many of them billions of lightyears away. The scale of space is very hard to comprehend.

Nadia Eghbal is writing again, which is always a joyous event, this first new essay a gesture toward a broader project on philanthropy and the tech industry. Her prose is both incisive and imagistic, twisting and deforming ideas in the best way possible, finding their veins, snapping them like kindling.

An essay on infinite ethics, an approach to ethics that takes the existence of infinites seriously, and how infinity fits into the logical structure of existing mainstream ethical theories.

All of physics in nine lines. I’m surprised the basic theoretical scheme of physics is so parsimonious. (Although it might not actually be that parsimonious and this is expository sleight-of-hand. What, for instance, explains why there are 27 constants?)

A fun collection of weird ERC-20 contracts, mostly exploits or incompatibilities with conventions.

I enjoyed watching this episode of Solidity Fridays with transmissions11. He articulates trade-offs very well. I took voluminous notes that I’ll type up soon.

On top of my normal reading, I listened to three audiobooks this month. The first, Spacefarers by Christopher Wanjek, is freely available to Audible subscribers, and is a smart and deeply technical look at the next thousand years of spaceflight.

The second, The Planets by Andrew Cohen and Brian Cox, is a book about the history and physics of the Solar System, a companion to the 2019 BBC television series (which is itself really excellent.) Samuel West’s narration is extremely good, and Cohen is a talented science writer.

The third, also available for free on Audible, was a collection of Scientific American articles about Exoplanets. The article format is helpful, and the narrator’s voice is just monotonal enough to fall asleep to.

Emily St. John Mandel wrote a series of notes on Goodreads, discussing various passages from her excellent novel Station Eleven.

Vitalik on the roads not taken.

The user experience problems of quadratic voting. It’s easy to evaluate an approach to some problem in terms of its technical feasibility, or how attractive it is with respect to various theoretical constraints. A lot of the time, its success hinges simply on whether people can understand it.

A very, very good blog post on NHS performance. We need more LessWrong-style analyses of British government policy.

5:46pm. April 4, 2022.

A quick note on web_accessible_resources in Safari Web Extensions

The Safari Web Extension mechanism is an important step forward for iOS apps – and a significant part of my current project, on which more soon – but the documentation is a little fragmented and the tooling isn’t nearly solid enough just yet. Developing these new extensions is challenging.

A small contribution to changing that:

By default, extension files that aren’t explicitly whitelisted in the manifest.json file are inaccessible from the browser. One common use case of the content.js script is to inject scripts into the same execution environment as the active web page. However, no such injection is possible unless the loaded file is whitelisted.

The manifest.json spec defines a web_accessible_resources parameter, which allows extensions to whitelist resources for access from the browser.

Thus, the following whitelist:

"web_accessible_resources": ["images/foo.png"]

Allows you to generate the resource’s URL with the following code:

browser.runtime.getURL("images/foo.png")

Which gives you the following sort of URL:

safari-web-extension://2AB33852-D69B-4ED6-99AD-A4839DFEC7ED/images/foo.png

The ID directly after the protocol in that URL is generated on a per-browser-instance basis, so you can’t guess it ahead of time. You have to call getURL to generate it.

There are two important things that the API documentation doesn’t specify, which took me some time to figure out:

  • browser.runtime.getURL is only defined inside the extension’s execution contexts. So the content.js and background.js files are fine, but the webpage itself is not. If you want to use the URL in the webpage context, you’ll need to generate it in the extension somewhere and communicate it via the window.postMessage event trigger (see the sketch after this list).

  • Any values used in the web_accessible_resources parameter must be nested under a subdirectory. If you try to call a top-level file (such as getURL("foo.png")), the URL will generate fine, but the file itself won’t be loadable. The browser will simply report it as inaccessible.
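
To illustrate the first point: a minimal sketch of generating the URL in the content script and handing it to the page. (The message type string here is my own arbitrary choice.)

// content.js – runs in the extension context, where browser.runtime exists
const resourceUrl = browser.runtime.getURL("images/foo.png");
window.postMessage({ type: "extension-resource-url", url: resourceUrl }, "*");

// In the web page's own scripts, listen for that message:
window.addEventListener("message", (event) => {
  if (event.data && event.data.type === "extension-resource-url") {
    console.log("Extension resource:", event.data.url);
  }
});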

Hopefully this saves somebody else some time.

10:30pm. April 3, 2022.

HD wallets and network switching

Blockchain ‘wallets’ are generally just pairs of public and private keys with some UI wrapped around them.1 We take the private key, and use it to derive the public key, which we then use to derive the wallet’s address.

What’s important is that the process of derivation is very difficult to reverse, in the same way that a hashing function is difficult to reverse: the chance of you guessing the private key correctly at random is about the same as selecting one atom from all the atoms in the universe – and there’s no better way than guessing at random.2 We can therefore use the wallet address publicly, being able to prove mathematically that we own it, without ever leaking information about the private key we used to generate it.
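
As a concrete sketch of that asymmetry, here’s the forward direction using ethers v5 (the key below is a throwaway, for illustration only):

import { Wallet } from "ethers";

// Never hardcode a real private key; this one is illustrative only.
const privateKey =
  "0x0123456789012345678901234567890123456789012345678901234567890123";

// Private key -> public key -> address is cheap and deterministic...
const wallet = new Wallet(privateKey);
console.log(wallet.address);

// ...but there is no feasible computation from the address back to the key.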

This works great, until you need more than one wallet. You might be concerned about privacy, or you might want to keep certain types of transactions separated for tax or other organisational reasons. If you have more than one wallet, you need to manage more than one set of private keys, back each key up separately, store each key separately, restore each key separately, etc. This presents a user experience problem: it is inconvenient, and clunky, and pushes a lot of the infosec responsibility onto the user. That might be acceptable for a bunch of nerds or anarcho-libertarians, but isn’t going to cut it for the median user.

The agreed-upon solution to these UX problems is Hierarchical Deterministic (HD) wallets, proposed in the Bitcoin BIP-32/44 standards and used by most other chains. This post considers this standard, how we’re not meeting it, and why it matters.

The plan, in three sections:

  • A short overview of what HD wallets are. Feel free to skip over this if you’re familiar with the spec already.
  • A discussion of how common wallets are not meeting this standard.
  • A discussion of why that matters, and what we could do about it.

HD Wallets

Hierarchical Deterministic (HD) wallets take the basic derivation mechanism and encode structure into it. We take a master password – a single thing for the user to remember, to back up, etc. – and combine it with a path, a string following an a priori agreed-upon schema that allows us to generate multiple private keys from the same master password.

But it needn’t actually have much structure at all. You could simply take a master password and append 1, 2, 3, and so on, to generate different wallet addresses. This strategy would generate perfectly usable wallets with no obvious link between them. And since generation follows the same general sort of process as in the single-key case, it produces hashed values that are similarly difficult to reverse.
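
A sketch of that toy scheme, again with ethers v5. (This hash-and-append construction is just the post’s illustration, not a real standard; actual HD wallets use BIP-32’s HMAC-based derivation instead.)

import { Wallet, utils } from "ethers";

const masterPassword = "correct horse battery staple"; // illustrative only

// Derive wallet n by hashing password + counter into a 32-byte private key.
function naiveDeriveWallet(n: number): Wallet {
  const key = utils.keccak256(utils.toUtf8Bytes(masterPassword + n));
  return new Wallet(key);
}

console.log(naiveDeriveWallet(1).address);
console.log(naiveDeriveWallet(2).address); // no visible link to wallet 1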

We therefore only really need two pieces of information to calculate our wallet address:

  • Our master password
  • Some sort of seed

The master password is the user’s responsibility; it’s her input, her secret. What seed should we use?

One option is to let the user specify whatever sort of seed she wishes. But this doesn’t really solve our problem: instead of multiple private keys, we now have to deal with a single password plus multiple paths. We’ve just given ourselves more passwords to remember.

Another is to do what I suggested above: append an incrementing integer to the end of it to generate different wallets. This is equivalent to giving ourselves more passwords, but at least there’s some rationale to it: our first wallet has a 1 at the end, our second wallet a 2, etc. It gives us some psychological safety: it means that our wallet is recoverable (assuming we can remember which number we used to generate it, or assuming we don’t mind iterating through a few guesses). This approach is fine, as far as it goes, but this is crypto, so, given the opportunity, we should make it more complicated.

A third approach is to develop a common standard for generating our seeds with more variables than just an incrementing number. This way, we can describe a tree structure independent of its values, embedding multiple values with which we might want to generate differing wallets. The benefit to this approach is that we can encode information about the purpose of the wallet into the seed itself, and then recover it later using our knowledge of those purposes without having to remember many arbitrary numbers. The standard gives us the template, and the purposes give us the values of the variables; all we have to do is fill them in. The other benefit to using a common standard is that wallet software can implement the standards too, so you don’t need to generate the wallets off-site somewhere.

This standard is called BIP-44 (it was originally a Bitcoin standard), and it presents exactly the sort of predictable tree structure that we’ve been discussing. The goal is to minimise user input and maximise the number of wallets that can be generated with a single master password.

The standard calls the seed a derivation path, since it’s a path in a tree that we append to a master password and use the resulting string to derive a public address. The standard gives derivation paths the following structure:

m/purpose'/coin'/account'/change/index

And here’s the trick: most of these values are knowable by the wallet software, based on what sort of wallet you’re using:

  • purpose is always 44'.3 They gave it a value to allow them to upgrade the standard if they wanted to.
  • coin varies depending on the crypto network. For instance, coin = 60' is Ethereum mainnet, and coin = 966' is Polygon.
  • account gives the wallet a degree of freedom to support multiple user accounts (cf. the /Users/username directory on your OS)
  • change will generally be 0; it refers to whether the wallet should be used externally, or used internally to the wallet for Bitcoin-style transaction-change reasons. I’ve read somewhere that Ethereans sometimes use it, though for what I’m not sure.

The only non-guessable input value is index, which gives the user a degree of freedom to generate multiple wallets under the same tree. This parameter is why the user can generate many wallets for a single password: she can keep incrementing index to generate more! It’s also exactly the same as my much simpler idea discussed previously.

These parameters then get put into the structure, like so:

m/44'/60'/0'/0/2

The structure then gets combined with the master password (or, more precisely, with a key generated from the master password), and users (or wallets) can vary coin, account and index to generate various wallet addresses.
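
In practice, wallet libraries implement this derivation for you. A sketch with ethers v5, using the well-known public test mnemonic in place of a real master secret:

import { Wallet } from "ethers";

// A published test phrase – never use it for real funds.
const mnemonic = "test test test test test test test test test test test junk";

// m/purpose'/coin'/account'/change/index
const first = Wallet.fromMnemonic(mnemonic, "m/44'/60'/0'/0/0");
const third = Wallet.fromMnemonic(mnemonic, "m/44'/60'/0'/0/2");

console.log(first.address); // the 'default' address most wallets derive
console.log(third.address); // a different address, same master secret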

Existing UIs and a Subtle Incompatibility

This isn’t a huge, bombshell-dropped discovery, I’ll admit, but I’ve noticed that most wallets with support for both HD wallets and network switching don’t actually implement BIP-44 correctly – or, at least, there is a tension between the model used for network switching and the model used for wallet generation.

Generally, what happens is:

  • Users add a master password (or its equivalent in the form of a mnemonic phrase) from which the wallet derives a single keypair
  • As far as I can make out, the ‘default wallet’ generated through this mechanism still uses the HD standard; it just relies implicitly upon the m/44'/60'/0'/0/0 derivation path (i.e. “give me external index 0 at account 0 for the Ethereum chain”).
  • When the user switches between compatible chains – from Mainnet to Arbitrum, for instance – the wallet software uses the same wallet address and private key to sign new transactions. It just switches the RPC endpoint it uses to make the request (sketched below).
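
That behaviour, sketched with ethers v5 (the RPC URLs are illustrative placeholders):

import { Wallet, providers } from "ethers";

const wallet = Wallet.fromMnemonic(
  "test test test test test test test test test test test junk" // test phrase
);

// 'Switching network' = same key, same address, different RPC endpoint.
const mainnet = wallet.connect(
  new providers.JsonRpcProvider("https://mainnet.example-rpc.invalid")
);
const arbitrum = wallet.connect(
  new providers.JsonRpcProvider("https://arbitrum.example-rpc.invalid")
);

console.log(mainnet.address === arbitrum.address); // true – same address on both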

If wallets were to follow the standard correctly, they would be varying the coin value when switching networks, generating different wallet addresses for use depending on the network being used. In other words, according to BIP-44 at least, there’s no such thing as a ‘cross-network address’ – and existing wallets ignore this subtle fact entirely.

I’ve been looking at how various different wallets handle this, and they all seem to do the same thing:

  • MetaMask’s network switcher is entirely independent of the wallet list, allowing the user to switch networks on the current wallet, even if that wallet was generated through a derivation path.
  • MyEtherWallet do the same thing, switching the network URL used for chain interactions and not (as far as I can see) adjusting the corresponding wallets.
  • Similarly, there is nothing in the WalletConnect spec preventing this behaviour, meaning that any HD-compatible wallet software using the protocol facilitates wallet-independent network switching.

The problem is not so much that nobody’s trying to follow the spec. The problem is that the spec is ambiguous with respect to the UI in which it’s being implemented. The community therefore has implicitly converged on this non-standard behaviour because of the ostensible UI benefits. This has created an implicit standard incompatible with the original BIP-32/44 proposals.

It gets even more confusing when you notice that there is a third, Ethereum-specific standard, EIP-601, designed to modify the BIP-44 standard for Ethereum use cases. From a brief Google, I can’t see any mentions of EIP-601 that aren’t merely links to the spec itself. But this ambiguity – what should happen to the valid wallet list when the user switches networks? – isn’t resolved by EIP-601 either.

This ambiguity arose because the BIP-32/44 standards were built around the assumption that the different networks a user might switch between were mutually incompatible. They didn’t foresee the rise of EVM-compatible layer 2s, and a range of dapps built to run on several of them concurrently, and therefore the capacity for the user to switch between networks easily, in-app.

Why this matters, and what to do

Of course, this doesn’t seem like a critical problem – there are bigger problems we could be tackling, for sure. Indeed, there’s even something comforting about going from Polygon to Ethereum Mainnet and taking your address with you. It’s certainly convenient. But this isn’t what the BIP-32/44 specs say, and I think there actually are good reasons to obey them more precisely:

  1. It makes it possible to upgrade the spec in the future. The standard can evolve safely, and those implementing it correctly are able to evolve without having to hack in workarounds for backward compatibility, and keep track of previous fringe behaviours.

  2. It makes interoperability with other wallets easier. Wallet onboarding and offboarding isn’t a light matter; the more activation energy required to move to one wallet from another, or from no wallet at all, the more intimidating crypto as a whole will become to the marginal user. Problems at the tail-end often get publicised more than problems at the mean.

  3. Not doing so undermines one of the main reasons to use HD wallets in the first place: HD wallets allow you to keep public references to different addresses separated, increasing privacy. A wallet address that comes with you cross-network just makes your transactions that much easier to track.

Fortunately, I don’t believe that the UI concessions made by existing wallet implementations need to be locked in. There are some steps that wallets could take today, such as triggering a confirmation modal when changing networks, that would enable users to opt out of the spec. Many users don’t knowingly use HD wallets at all; in these cases, the default behaviour could just clear the wallet list and regenerate using the standard specs on network change.

Or, alternatively, we could develop a new, more parsimonious standard to capture the semantics of cross-chain wallets, compatible with the current UI approach. One simple method would be to amend the current spec such that network = 0 means ‘no specific network’, allowing cross-chain wallets to be represented in the existing spec. If a network changes while a user is connected with a wallet known to be generated with network = 0, the wallet persists.
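
A sketch of how a wallet might implement that amendment. (Hypothetical: note that coin type 0 is already assigned to Bitcoin under SLIP-44, so a real proposal would need an unallocated sentinel value.)

// null = 'no specific network', per the proposed amendment above.
function derivationPath(coin: number | null, account = 0, index = 0): string {
  const coinType = coin === null ? 0 : coin; // 0 as the cross-network sentinel
  return `m/44'/${coinType}'/${account}'/0/${index}`;
}

console.log(derivationPath(60)); // m/44'/60'/0'/0/0 – mainnet-specific wallet
console.log(derivationPath(null)); // m/44'/0'/0'/0/0 – persists across switches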

Either way, this is exactly the sort of subtle incompatibility that could prove to be an increasing nuisance, compounded by the ongoing growth in usage of layer 2s. Our standards for network switching were designed at a time when the only networks we would switch between were testnets. Today, the UI implications of network switching are a lot more important. And, today, that is incompatible with one of the most useful standards we have for managing multiple wallets.

Multiple wallets, multiple networks, good UX. We don’t need to pick only two.

  1. The name wallet is therefore a misnomer, since the wallet itself doesn’t store anything; it’s much closer to a username and password for online banking than to the vault itself.

  2. Ethereum private keys are 256 bits. Since a bit has two possible states, guessing a 256-bit sequence correctly at random has a chance of 1/2^256. There are ~10^78 atoms in the observable universe, which is ~2^260. If you know the Ethereum address of the wallet you’re trying to get into, it’s slightly easier, since wallet addresses are only 160 bits long, but it’s still a very big number.

  3. The apostrophe in the path tells the key generation algorithm to use the ‘hardened’ form of the derivation, which puts extra constraints on the derivation such that derived public keys can’t be proven to be derived from a given parent public key using only public information. The details here are a little tricky, and outside the scope of this post. 

12:35pm. April 2, 2022.

Enthusiastic amateurs: some practical tips

Yesterday, I wrote about enthusiastic amateurs, a model for how I think about the trade-off between expertise and being a generalist. The median person is unlikely to become an expert, and pursuing expertise can be very costly, so perhaps there is a better path for the median person to take. This path is, roughly, to explore more, learn broadly, and rely on the interconnections between ideas to add value.

If that discussion is at all salient for you, a natural next question is how one ought best cultivate the characteristics of an enthusiastic amateur.

With the very big caveat that I’m still figuring this out myself, here are a few ways that seem to work well for expanding my interests and developing the sorts of knowledge that are additive rather than distracting:

  • Optimise for breadth. This might seem like trivial advice given the definition of an enthusiastic amateur, but it’s amazing how much more breadth can be gained by asking at a higher-than-normal rate “does this decision expose me in a meaningful way to more interesting stuff”. Follow blogs on subjects you know nothing about. Listen to lots of podcasts from lots of experts. Get used to clicking around Wikipedia aimlessly.
  • Avoid optimising for depth. I think optimising for depth is the default pathway, in many important ways, for lots of mostly contingent cultural reasons. If you want to be an enthusiastic amateur, you should resist the urge to optimise for depth. A lot of the stuff you do will also involve developing depth in a given field, but you shouldn’t be afraid to forgo depth in service of breadth, and then let depth develop naturally across a range of subjects, rather than by sacrificing breadth on the altar of depth.
  • Cultivate enthusiastic amateur friends. Enthusiastic amateurs usually have a richer and more idiosyncratic answer to “how can you do X better?”, generally because they actually end up answering a different question: “how do I think about X differently?”. They’re also very likely to recommend books and other media sources that might stray from the experts’ canon. (Interintellect is a good place to start! So is Twitter!)
  • Cultivate expert friends too, but recognise their expertise might skew their answer away from breadth. Expert friends can teach you things you will never learn otherwise. They’re extremely good at nudging you away from dead-ends in their fields. It can also be a valuable way to get personalised feedback on your projects that sit in their domains. But experts are also more likely to rank you and your work against the norms and common knowledge in their field, which can lead you to develop the same sorts of blind spots that they do. It’s difficult to see the water you swim in.
  • Quit more. Quit early, quit often. Discipline is overrated. Projects that languish can be discarded. You shouldn’t forget that the sunk cost fallacy is still a fallacy, even when you’re labouring under it. If you’re at all like me, you should give yourself more permission to halt, reverse, rework or otherwise abandon some interests and projects as others begin to take their place. You’ll float back to things as and when you’re in the mood.
  • Use Anki and take notes. Breadth means you need to build more branches on the knowledge tree. You’ve got fewer coat-hooks, as it were, upon which to hang new facts. Popular science television, for instance, even the not-so-good stuff, is usually packed full of non-obvious observations and the distilled wisdom of experts. Even more so for the really good stuff; even more so still for books. People consume this content passively, and so don’t retain it. The trick is to consume actively.
  • Talk more. Connections between ideas often rely on pragmatics – how things are said, and in what context – rather than the actual semantic content of the connection. So talking about your latest subject of interest with smart people in a variety of different contexts can help you see beyond the standard narrative and see links that others might not.
  • Write more. This is the sort of trite advice you hear often, but it can’t be repeated enough: so much of the process of writing just is thinking, and forcing yourself to write helps you retain what you’re learning and distill an argument or theory or model to its essential components.
  • Think about thinking. How can you better distill an idea to its essence? How can you better think about the content you’re consuming, and what you do with it once it’s consumed? How can you curate your inputs in a way that leans towards high-quality breadth?

None of these methods are foolproof, but they point towards an enjoyable and rich intellectual lifestyle that doesn’t involve the sort of high-risk turmoil attached to pursuing expertise.

Being an enthusiastic amateur is like giving up for smart people.

6:25pm. March 10, 2022.

Enthusiastic amateurs

In this post, I’d like to try to raise the relative status of the casual polymath, at least insofar as it motivates an individual to decide what she should work on. It seems likely to me that pursuing expertise is overrepresented in career-advice-giving contexts, and that we should try to reframe not being an expert in a more positive light. We fetishise a very specific sort of expertise – A Beautiful Mind, 100-hours-a-week, obsessional expertise – as the gold standard for living meaningful intellectual lives. I’d like to suggest that there’s an alternative approach.

So here’s my reframing: more of us should try to be enthusiastic amateurs.

Firstly, as should be clear to anybody who interacts with me, I am not an expert at anything. And it’s quite possible that I’m telling this story to coddle my psyche; to bolster my self-confidence as I cling tightly to µ; to sit more readily on my averagely-comfortable chair, drinking Nescafé, as I type on my mid-range laptop with my average-sized hands. I will never have the temperament or talent to be world-class at anything, and I’d still like to be able to sleep at night.

However, I think the life of an enthusiastic amateur is not only a good one, but that cultivating it is also often the rational choice for somebody to make.

At the very least, the expertise norm is overrepresented, and there’s value to be gained by exploring an alternative.


Who are enthusiastic amateurs? Enthusiastic amateurs are people that work hard to see the world through as many lenses as possible. They care less about being great, and more about being good enough. They aspire to be polymaths, but recognise that the definition is wanting.1

How do they compare to experts? An expert is a hedgehog; an enthusiastic amateur is a fox. An expert relies on precision; an enthusiastic amateur relies on scope. An expert is tenacious; an enthusiastic amateur is mercurial. An expert toils; an enthusiastic amateur plays.

There are obvious trade-offs to chasing expertise, and the world obviously needs people willing to make those trade-offs (an enthusiastic amateur isn’t going to engineer a rocket good enough to get humans safely to Mars, although one might build a company capable of doing so). The trouble is, you don’t really know if you’ve got the capacity to be an expert at anything until you already are one. Mozart is the exception, not the rule. Unless you feel the gods have conspired to put you where you are, expertise is the sort of thing you need to work very, very hard to achieve.

From a position of uncertainty relative to one’s own abilities, then, deciding to pursue excellence in one thing seems like a risky strategy. You could chase expertise, drill, rinse, repeat. Develop slowly a garrison of discipline and knowledge and finely-honed tools for solving the more abstruse problems in your field. Learn deeply, and feel engaged in some sort of higher purpose; luxuriate in our collective teleological hangover.

That’s the success path. There’s a failure path too. You chase expertise, drill, rinse, repeat. You spend early mornings and late nights playing your scales. You run up against your natural limits, and you don’t push past them. You continue to push, because you’re told there are diminishing returns and you need to keep working. But you never actually get past that point. You learn to work around your limitations in various ways in order to make this Sisyphean effort seem worthwhile, but you’re really just fooling yourself into thinking you’re getting more competent. Or, even if you are improving, you might never close the gap between you and your nearest competitor. There are a lot of dedicated, hard-working, decidedly non-expert people. There are far fewer who will meaningfully change their field.

You’re probably not going to become an expert. And if you’re probably not going to be an expert in the one or two things you care enough about in order to try, you’re more likely than not setting yourself up to fail. Failing can be a pretty unpleasant experience, and fighting through failure is so often a pyrrhic victory. Being irredeemably bad at something isn’t fun. This is an important psychological cost to factor in before you dedicate your life to something. (Expected value theory might be useful here: what’s the cost of failure multiplied by the probability that failure will happen? If you’re honest with yourself, it’s not likely that that number will be a ringing endorsement of pursuing expertise.)

As well as the psychological cost incurred when you wrap up your identity in a métier and then fail to live up to your own expectations, the pursuit of expertise has high opportunity costs, too: the costs incurred by not doing the other things that you could be doing while you pursue expertise. What you enjoy doing often changes, so if you spend the time becoming an expert, slogging over the plateau, it’s likely that you’ll miss out on a bunch of possible fun that you could have were your focus more elastic.

Another cost: I’m not convinced that there are always diminishing X-returns for X-ing,1 but there is a subset of Xs for which there are certainly diminishing social returns. You don’t need to be a Master of Wine to impress most dining companions: even if they are Masters of Wine, most other people are so far away from even passably knowledgeable about wine that a middling level of understanding can yield the majority of the benefits – the signalling power – that you can get from knowing about wine. In other words, you don’t need to be an expert to be impressive, even in the eyes of other experts.

If there are less obvious costs to being an expert, there are also less obvious benefits to being an enthusiastic amateur. It’s easy to underrate the benefits to being competent at a lot of things, especially when they’re compared to being excellent at one thing.

The world can often seem set up to reward experts more and reward enthusiastic amateurs less: academia seems to be a 1000-year experiment to institutionalise this model. But such entrenched reward systems often offer the opportunity for arbitrage. Being good enough at lots of things means that you can often see connections between subjects that experts, siloed into their conceptual schemes, can’t.2 Philip Tetlock argues that being a fox makes you, on average, a better predictor of the future, for much the same reasons. Academia is famously siloed, but some of the best papers I’ve read are clever precisely because they apply techniques from one field to the problems of another. There is such a thing as gestalt knowledge, and I’d wager that enthusiastic amateurs are better at finding it than experts.

On the other hand, there’s definitely some class of problems which require deep expertise to see and understand and solve. Some problems need smart people to sit and think very hard about for a long time. But I think we generally over-index on this sort of expertise, both institutionally (via the peer-review process) and in a more broad sense, culturally.

Being good enough has another interesting corollary: being good enough at a range of things creates interesting intersections at which you can be an expert. I might not be an expert programmer, or the world’s best philosopher, or the world’s foremost authority on wine or US politics or any of my other interests. But I’m probably in the top 1% of the general population at the intersection of those things, simply by virtue of the rarity of that intersection and my enthusiasm in pursuing them. By cultivating wider interests, and by getting good enough at a broad range of things, you can carve out interesting niches which give you both the ability to be world-leading in that niche and also emerge naturally from the explorations you make, rather than because of your dogged pursuit of decisions made a priori. In other words: being an enthusiastic amateur doesn’t mean you need to give up your edge. (As long as there are only a few enthusiastic amateurs, being an enthusiastic amateur might itself be an edge.) And, nowadays, niches can pay.

One of my smartest friends pointed out that the pursuit of enthusiastic amateurness is a very Theory of Action-driven thing. That is, it suggests answers to the question “what should I do next?” rather than “what should I do in order to achieve XYZ?”. He’s right, of course, but an antecedent desire to achieve XYZ is the hallmark of a wannabe expert, and therefore not per se the sort of thing that enthusiastic amateurs will be concerned with. The sort of long-term goals that Theories of Change point toward are often, at least at the subject-level3, underspecified or weighed inappropriately in the more general calculation about how one should live one’s life.

The same friend also pointed out that advice is often written for the wrong people, and that being an enthusiastic amateur might also incur costs. One potential cost: it might make it more challenging to signal your commitment to a group, and therefore more difficult to embed yourself in a community of peers. He suggested for this reason that, to the extent one can choose one’s community, it’s better to be more specialised (so that expertise is rewarded proportionally). I don’t disagree. It might be. But presumably groups of enthusiastic amateurs – LessWrong? Interintellect? – interested in how best they can be enthusiastic amateurs, exhibit the same sorts of dynamics. If you’re looking to signal your commitment to a group, “enthusiastic amateurs” might not be a bad group to join.


Enthusiastic amateurs aren’t sloppy, or dismissive of expertise. The point is not to be bad at lots of things. It’s to recognise that expertise isn’t the end of the story, and that being good enough at a lot of stuff is often so much more rewarding than being really good at one thing. For many people, expertise is just out of acceptable reach. Whatever you’re good at, there is likely a Chinese toddler doing it better than you could ever hope to. Some people are born with the requisite interest and determination and tenacity to pursue excellence at one big thing. Many, many people are not. I’m pretty sure that a lot of what goes into becoming an expert in something, and sustaining that expertise, is a slog; that a lot of people don’t enjoy it as much as they think they should; that their response to being uninspired is to accept being mediocre; and that this shouldn’t be where careers advice leads. As a result, I don’t think that traditional expertise-oriented career advice is especially good advice for the median person.

Being ‘good enough at X’ for many Xs is completely attainable and, I think, can often set you up to be rewarded socially and commercially. There are lots of people who should be emboldened by the fact that expertise is one way amongst several to slice the pie. You can have a rich and rewarding intellectual life without demanding of yourself that you know what you’re destined to do from an early age, or even that you’re destined to do anything; you can indulge your broader interests without them immediately being written off as procrastination. It’s also playful, in an earnest sense. For many, the life of an enthusiastic amateur is, I really, truly believe, a lot more fun.

  1. I don’t, for instance, think that the piano-benefits to becoming an expert pianist diminish with more practice. As far as I can tell, being able to play a complicated piece marginally better does unlock new modes of expression and new value in the piece, and, moreover, that seems to happen proportionally to the amount of work you put in. Once you’re an expert, small variations can produce outsized results. This seems especially true in competitive zero-sum games that get repeated over time, like two tennis players facing off regularly. Relative to me, Federer’s marginal training session likely won’t change the outcome of our match. Relative to Nadal, it seems, that extra practice might make all the difference. 

  2. For some examples, see David Epstein’s Range.

  3. I’m not quite sure about the meta-level. It might still make sense to pursue expertise at goal formation, or productivity, or something. Enthusiastic amateurs tend to be quite productive, relative to the mean. But maybe that’s because a lot of what they do is amongst the lower-hanging fruit, rather than because they’re productivity experts or aiming to be such. 

9:23pm. March 9, 2022.

How to think about recommendations

I make a lot of recommendations for restaurants. I also receive a fair few.

Unless the facts change from out under my feet – one day I’ll tell you a story about The Marksman – I think my recommendations are generally pretty good. But I would, wouldn’t I? Unless I don’t like you, I’m not going to recommend things I don’t think are good recommendations.

It’s very important to be careful when recommending. If you eat out often, say ~3 times / week, you can expect to have ~9,300 meals over a 60-year adulthood of eating. That isn’t many meals! I read roughly a book per week. That’s ~3,120 books in the same adulthood. That isn’t many books! So each meal and each book has to count. & many people eat out many fewer times per month and read much less. Centrally: you should respect the time and money that people will spend based on your recommendations.

It’s also easier to recommend things in the indirect-objectless sense – recommending to nobody in particular – as I do in the restaurant list above. But recommendations are often recommendations to somebody, in some context, for some purpose.

In these cases, how should we tell which recommendations to listen to, and which to ignore? How reliable is the average recommendation? How can you reliably make good recommendations to others?

Off the top of my head, there are obvious heuristics we can use:

  • Prior experience of the recommender’s recommendations. Have you been to restaurants with this person before? Did you like the last movie she recommended?
  • The recommender’s knowledge of the subject matter. Is he an expert? Are they at least an enthusiastic amateur? Are you confident they know a lot about this?
  • The recommender’s knowledge of the recommendee’s tastes. Does this person know you? Do you have confidence in her model of your preferences? Does he buy you good novels at Christmas?
  • Consensus amongst more than one recommender. Have you heard from multiple people that this restaurant is good? Have they each been consistent in their reasons for recommending it, or have the different reasons been intriguing and appealing?

These each seem uncontroversially true: if these conditions are met, it seems more likely that you’ll get a good recommendation. But it’s not at all obvious to me that you’ll reliably (i.e. >50% of the time) get a good one.

For one thing, the facts can change out from under the recommender’s feet. In a large city like London, you’re unlikely to revisit the same restaurant more than a few times a month (unless it’s reliably good and local). Staff turn over, the great old chef moves to her new place, your friend goes on a busy night, has horrible service, and it’s game over.

For another, it’s only semi-plausible that good taste clusters. If the recommender’s taste in novels is good, that doesn’t on the face of it suggest that his taste in restaurants – or art, or music, or whatever else – will necessarily be good too. Some people seem blessed with good taste across the board, but that’s far from universally true.

For a third – and this is my central point here – contexts which involve taste exhibit huge interpersonal variation no matter how persuasive the a priori justification happens to be.

So what are some things we can do to ensure we receive better recommendations, and can filter out the bad ones that slip through?

  • Surround yourself with people with good taste. This seems like an easy one, but it’s something I think not enough people act on meaningfully. It’s worth selecting good taste into your friendship group, not just because the quality of the recommendations you receive will increase, but because you’ll develop a better appreciation for what sorts of people make good recommendations, which of course generalises.
  • Cultivate better taste yourself; learn more. Another easy one, too easily forgotten. Do you reflect on your aesthetic experiences, noting what you enjoyed and what you didn’t? Do you move outside your comfort zone frequently, and take the hits (so your recommendees don’t have to)? Do you make an active and regular effort to learn more?1
  • Select for order-of-magnitude differences. Aim to find recommenders with at least an order of magnitude more experience than you, and tailor your recommendations to people with at least an order of magnitude less. There’s enough noise that the next marginal percent of exposure matters much less. I wouldn’t, for instance, trust the judgement of somebody who had been to strictly one more opera than me. (This perhaps isn’t the case if I’ve never been to an opera.) Another reason why a little learning is a dangerous thing.
  • Go wide, then deep, then wide again. A good way to think about taste is as effective pattern-matching. For this you first need a broad range of knowledge against which to anchor novel experiences, and then enough depth of understanding to discriminate between the great and the merely good. But it’s important to back out of the rabbit hole and dig yourself another one. Eat fifteen different cuisines, then pick a few and learn the regional variances within them, then eat fifteen more.2
  • Consider the incentives. Tyler Cowen’s famous piece on restaurant recommendations makes this point well: if a restaurant is full of good-looking people, it will attract more custom regardless of the quality of the food, which reduces the restaurant’s incentive to care about the food. (The restaurant, in effect, stops competing on food quality and so stops caring about it.) These sorts of incentives are everywhere, and it’s both fun and useful to be a little cynical: consider how they might affect your experience, and the recommendations you receive and make on the basis of it.

Two final points to consider. Firstly, perhaps try to elicit and make anti-recommendations rather than positive ones. It can often be more helpful to know which places to avoid than where to go. This seems a little counterintuitive, since we’re optimising for the positive case – i.e. the case in which we do in fact go to the restaurant – but it works because an anti-recommendation still conveys real information while ‘freeing up’ the higher end of the recommendation spectrum to float more independently. Similar considerations apply to groups of positive recommendations (“eat in Shoreditch”, “read novels by feminist authors of the 1920s”). You can then use your own good taste to narrow things down further.

Finally, and most importantly, try to keep an open mind, and give others as many opportunities to be open-minded as possible. (If that means hiding certain things from your recommendees, so be it.) This can be very high-leverage, because the best sort of recommendation (at least in an information-theoretic sense) is the one somebody is unlikely to receive from anybody else. For instance, many people miss out on amazing food because they dislike the idea of offal, while being perfectly happy with a chicken liver pâté. It’s not that they won’t like offal; it’s that they’re unlikely to follow a recommendation that mentions it, and so people are unlikely to make such recommendations in the first place. Sometimes it takes a bit of energy to get past the inertial resistance.

My mother hates the idea of lardo, but couldn’t stop eating the lardo-fried rice at Smoking Goat. I may have forgotten to tell her what it was.

  1. There are some interesting questions about the dynamics of taste. Tastes appear to ossify as you get older, which is a shame since your knowledge accumulates (generally) monotonically. I need to think about this more. 

  2. This approach also helps counter Gell-Mann amnesia, because you interlace the development of expertise with novelty and force yourself to consider whether and in which ways the experiences cross-cut. 

9:31am. January 3, 2022.