This is the blog of Jamie Rumbelow: a software engineer, writer and philosophy graduate who lives in London.
The Safari Web Extension mechanism is an important step forward for iOS apps – and a significant part of my current project, on which more soon – but the documentation is a little fragmented and the tooling isn’t nearly solid enough just yet. Developing these new extensions is challenging.
A small contribution to changing that:
By default, extension files that aren’t explicitly whitelisted in the `manifest.json` file are inaccessible from the browser. One common use case of the `content.js` script is to inject scripts into the execution environment of the active web page. However, no such injection is possible unless the loaded file is whitelisted.
The `manifest.json` spec defines a `web_accessible_resources` parameter, which allows extensions to whitelist resources for access from the browser.
Thus, the following whitelist:
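Something along these lines – the `assets/` directory and filename here are illustrative, not from any particular project:

```json
{
  "web_accessible_resources": [
    "assets/injected.js"
  ]
}
```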
Allows you to generate the resource’s URL with the following code:
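A sketch of that call from a content script (again, the resource path is illustrative):

```javascript
// content.js – runs in the extension's execution context,
// where browser.runtime is defined.
const url = browser.runtime.getURL("assets/injected.js");

// Inject the file into the page's own execution environment.
const script = document.createElement("script");
script.src = url;
(document.head || document.documentElement).appendChild(script);
```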
Which gives you the following sort of URL:
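In Safari, the generated URL uses the `safari-web-extension://` scheme, with a generated UUID as the host (shown here as a placeholder):

```
safari-web-extension://<generated-uuid>/assets/injected.js
```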
The ID directly after the protocol in that URL is generated on a per-browser-instance basis, so you can’t guess it ahead of time; you have to call `getURL` to generate it.
There are two important things that the API documentation doesn’t specify, which took me some time to figure out:
- `browser.runtime.getURL` is only defined inside the extension’s execution contexts. So the `content.js` and `background.js` files are fine, but the webpage itself is not. If you want to use it in the webpage context, you’ll need to generate the URL in the extension somewhere and communicate it to the page (e.g. via `window.postMessage`).
- Any values used in the `web_accessible_resources` parameter must be nested under a subdirectory. If you try to access a top-level file (such as `getURL("foo.png")`), the URL will generate fine, but the file itself won’t be loadable; the browser will simply report it as inaccessible.
Hopefully this saves somebody else some time.
Blockchain ‘wallets’ are generally just pairs of public and private keys with some UI wrapped around them.1 We take the private key, and use it to derive the public key, which we then use to derive the wallet’s address.
What’s important is that the process of derivation is very difficult to reverse, in the same way that a hashing function is difficult to reverse: the chance of you guessing the private key correctly at random is about the same as selecting one atom from all the atoms in the universe – and there’s no better way than guessing at random.2 We can therefore use the wallet address publicly, being able to prove mathematically that we own it, without ever leaking information about the private key we used to generate it.
This works great, until you need more than one wallet. You might be concerned about privacy, or you might want to keep certain types of transactions separated for tax or other organisational reasons. If you have more than one wallet, you need to manage more than one set of private keys, back each key up separately, store each key separately, restore each key separately, etc. This presents a user experience problem: it is inconvenient, and clunky, and pushes a lot of the infosec responsibility onto the user. That might be acceptable for a bunch of nerds or anarcho-libertarians, but isn’t going to cut it for the median user.
The agreed-upon solution to these UX problems is Hierarchical Deterministic (HD) wallets, proposed in the Bitcoin BIP-32/44 standards and used by most other chains. This post considers this standard, how we’re not meeting it, and why it matters.
The plan, in three sections:
- A short overview of what HD wallets are. Feel free to skip over this if you’re familiar with the spec already.
- A discussion of how common wallets are not meeting this standard
- A discussion of why that matters, and what we could do about it.
Hierarchical Deterministic (HD) wallets take the basic derivation mechanism and encode structure into it. We take a master password – a single thing for the user to remember, to back up, etc. – and combine it with a path, a string following an a priori agreed-upon schema that allows us to generate multiple private keys from the same master password.
But it needn’t actually have much structure at all. You could simply take a master password and append 1, 2, 3, and so on, to generate different wallet addresses. This strategy would generate perfectly usable wallets with no obvious link between them. And since the generation follows the same general process as it does in the single-key case, it produces hashed values that are similarly difficult to reverse.
We therefore only really need two pieces of information to calculate our wallet address:
- Our master password
- Some sort of seed
The master password is the user’s responsibility; it’s her input, her secret. What seed should we use?
One option is to let the user specify whatever sort of seed she wishes. But this doesn’t really solve our problem: instead of multiple private keys, we now have to deal with a single password plus multiple paths. We’ve just given ourselves more passwords to remember.
Another is to do what I suggested above: append an incrementing integer to the end of it to generate different wallets. This is equivalent to giving ourselves more passwords, but at least there’s some rationale to it: our first wallet has a 1 at the end, our second wallet a 2, etc. It gives us some psychological safety: it means that our wallet is recoverable (assuming we can remember which number we used to generate it, or assuming we don’t mind iterating through a few guesses). This approach is fine, as far as it goes, but this is crypto, so, given the opportunity, we should make it more complicated.
A third approach is to develop a common standard for generating our seeds with more variables than just an incrementing number. This way, we can describe a tree structure independent of its values, embedding multiple values with which we might want to generate differing wallets. The benefit to this approach is that we can encode information about the purpose of the wallet into the seed itself, and then recover it later using our knowledge of those purposes without having to remember many arbitrary numbers. The standard gives us the template, and the purposes give us the values of the variables; all we have to do is fill them in. The other benefit to using a common standard is that wallet software can implement the standards too, so you don’t need to generate the wallets off-site somewhere.
This standard is called BIP-44 (it was originally a Bitcoin standard), and it specifies exactly this sort of predictable tree structure that we’ve been discussing. The goal is to minimise user input and maximise the number of wallets that can be generated from a single master password.
The standard calls the seed a derivation path, since it’s a path in a tree that we append to a master password and use the resulting string to derive a public address. The standard gives derivation paths the following structure:
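Per the BIP-44 spec, the path has five levels, with an apostrophe marking ‘hardened’ derivation (the standard’s own names for the last three levels are `coin_type`, `change` and `address_index`):

```
m / purpose' / coin' / account' / change / index
```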
And here’s the trick: most of these values are knowable by the wallet software, based on what sort of wallet you’re using:
- `purpose` is always `44'`.3 They gave it a value to allow them to upgrade the standard if they wanted to.
- `coin` varies depending on the crypto network. For instance, `coin = 60'` is Ethereum mainnet, and `coin = 966'` is Polygon.
- `account` gives the wallet a degree of freedom to support multiple user accounts (cf. the `/Users/username` directory on your OS).
- `change` will generally be `0`; it refers to whether the address should be used externally, or internally by the wallet for Bitcoin change-address purposes. I’ve read somewhere that Ethereans sometimes use it, though for what I’m not sure.
The only non-guessable input value is `index`, which gives the user a degree of freedom to generate multiple wallets under the same tree. This parameter is why the user can generate many wallets from a single password: she can keep incrementing `index` to generate more! It’s also exactly the same as my much simpler idea discussed previously.
These parameters then get put into the structure, like so:
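For example, the first external Ethereum-mainnet wallet under account 0 comes out as:

```
m/44'/60'/0'/0/0
```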
The structure then gets combined with the master password (or, more precisely, with a key generated from the master password), and users (or wallets) can vary `index` to generate various wallet addresses.
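As a minimal sketch of the assembly step – string formatting only; the actual key derivation from the path is far more involved:

```javascript
// Build a BIP-44 derivation path from its parameters.
// Apostrophes mark 'hardened' derivation (see footnote 3).
function derivationPath({ purpose = 44, coin = 60, account = 0, change = 0, index = 0 } = {}) {
  return `m/${purpose}'/${coin}'/${account}'/${change}/${index}`;
}

const first = derivationPath();                // Ethereum mainnet, index 0
const second = derivationPath({ index: 1 });   // same password, next wallet
const polygon = derivationPath({ coin: 966 }); // Polygon's coin value
```

Varying only `index` reproduces the ‘many wallets, one password’ behaviour described above.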
Existing UIs and a Subtle Incompatibility
This isn’t a huge, bombshell-dropped discovery, I’ll admit, but I’ve noticed that most wallets with support for both HD wallets and network switching don’t actually implement BIP-44 correctly – or, at least, there is a tension between the model used for network switching and the model used for wallet generation.
Generally, what happens is:
- Users add a master password (or its equivalent in the form of a mnemonic phrase) from which the wallet derives a single keypair
- As far as I can make out, the ‘default wallet’ generated through this mechanism still uses the HD standard; it just relies implicitly upon the `m/44'/60'/0'/0/0` derivation path (i.e. “give me external index 0 at account 0 for the Ethereum chain”).
- When the user switches between compatible chains – from Mainnet to Arbitrum, for instance – the wallet software uses the same wallet address and private key to sign new transactions. It just switches the RPC endpoint it uses to make the request.
If wallets were to follow the standard correctly, they would vary the `coin` value when switching networks, generating different wallet addresses depending on the network in use. In other words, according to BIP-44 at least, there’s no such thing as a ‘cross-network address’ – and existing wallets ignore this subtle fact entirely.
I’ve been looking at how various different wallets handle this, and they all seem to do the same thing:
- Metamask’s network switcher is entirely independent of the wallet list, allowing the user to switch networks on the current wallet, even if that wallet was generated through an HD derivation path.
- MyEtherWallet does the same thing, switching the network URL used for chain interactions and not (as far as I can see) adjusting the corresponding wallets.
- Similarly, there is nothing in the WalletConnect spec preventing this behaviour, meaning that any HD-compatible wallet software using the protocol facilitates wallet-independent network switching
The problem is not so much that nobody’s trying to follow the spec. The problem is that the spec is ambiguous with respect to the UI in which it’s being implemented. The community therefore has implicitly converged on this non-standard behaviour because of the ostensible UI benefits. This has created an implicit standard incompatible with the original BIP-32/44 proposals.
It gets even more confusing when you notice that there is a third, Ethereum-specific standard, EIP-601, designed to modify the BIP-44 standard for Ethereum use cases. From a brief google, I can’t see any mentions of EIP-601 that aren’t merely links to the spec itself. But this ambiguity – what should happen to the valid wallet list when the user switches networks? – isn’t resolved by EIP-601 either.
This ambiguity arises because the BIP-32/44 standards were built around the assumption that the different networks a user might switch between were mutually incompatible. They didn’t foresee the rise of EVM-compatible layer 2s, or the range of dapps built to run on several of them concurrently, and therefore the capacity for the user to switch between them easily, in-app.
Why this matters, and what to do
Of course, this doesn’t seem like a critical problem – there are bigger problems we could be tackling, for sure. Indeed, there’s even something comforting about going from Polygon to Ethereum Mainnet and taking your address with you. It’s certainly convenient. But this isn’t what the BIP-32/44 specs say, and I think there actually are good reasons to obey them more precisely:
It makes it possible to upgrade the spec in the future. The standard can evolve safely, and those implementing it correctly can evolve with it, without having to hack in workarounds for backwards compatibility or keep track of previous fringe behaviours.
It makes interoperability with other wallets easier. Wallet onboarding and offboarding isn’t a light matter; the more activation energy required to move to one wallet from another, or from no wallet at all, the more intimidating crypto as a whole will become to the marginal user. Problems at the tail-end often get publicised more than problems at the mean.
Not doing so undermines one of the main reasons to use HD wallets in the first place: HD wallets allow you to keep public references to different addresses separated, increasing privacy. A wallet address that comes with you cross-network just makes your transactions that much easier to track.
Fortunately, I don’t believe that the UI concessions made by existing wallet implementations need to be locked in. There are some steps that wallets could take today, such as triggering a confirmation modal when changing networks, that would enable users to opt out of the spec. Many users don’t knowingly use HD wallets at all; in these cases, the default behaviour could just clear the wallet list and regenerate it using the standard specs on network change.
Or, alternatively, we could develop a new, more parsimonious standard to capture the semantics of cross-chain wallets, compatible with the current UI approach. One simple method would be to amend the current spec such that `network = 0` means ‘no specific network’, allowing cross-chain wallets to be represented in the existing spec. If the network changes while a user is connected with a wallet known to be generated with `network = 0`, the wallet persists.
Either way, this is exactly the sort of subtle incompatibility that could prove an increasing nuisance, compounded by the ongoing growth in usage of layer 2s. Our standards for network switching were designed at a time when the only networks we would switch between were testnets. Today, the UI implications of network switching are a lot more important. And, today, that UI is incompatible with one of the most useful standards we have for managing multiple wallets.
Multiple wallets, multiple networks, good UX. We don’t need to pick only two.
The name wallet is therefore a misnomer, since the wallet itself doesn’t store anything; it’s much closer to a username and password for online banking than to the vault itself. ↩
Ethereum private keys are 256 bits. Since a bit has two possible states, guessing a 256-bit sequence correctly at random has a chance of `1/2^256`. There are ~10^78 atoms in the observable universe, which is ~2^260. If you know the Ethereum address of the wallet you’re trying to get into, it’s slightly easier, since wallet addresses are only 160 bits long, but it’s still a very big number. ↩
The apostrophe in the path tells the key generation algorithm to use the ‘hardened’ form of the derivation, which puts extra constraints on the derivation such that derived public keys can’t be proven to be derived from a given parent public key using only public information. The details here are a little tricky, and outside the scope of this post. ↩
Yesterday, I wrote about enthusiastic amateurs, a model for how I think about the trade-off between expertise and being a generalist. The median person is unlikely to become an expert, and pursuing expertise can be very costly, so perhaps there is a better path for the median person to take. This path is, roughly, to explore more, learn broadly, and rely on the interconnections between ideas to add value.
If that discussion is at all salient for you, a natural next question is how one ought best cultivate the characteristics of an enthusiastic amateur.
With the very big caveat that I’m still figuring this out myself, here are a few ways that seem to work well for expanding my interests and developing the sorts of knowledge that are additive rather than distracting:
- Optimise for breadth. This might seem like trivial advice given the definition of an enthusiastic amateur, but it’s amazing how much more breadth can be gained by asking, at a higher-than-normal rate, “does this decision expose me in a meaningful way to more interesting stuff?” Follow blogs on subjects you know nothing about. Listen to lots of podcasts from lots of experts. Get used to clicking around Wikipedia aimlessly.
- Avoid optimising for depth. I think optimising for depth is the default pathway, in many important ways, for lots of mostly contingent cultural reasons. If you want to be an enthusiastic amateur, you should resist the urge to optimise for depth. A lot of the stuff you do will also involve developing depth in a given field, but you shouldn’t be afraid to forgo depth in service of breadth, and then let depth develop naturally across a range of subjects, rather than by sacrificing breadth on the altar of depth.
- Cultivate enthusiastic amateur friends. Enthusiastic amateurs usually have a richer and more idiosyncratic answer to “how can you do X better?”, generally because they actually end up answering a different question: “how do I think about X differently?”. They’re also very likely to recommend books and other media sources that stray from the experts’ canon. (Interintellect is a good place to start! So is Twitter!)
- Cultivate expert friends too, but recognise their expertise might skew their answer away from breadth. Expert friends can teach you things you will never learn otherwise. They’re extremely good at nudging you away from dead-ends in their fields. It can also be a valuable way to get personalised feedback on your projects that sit in their domains. But experts are also more likely to rank you and your work against the norms and common knowledge in their field, which can lead you to develop the same sorts of blind spots that they do. It’s difficult to see the water you swim in.
- Quit more. Quit early, quit often. Discipline is overrated. Projects that languish can be discarded. You shouldn’t forget that the sunk cost fallacy is still a fallacy, even when you’re labouring under it. If you’re at all like me, you should give yourself more permission to halt, reverse, rework or otherwise abandon some interests and projects as others begin to take their place. You’ll float back to things as and when you’re in the mood.
- Use Anki and take notes. Breadth means you need to build more branches on the knowledge tree. You’ve got fewer coat-hooks, as it were, upon which to hang new facts. Popular science television, for instance, even the not-so-good stuff, is usually packed full of non-obvious observations and the distilled wisdom of experts. Even more so for the really good stuff; even more so still for books. People consume this content passively, and so don’t retain it. The trick is to consume actively.
- Talk more. Connections between ideas often rely on pragmatics – how things are said, and in what context – rather than the actual semantic content of the connection. So talking about your latest subject of interest with smart people in a variety of different contexts can help you see beyond the standard narrative and see links that others might not.
- Write more. This is the sort of trite advice you hear often, but it can’t be repeated enough: so much of the process of writing just is thinking, and forcing yourself to write helps you retain what you’re learning and distill an argument or theory or model to its essential components.
- Think about thinking. How can you better distill an idea to its essence? How can you better think about the content you’re consuming, and what you do with it once it’s consumed? How can you curate your inputs in a way that leans towards high-quality breadth?
None of these methods are foolproof, but they point towards an enjoyable and rich intellectual lifestyle that doesn’t involve the sort of high-risk turmoil attached to pursuing expertise.
Being an enthusiastic amateur is like giving up for smart people.
In this post, I’d like to try to raise the relative status of the casual polymath, at least insofar as it motivates an individual to decide what she should work on. It seems likely to me that pursuing expertise is overrepresented in career-advice-giving contexts, and that we should try to reframe not being an expert in a more positive light. We fetishise a very specific sort of expertise – A Beautiful Mind, 100-hours-a-week, obsessional expertise – as the gold standard for living meaningful intellectual lives. I’d like to suggest that there’s an alternative approach.
So here’s my reframing: more of us should try to be enthusiastic amateurs.
Firstly, as should be clear to anybody who interacts with me, I am not an expert at anything. And it’s quite possible that I’m telling this story to coddle my psyche; to bolster my self-confidence as I cling tightly to µ; to sit more readily on my averagely-comfortable chair, drinking Nescafé, as I type on my mid-range laptop with my average-sized hands. I will never have the temperament or talent to be world-class at anything, and I’d still like to be able to sleep at night.
However, I think the life of an enthusiastic amateur is not only a good one, but that cultivating it is also often the rational choice for somebody to make.
At the very least, the expertise norm is overrepresented, and there’s value to be gained by exploring an alternative.
Who are enthusiastic amateurs? Enthusiastic amateurs are people that work hard to see the world through as many lenses as possible. They care less about being great, and more about being good enough. They aspire to be polymaths, but recognise that the definition is wanting.1
How do they compare to experts? An expert is a hedgehog; an enthusiastic amateur is a fox. An expert relies on precision; an enthusiastic amateur relies on scope. An expert is tenacious; an enthusiastic amateur is mercurial. An expert toils; an enthusiastic amateur plays.
There are obvious trade-offs to chasing expertise, and the world obviously needs people willing to make those trade-offs (an enthusiastic amateur isn’t going to engineer a rocket good enough to get humans safely to Mars, although one might build a company capable of doing so). The trouble is, you don’t really know if you’ve got the capacity to be an expert at anything until you already are one. Mozart is the exception, not the rule. Unless you feel the gods have conspired to put you where you are, expertise is the sort of thing you need to work very, very hard to achieve.
From a position of uncertainty relative to one’s own abilities, then, deciding to pursue excellence in one thing seems like a risky strategy. You could chase expertise, drill, rinse, repeat. Develop slowly a garrison of discipline and knowledge and finely-honed tools for solving the more abstruse problems in your field. Learn deeply, and feel engaged in some sort of higher purpose; luxuriate in our collective teleological hangover.
That’s the success path. There’s a failure path too. You chase expertise, drill, rinse, repeat. You spend early mornings and late nights playing your scales. You run up against your natural limits, and you don’t push past them. You continue to push, because you’re told there are diminishing returns and you need to keep working. But you never actually get past that point. You learn to work around your limitations in various ways in order to make this Sisyphean effort seem worthwhile, but you’re really just fooling yourself into thinking you’re getting more competent. Or, even if you are improving, you might never close the gap between you and your nearest competitor. There are a lot of dedicated, hard-working, decidedly non-expert people. There are far fewer who will meaningfully change their field.
You’re probably not going to become an expert. And if you’re probably not going to be an expert in the one or two things you care enough about in order to try, you’re more likely than not setting yourself up to fail. Failing can be a pretty unpleasant experience, and fighting through failure is so often a pyrrhic victory. Being irredeemably bad at something isn’t fun. This is an important psychological cost to factor in before you dedicate your life to something. (Expected value theory might be useful here: what’s the cost of failure multiplied by the probability that failure will happen? If you’re honest with yourself, it’s not likely that that number will be a ringing endorsement of pursuing expertise.)
As well as the psychological cost incurred when you wrap up your identity in a métier and then fail to live up to your own expectations, the pursuit of expertise has high opportunity costs, too: the costs incurred by not doing the other things that you could be doing while you pursue expertise. What you enjoy doing often changes, so if you spend the time becoming an expert, slogging over the plateau, it’s likely that you’ll miss out on a bunch of possible fun that you could have were your focus more elastic.
Another cost: I’m not convinced that there are always diminishing X-returns for X-ing, but there is a subset of Xs for which there are certainly diminishing social returns. You don’t need to be a Master of Wine to impress most dining companions: even if they are Masters of Wine, most other people are so far away from even passably knowledgeable about wine that a middling level of understanding can yield the majority of the benefits – the signalling power – that you can get from knowing about wine. In other words, you don’t need to be an expert to be impressive, even in the eyes of other experts.
If there are less obvious costs to being an expert, there are also less obvious benefits to being an enthusiastic amateur. It’s easy to underrate the benefits to being competent at a lot of things, especially when they’re compared to being excellent at one thing.
The world can often seem set up to reward experts more and reward enthusiastic amateurs less: academia seems to be a 1000-year experiment to institutionalise this model. But such entrenched reward systems often offer the opportunity for arbitrage. Being good enough at lots of things means that you can often see connections between subjects that experts, siloed into their conceptual schemes, can’t.2 Philip Tetlock argues that being a fox makes you, on average, a better predictor of the future, for much the same reasons. Academia is famously siloed, but some of the best papers I’ve read are clever precisely because they apply techniques from one field to the problems of another. There is such a thing as gestalt knowledge, and I’d wager that enthusiastic amateurs are better at finding it than experts.
On the other hand, there’s definitely some class of problems which require deep expertise to see and understand and solve. Some problems need smart people to sit and think very hard about for a long time. But I think we generally over-index on this sort of expertise, both institutionally (via the peer-review process) and in a more broad sense, culturally.
Being good enough has another interesting corollary: being good enough at a range of things creates interesting intersections at which you can be an expert. I might not be an expert programmer, or the world’s best philosopher, or the world’s foremost authority on wine or US politics or any of my other interests. But I’m probably in the top 1% of the general population at the intersection of those things, simply by virtue of the rarity of that intersection and my enthusiasm in pursuing them. By cultivating wider interests, and by getting good enough at a broad range of things, you can carve out interesting niches which give you both the ability to be world-leading in that niche and also emerge naturally from the explorations you make, rather than because of your dogged pursuit of decisions made a priori. In other words: being an enthusiastic amateur doesn’t mean you need to give up your edge. (As long as there are only a few enthusiastic amateurs, being an enthusiastic amateur might itself be an edge.) And, nowadays, niches can pay.
One of my smartest friends pointed out that the pursuit of enthusiastic amateurness is a very Theory of Action-driven thing. That is, it suggests answers to the question “what should I do next?” rather than “what should I do in order to achieve XYZ?”. He’s right, of course, but a priorly-formed want to achieve XYZ is the hallmark of a wannabe-expert, and therefore not per se the sort of thing that enthusiastic amateurs will be concerned with. The sort of long-term goals that Theories of Change point toward are often, at least at the subject-level3, underspecified or weighed inappropriately in the more general calculation about how one should live one’s life.
The same friend also pointed out that advice is often written for the wrong people, that being an enthusiastic amateur might also incur costs. One potential cost here: it might make it more challenging to signal your commitment to a group, and therefore make it more difficult to be embedded in a community of peers. He suggested for this reason that to the extent that one can choose their community, it’s better to be more specialised (and therefore expertise is rewarded proportionally). I don’t disagree. It might be. But presumably groups of enthusiastic amateurs – LessWrong? Interintellect? – interested in how best they can be enthusiastic amateurs, exhibit the same sorts of dynamics. If you’re looking to signal your commitment to a group, “enthusiastic amateurs” might not be a bad group to join.
Enthusiastic amateurs aren’t sloppy, or dismissive of expertise. The point is not to be bad at lots of things. It’s to recognise that expertise isn’t the end of the story, and that being good enough at a lot of stuff is often so much more rewarding than being really good at one thing. For many people, expertise is just out of acceptable reach. Whatever you’re good at, there is likely a Chinese toddler doing it better than you could ever hope to. Some people are born with the requisite interest and determination and tenacity to pursue excellence at one big thing. Many, many people are not. I’m pretty sure that a lot of what goes into becoming an expert in something and sustaining that expertise is a slog, and that a lot of people don’t enjoy it as much as they think they should, and that their response to being uninspired is to accept being mediocre, and that this shouldn’t be where careers advice leads. As a result, I don’t think that traditional expertise-oriented career advice is especially good advice for the median person.
Being ‘good enough at X’ for many Xs is completely attainable and, I think, can often set you up to be rewarded socially and commercially. There are lots of people who should be emboldened by the fact that expertise is one way amongst several to slice the pie. You can have a rich and rewarding intellectual life without demanding of yourself that you know what you’re destined to do from an early age, or even that you’re destined to do anything; you can indulge your broader interests without them immediately being written off as procrastination. It’s also playful, in an earnest sense. For many, the life of an enthusiastic amateur is, I really, truly believe, a lot more fun.
I don’t, for instance, think that the piano-benefits to becoming an expert pianist diminish with more practice. As far as I can tell, being able to play a complicated piece marginally better does unlock new modes of expression and new value in the piece, and, moreover, that seems to happen proportionally to the amount of work you put in. Once you’re an expert, small variations can produce outsized results. This seems especially true in competitive zero-sum games that get repeated over time, like two tennis players facing off regularly. Relative to me, Federer’s marginal training session likely won’t change the outcome of our match. Relative to Nadal, it seems, that extra practice might make all the difference. ↩
For some examples, see David Epstein’s Range. ↩
I’m not quite sure about the meta-level. It might still make sense to pursue expertise at goal formation, or productivity, or something. Enthusiastic amateurs tend to be quite productive, relative to the mean. But maybe that’s because a lot of what they do is amongst the lower-hanging fruit, rather than because they’re productivity experts or aiming to be such. ↩
I make a lot of recommendations for restaurants. I also receive a fair few.
Unless the facts change out from under my feet – one day I’ll tell you a story about The Marksman – I think my recommendations are generally pretty good. But I would, wouldn’t I? Unless I don’t like you, I’m not going to recommend things I don’t think are good recommendations.
It’s very important to be careful when recommending. If you eat out often, say ~3 times a week, you can expect to have ~9,300 meals over a 60-year adulthood of eating. That isn’t many meals! I read roughly a book per week. That’s ~3,120 books in the same adulthood. That isn’t many books! So each meal and each book has to count. And many people eat out far less often and read much less. Centrally: you should respect the time and money that people will spend based on your recommendations.
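For what it’s worth, the arithmetic above is easy to check; a minimal sketch, assuming 52 weeks a year and the 60-year adulthood used in the text (the function name is my own):

```python
# Back-of-the-envelope lifetime totals for the figures quoted above.
WEEKS_PER_YEAR = 52
ADULT_YEARS = 60

def lifetime_total(per_week: float) -> int:
    """Occurrences over a 60-year adulthood at a given weekly rate."""
    return round(per_week * WEEKS_PER_YEAR * ADULT_YEARS)

meals = lifetime_total(3)  # eating out ~3 times a week
books = lifetime_total(1)  # reading ~1 book a week

print(meals, books)  # 9360 3120 -- hence the ~9,300 and ~3,120 above
```

Drop to once a week for meals out and the lifetime total falls to 3,120 – which is part of why each recommendation has to count.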
It’s also easier to recommend things in the indirect-objectless sense, as I do in the restaurant list above. But recommendations are often recommendations to somebody, in some context, for some purpose.
In these cases, how should we tell which recommendations to listen to, and which to ignore? How reliable is the average recommendation? How can you reliably make good recommendations to others?
Off the top of my head, there are obvious heuristics we can use:
- Prior experience of the recommender’s recommendations. Have you been to restaurants with this person before? Did you like the last movie she recommended?
- The recommender’s knowledge of the subject matter. Is he an expert? Are they at least an enthusiastic amateur? Are you confident they know a lot about this?
- The recommender’s knowledge of the recommendee’s tastes. Does this person know you? Do you have confidence in her model of your preferences? Does he buy you good novels at Christmas?
- Consensus amongst more than one recommender. Have you heard from multiple people that this restaurant is good? Have each of them been consistent in their reasons for recommending it, or have the different reasons been intriguing and appealing?
These each seem uncontroversially true: if these conditions are met, it seems more likely that you’ll get a good recommendation. But it’s not at all obvious to me that you’ll reliably (i.e. >50% of the time) get a good one.
For one thing, the facts can change out from under the recommender’s feet. In a large city like London, you’re not likely to revisit the same restaurant more than a few times a month (unless it’s proven reliable and local). Staff turn over, the great old chef moves to her new place, your friend goes on a busy night, has horrible service, and it’s game over.
For another, it’s only semi-plausible that good taste clusters. If the recommender’s taste in novels is good, that doesn’t on the face of it suggest his taste in restaurants will be good too; or his taste in art, or music, or whatever else. Some people seem blessed with good taste across the board, but that’s far from universally true.
For a third – and this is my central point here – contexts which involve taste exhibit huge interpersonal variation no matter how persuasive the a priori justification happens to be.
So what are some things we can do to ensure we receive better recommendations, and can filter out the bad ones that slip through?
- Surround yourself with people who have good taste. This seems like an easy one, but not enough people act on it meaningfully. It’s worth selecting for good taste in your friendship group, not just because the quality of the recommendations you receive will increase, but because you’ll develop a better appreciation for which sorts of people make good recommendations, which of course generalises.
- Cultivate better taste yourself; learn more. Another easy one, too easily forgotten. Do you reflect on your aesthetic experiences, note what you enjoyed and what you didn’t? Do you move outside of your comfort zone frequently, and take the hits (so your recommendees don’t have to)? Do you make an active and regular effort to learn more?1
- Select for order-of-magnitude differences. You should aim to find recommenders with at least an order of magnitude more experience than you, and try to tailor your recommendations to people with at least an order of magnitude less. There’s enough noise that the next marginal percentage point of exposure matters much less. I wouldn’t, for instance, trust the judgement of somebody who had been to strictly one more opera than me. (This perhaps isn’t the case if I’ve never been to an opera.) Another reason why a little learning is a dangerous thing.
- Go wide then deep then wide again. A good way to think about taste is as effective pattern-matching. For this you first need a broad range of knowledge to anchor novel experiences, and then enough depth of understanding to discriminate between the great and the merely good. But it’s important to back out of the rabbit hole and dig yourself another one. Eat fifteen different cuisines, then pick a few and learn the regional variances within them, then eat fifteen more.2
- Consider the incentives. Tyler Cowen’s famous piece on restaurant recommendations makes this point well. If a restaurant is full of good-looking people, it will attract more people, holding fixed the quality of the food, which reduces the incentive for the restaurant to care about the food as much. (The restaurant, in effect, stops competing on quality of food and thus stops caring about it.) These sorts of incentives are everywhere, and it’s both fun and useful to be a little cynical and consider how they might affect your experience, and the recommendations you receive and make on the basis of it.
Two final points to consider. Firstly, perhaps try to elicit and make anti-recommendations rather than positive recommendations. It can often be more helpful to know where to avoid than where to go. This seems a little counterintuitive, since we’re optimising for the positive case – i.e. the case in which we in fact do go to the restaurant – but an anti-recommendation still conveys real information, while ‘freeing up’ the higher end of the recommendation spectrum to float more independently. Similar considerations apply to broad categories of positive recommendation (“eat in Shoreditch”, “read novels by feminist authors of the 1920s”). You can then use your own good taste to narrow things down further.
Finally, and most importantly, try to keep an open mind, and give others as many opportunities to be open-minded as possible. (If that means hiding certain things from your recommendees, so be it.) This can be very high-leverage, because the best sort of recommendation (at least in an information-theoretic sense) is the recommendation somebody is unlikely to receive from anybody else. For instance, many people miss out on amazing food because they dislike the idea of offal, while at the same time being fine with a chicken liver pâté. It’s not that they won’t like offal, it’s that they’re unlikely to follow a recommendation that mentions it, and therefore people are unlikely to make these recommendations in the first place. Sometimes it takes a bit of energy to get past the inertial resistance.
My mother hates the idea of lardo, but couldn’t stop eating the lardo-fried rice at Smoking Goat. I may have forgotten to tell her what it was.
There are some interesting questions about the dynamics of taste. Tastes appear to ossify as you get older, which is a shame since your knowledge accumulates (generally) monotonically. I need to think about this more. ↩
This approach also helps counter Gell-Mann amnesia, because you interlace the development of expertise with novelty and force yourself to consider whether and in which ways the experiences cross-cut. ↩
I’m very prone to greener-grass thinking.
Sometimes it’s important to acknowledge that the grass is greener, because, well, it is. Sometimes you’re not in the best possible timeline, and bringing it to your own attention is the first step toward changing it.
But it’s often unhelpful, too, and can trap you into a cycle of distraction and dissatisfaction. You change something, because the grass over there is greener, and you reinforce the preferences you have for novelty. The less you develop the ability to stick with something, focus, persevere, the more difficult it becomes the next time you must.
The more vulnerable you become to your own whims.
The more often you throw away something valuable, because you’re suffering under some sort of novelty bias and what appears to be objectively better is simply fresher.
So it can be useful to reframe how you view your current situation. When I regret some decision because the alternative looks easier, or when I suspect I have made a mistake, I can find solace in the knowledge that:
- I made those tradeoffs explicitly and knowingly (however easy that is to forget);
- and the tradeoffs are generally worth it.
The beginning of anything requires a first step. A new job, a new purchase, a new skill or hobby, a new relationship. If you’re a thoughtful and goal-oriented person, you will consider whether this is the right step for you. Something that appears obvious at the time might be a mistake later, and you know this, so you take your time and weigh up your options. Consider the pros and cons and decide to act.
The next day, you’re in the honeymoon phase. The cons are irrelevant. The pros are even better than you expected. You enjoy the present. You look forward to the future with anticipation, with giddiness, with glee.
Some days later, the pros and cons look evenly balanced. You’re less interested than you once were, but stasis is a powerful force, so you do nothing about it.
Some days further, the cons are now what is salient. What began as small pet peeves or niggling doubts has blossomed. The cons are so overwhelming, so frustrating, that you cannot imagine why you could ever have thought the pros might outweigh them (or, hell, even counterbalance them!)
What seemed like a good idea now looks like a set of inappropriate, naïve preferences born in the brain of somebody who knew less than you do now.
But here’s the thing. You made those tradeoffs. You considered the pros, you considered the cons.
You might have missed some of the cons initially; not everybody gets it right first time. But this also applies to the pros. Are there not things about your role, your club, your partner, your commitment, whatever it is, that are unexpectedly pleasant, as well as unexpectedly not-so?
If it appears not, then consider this: even if you were able to predict everything a priori, do you think you’d be weighing the cons and pros appropriately after the cons have become so aggravating that you felt the need to reweigh them in the first place?
People only really reflect on whether their decisions were correct when they have reason to suspect that they weren’t.
People don’t look to upend the boat when they’re enjoying the journey.
So your re-evaluation may be happening from a place of more intimate knowledge, but it’s also happening from a place of discomfort.
There are definitely times when you should re-evaluate. When you are being harmed. When you have good reason to suspect you misjudged it all initially. Or when your preferences have changed significantly since then.
But there are lots of times when you shouldn’t. When, were you to reflect in a pro-dominant phase, you’d conclude quite differently.
The grass is always greener, Jamie.
The tradeoffs are usually worth it.
For a while I’ve been collecting a list of interesting book-length projects. Since I’m never going to write the bloody things, I figured it’d be better to throw them into the light of day and see what daylight makes of them. Here is a preliminary stab at a list of books I wish existed: books that haven’t been written yet, but could be.
A collection of short books on the history and interpretation of U.S. constitutional amendments
One under-explored feature of the US constitution is its deep cultural, as well as judicial, role in modern American politics. Each amendment has its own motivations, historical context, and judicial precedent; but each amendment also serves as the starting point for contemporary arguments for or against certain policies: even, today, the policies of private companies.
I’d like to see a series of short, concise books – think roughly the length and depth of the OUP Very Short Introductions series – with each volume focussed on an amendment to the U.S. constitution. Each book could discuss the amendment’s historical context, important cases in its subsequent judicial precedent, and the moral and legal and institutional justifications for the amendment and how they have changed.
Most amendments would have their own volume, while some of the more arcane amendments might be bundled together, where appropriate. The 18th & 21st are a natural pairing; the 13th, 14th and 15th sit snugly together in terms of their shared historical context, but are perhaps each significant enough, with their own rich sets of continuing precedent and relevance, to warrant their own volumes; the 3rd, 4th and 5th might perhaps be grouped too. It might also be interesting to conclude the series with a volume on the amendments that didn’t get passed: amongst many others, the ERA, balanced budget amendments, the We the People amendment.
Seeing Like A Startup
James C. Scott’s Seeing Like A State is an excellent piece of political epistemology, not because he makes a powerful moral argument to curtail the absolute power of the state – which he does – nor because of the trenchant analysis he applies to the material and sociopolitical conditions under which the tools of statecraft are likely to be abused – which he gives – but because it grounds the analysis in a Weltanschauung, an all-encompassing frame of reference, a set of spectacles that underpin the identity of those who wear them. To see like a state is not just to see the world a certain way, to plan with a specific framework, to write with a specific dictionary, but also to be somebody.
Isn’t the same true of startup-land? Isn’t working in a startup with its techno-optimism and its studied disregard of conventional wisdom and Disruption with a Capital D a form of world-view? Weren’t we decades ahead on remote work and Agile / Lean Startup approaches to product development? Don’t startups, especially tech startups, have a distinctive set of incentives and respond to a distinctive set of internal and external cues? Isn’t this weird (physical or virtual) Bay Area we inhabit a conduit for a specific mode of thought, a Weltanschauung, a pair of spectacles?
A full-length biography of Évariste Galois
Évariste Galois died aged 20, after being shot in the stomach with a pistol. He died a gregarious yet unlikable, angry young man, but he bequeathed us a small elliptic body of mathematical work that has proven to be incredibly fertile.
The short biographies that accompany discussions of his work are useful and evocative, but focus almost exclusively on either his precociousness, or the Potemkin-romanticism of his death. His life was short but full of activity, sadness, anger, intense adolescence, mental illness and revolutionary politics.
The best biography of him so far (fr) focusses on Galois-as-mathematical-figure (‘personne’ vs ‘personnage’). I’d like to see a full-length biography of Galois-as-boy and Galois-as-man, as well as Galois-as-mathematician: something that draws out the dynamics of a Republican and Bonapartist household in restoration Paris, the stability of his mother and the bipolarity of his father (who himself committed suicide when Évariste was 17), and the friends and foes, real and imagined, who shaped this troubled young boy.
I’ve been trying to write this book for a while, but have put the project on hold. Perhaps I’ll resurrect it one day.
Uses and abuses of popular science
The effective communication of science is incredibly important. What the electorate understands and values about scientific output can translate meaningfully into policy outcomes, on the one hand, and our continued ability to discover more about the world on the other. (At its limit, it can cause deadly incentive failures when the scientific bureaucracy needs to re-engage a science-saturated public.) Simplifying without talking down is a tough job, and the very best writers do it with elegance and wit and humanity. But so much of it is reductionist, factually incorrect, statistically ignorant, sensationalist drivel.
Writing about science poorly harms us all. Being excessively confident about scientists’ predictions – “toast causes cancer!” – shifts our focus onto the wrong things, or erodes trust in the output of science when it turns out that, you know, the world might be a little more complicated. Being excessively cynical about science’s output is so often a tiresome postmodern ploy to import political solutions to not-yet-understood social issues.
I’d like to see a book on popular science and the popularisation of science: what good it can do when it’s good, what harm it can do when it’s bad, and how we can get more of the former and less of the latter. I’d also like to learn more about how science fiction fits into all this. We will never get to a stage where science is not weaponised in one direction or another – discovery is, as the physicists of the Manhattan Project discovered, the beginning of the moral story, not the end of it – but with a better understanding of how science is reported, we might be able to give people the tools to at least discount the views of the most egregious of offenders.
What could science look like?
The way that modern science is structured – the categories and classifications of physics, biology, chemistry, computer science, mathematics, philosophy, the social sciences – is reasonably arbitrary and path-dependent. A few changes in how humans organised themselves at various stages, in how projects got funded, and in which questions happened to be salient (for cultural or contingent material reasons), and we would have a very different body of knowledge, structured along different lines, today. What, for instance, would modern AI look like if the centre of gravity in computer science hadn’t drifted away from cybernetics and the HCI-focussed research tradition during the ARPA golden years, and toward applied mathematics and algorithm design? What could biology look like if our best mathematicians were more interested in biological systems than physical systems? What would Newton have done if he hadn’t spent so much time pursuing alchemy?
A good moral, economic and psychological investigation into paternalism.
I have a set of libertarian-ish (which is to say, mostly negative) aesthetic reactions to paternalism, and, in a trivial sense, ‘paternalism is bad’ seems true by definition – at least on a normative reading of ‘paternalism’. Naturally, these intuitions have come into much sharper focus throughout the pandemic. But state interventions in private lives are nothing new; in many cases they are basically uncontroversial (e.g. seat belts), and there are a whole host of moral and economic arguments in favour as well as against.
Perhaps paternalistic reasoning is our default mode of thought, and respect for individual freedom only gets bolted on in certain contexts? If you really believe that such-and-such a lifestyle is immoral, harmful materially and spiritually to whoever practises it, why wouldn’t you want to intervene? Liberalism is a position most have to contort themselves into. I’d like to see a modern book-length treatment of this subject, exploring the changing relationship between individual and society, ideally within a framework that makes sense of big data, the long death of privacy, and crypto- or techno-libertarianism.
Aesthetics in politics.
Hume never got to finish his ‘examination of morals, politics, and criticism’, but if he had, I imagine much of the project would be spent grounding political discourse in terms of human sentiments like approval and disgust. Jonathan Haidt offers a modern-day version of this story, arguing for the centrality of psychological states in understanding politics and religious discourse.
But one thing that often gets ignored, I think, is how much aesthetics play a part. People find views they dislike not just disagreeable but ugly, and often detached logical reasoning takes a backseat to matters of taste. I’d wager that a lot of opposition to virtue signalling, for instance, is simply that it seems distasteful, or uncouth, or something like that.
To what extent do we elevate matters of taste to matters of shared social importance? (There’s an interesting Twitter thread here on conservatism and aesthetic sense, which might begin to address these issues.)
The House of Uncommons: the rise and fall of excellence in politics
Why has politics lost its cultural cachet? Why do we pay our governors so little relative to other, arguably better-run, countries – and certainly less than a lot of high-status private-sector jobs? Are our politicians getting more incompetent and pandering, as they do indeed seem to be? When was the golden era of the politician? What characteristics should we try to select for? Given the unpredictability of democracy and the epistemic credentials of the average voter, how can we reshape our institutions to better encourage the selection of these characteristics?
The Aesthetics of Programming Languages
One thing that often gets lost amongst the computer science jargon and expediencies of writing functional software is that there’s an important aesthetic dimension to programming, a concern with the beauty of the code and algorithms we write. We throw around words like ‘beautiful’ when we talk about code, but we’re usually just gesturing toward some muddy intuitive notion, something like ‘clean’. There’s been little attempt to define these words more rigorously, or explore other aesthetic or aesthetic-adjacent virtues, such as simplicity, or parsimony.
It’s not merely syntactic, either. Much of what a programmer does is invent abstractions, extract out pieces of a system into reusable and more generic chunks. Some abstractions are intuitively better than others. But on what grounds? It’s not just “how widely applicable is this thing”, or “how performant is this thing”, or “how few lines of code does this thing take to implement or call”. There’s a notion of expressivity, the capacity for the abstraction to open and close the right set of logical doors, that is crucially important, and, crucially, misunderstood.
It runs deeper than just the code that actually gets written. Different language design decisions force us to think about our code in different ways, and to structure our programmes along different fault lines. Type systems force us to think about our domain before we think about the processes we apply in that domain. Pure functional languages force us to think about the flow and transformation of data. Different languages, sometimes, though by no means always, designed for different tasks, start with different mental primitives which change both how we write code now, and how the norms of the broader language ecosystem evolve.
The great irony of programming: instructing computers can be a deeply human thing. It would be fun to see a thoughtful little book exploring these questions in more detail.
Last month, Visa announced their intention to buy Plaid for $5.3 billion.
Plaid provide an abstraction around bank accounts. They offer a developer-friendly API to query one or more of a user’s accounts. This allows startups and consumer products to offer financial analysis and services in a bank-agnostic way.
On the back of this purchase, and alongside my general growing interest in fintech, I’ve been becoming more and more bullish about Visa.
But why did Visa pay so much for Plaid?
$5.3b is a lot of money. It’s somewhere between a 25x and 50x multiple of Plaid’s revenue.
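Working backwards from those multiples gives a sense of the implied revenue; a rough sketch (the 25x and 50x multiples are the estimates above, not disclosed figures):

```python
# Back out Plaid's implied annual revenue from the $5.3b price
# and the 25x-50x revenue multiples estimated above.
PRICE = 5.3e9  # acquisition price, in dollars

def implied_revenue(multiple: float) -> float:
    """Annual revenue implied by a given price-to-revenue multiple."""
    return PRICE / multiple

low = implied_revenue(50)   # the higher the multiple, the lower the revenue
high = implied_revenue(25)

print(f"${low / 1e6:.0f}M to ${high / 1e6:.0f}M")  # $106M to $212M
```

Either way, a nine-figure revenue line is nowhere near enough on its own to justify the price.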
The Plaid team are impressive, but at $5.3b, it’s got to be something strategic. As my friend Rich put it, when you’re counting in billions, it’s not an acquihire.
So if it’s not revenue, and it’s not (just) the team, what’s the strategic value in Visa’s owning Plaid?
I can think of three big reasons:
1. It solidifies Visa’s core business.
Visa is a three-sided network. It provides the infrastructure to move money between consumers, merchants, and banks.
When Visa works well, everyone benefits:
Consumers get instant access to credit, and can buy products from anywhere the card is supported.
Merchants can accept payments from anyone, and no longer need to run back-office operations responsible for credit and payments, nor handle cash.
Banks can offer credit to consumers more easily, at higher interest rates, and collect fees from merchants to provide the aforementioned credit management services – and, in doing so, reduce merchants’ exposure to credit risk.
For arranging this service, Visa charge merchants and banks a percentage of each transaction. The underlying economics of the business are excellent: profit margins of over 50%, steady revenues, a long pedigree, and revenue still growing 10% year-on-year.
Plaid can contribute to this core business, since an increase in fintech innovation is likely to increase transactions simpliciter:
Banks can offer more, and better tailored, financial products to consumers. Visa can better integrate identity and security services with payments, reducing rates of fraud. More broadly, and more importantly, fintech can, and will, bring finance to the under- and unbanked.
Thus, Visa can leverage Plaid to shore up the existing network, make being a Visa customer more attractive, and create whole new demographics of customers in developing markets.
All of this means more transactions, and more transactions means more transaction fees.
2. Plaid is a good business in its own right.
A 25x to 50x multiple on revenue is large, but Plaid are still a relatively young company.
Plaid offers Visa the opportunity to take fees from the “other side” of the consumer’s relationship with the bank: in GraphQL terms, the queries against the bank account rather than the mutations on it. This is somewhere Visa currently aren’t able to capture value.
Plaid’s target customers are software engineers, but the institutions bottlenecking Plaid’s growth are banks. Now that Plaid has the weight – and half-century of personal relationships – of Visa behind it, the product itself can grow substantially more.
Plaid are also, so far, focussed heavily on the U.S. market. Fintech opportunities are global, so Plaid can now lean on Visa’s global reach to expand internationally.
All told, Plaid could represent a significant income stream for Visa in its own right.
3. Plaid is part of a broader Cambrian explosion in fintech.
Finally, this purchase reflects a wider trend.
As Stripe, Twilio, Algolia, and now Plaid have shown, making developers happy is big business.
But making developers happy and productive also has serious downstream effects. It reduces the amount of time and money it takes to create new products. It encourages the development of new tools which themselves make developers happy and productive, effecting a Cambrian explosion of new products and tools.
Just like the set of norms and tools developed around open source, and composability in smart-contract technologies, making it easy, cheap, and secure for developers to access financial data will only accelerate the potential of technology – and, crucially, along dimensions we can’t easily predict.
In short: it increases the amount of innovation possible in the world.
If I am at all correct about this, then Visa have shored up their excellent business model, given themselves access to an attractive new revenue stream, and taken control of a resource which has already been, and will continue to be, central to sustaining and growing the fintech space and, eventually, the global economy.
Thanks to Jessica Cooper, Richard Burton, Peter King, JS Denain, and Jonny Corrie for their notes and comments.
A while ago, J and I noticed we’d lie in bed, fiddling with our phones for 30, 40, 45 minutes, an hour, many, most mornings, every morning. Sometimes our fiddling was productive, clearing emails, writing lists, researching some topic or other – but most of the time, it wasn’t.
And it’s easy to justify it to yourself if it’s AngelList, or LinkedIn: you’re keeping up with career options. Hacker News is obviously of great professional concern to a software engineer. Twitter keeps you wired into the zeitgeist. And, as for Facebook, what could be more important than friends and family?
The point, of course, is that I’m fickle and distractible and the only thing that can salvage my mornings, whittle and form them out of the chaos, is to leave the matter out of my hands entirely. Leave motivation and bandwidth in the hands of a system, and make delivery an inevitability – or at least a reliable expectation – rather than something subject to whimsy and caprice.
So we built a system, and we call it “book hour”. Every morning, we read, at least, a few pages of a dead-trees paper book. Before touching our phones.
A glimpse of analog before our days become digital.
You can turn off your alarm. If somebody calls you, you’re allowed to answer. And it’s flexible: if you both agree on an exemption – early start, late start, got to catch a flight, etc. – then it’s allowed.
But it turns out that we rarely need an exemption, because beginning your day with a few pages of a book is actually a really nice thing to do.
Carrot: you end up reading more, and, in a relaxed and controlled manner, gradually phase into your day.
Stick: If you touch your phone before reading a few pages, £10 goes in a savings pot in our joint bank account. When the pot reaches some amount, we take ourselves out for dinner.
This is good because it serves the psychological function of an incentive without any great real-terms material loss (we’d probably spend the money on eating out anyway.) It hurts without hurting.
It’s effective, and I urge you to try it out if you too are looking for an easy and fun way to cut down on screen-time.
It’s also, I think, illustrative of a more general approach to productivity, good work, and human happiness:
Build systems that minimise friction, and, where appropriate, align your incentives with your interests.
Research is chaotic, but it’s okay, because we can build routines which encourage regular, structured work and limit the possibility of procrastination.
Memory is chaotic, but it’s okay, because we can use spaced-repetition to minimise friction and make long-term memory a choice. (As for incentives: how about an Anki hour?)
The shape and structure of data is chaotic, but it’s okay, because we can work with statically typed languages and write unit tests, both of which have all kinds of good upstream effects.
The world is chaotic, but it’s okay, because systems help tame it.
Ruth Kinna, Pelican, 2019.
Most political ideologies have clear theoretical commitments. Liberalism: the individual as the primitive unit of society; his wellbeing subordinate to, or exhausted by, his freedom; doctrines of rights which circumscribe and define that freedom sitting at the base of any institutional arrangements. Socialism: the collective as the primitive; the individual’s wants subordinate to the group’s needs; a commitment to equality expressed in the common ownership of property.
But anarchism doesn’t really seem to fit. Anarchism, it seems to me, isn’t a political ideology at all: it’s more like a family resemblance, each anarchism approximating the others to a greater or lesser degree, but none admitting of a common core or shared basis. A fluid set of concepts aimed at achieving a form of radical egalitarianism rather than a concrete theory. Or perhaps, like conservatism, it’s more of a temperament, an inclination to gesture toward an outcome, rather than an explicit set of instructions to achieve it.
‘Anti-capitalist egalitarianism’ holds the clue to unlocking it, says Kinna. But ‘anti-capitalist egalitarianism’ is hardly a clearer term than ‘anarchism’.
In one direction, it veers into a Kantian metaphysical liberalism of totally self-regulating agents. In another, it seems to collapse into communism. So the exponent of anarchism as a distinctive tradition must not only explain anarchism on its own terms, but also situate it relative to the primitives of both the liberal and communist traditions, without relying on the primitives of either.
It turns out that such a tradition can be cleaved out from between the two extremes. But it’s awfully difficult to do cleanly.
Kinna does well to reveal anarchism’s parallel world of literature, art and debate. And she does a good job of casting the anarchist in a positive light, of repainting them out of the colours of a psychotic lover-of-chaos and into something a little more humane.
But it’s not a good book.
One problem is Kinna’s bias, and how it can hinder the book’s analytical power. This is advertised as a “sympathetic account”, and, to that extent, it delivers: she clearly has an affinity with the anarchist programme and is deeply immersed in its literature. But that’s also what makes it a tough book to follow: her familiarity means that she never really explains the basics, leaving the rest of us to reconstruct the edifice on which her explanations sit.1
This lack of an accessible introduction means that, to the outsider, it is a book of half-thoughts, non-sequiturs and passages groaning under the weight of technical terminology:
The rejection of domination unifies anarchists in shared struggles against the monopolization of resources and the centralization of power, representation, racism, imperialism and authority, while leaving the institutional and sociological mechanisms that explain it open to discussion.
Passages like the above are littered throughout the book, and yet the core concepts they turn on are never really explained. Is domination just shorthand for the ‘monopolization of resources and the centralization of power’? If not, what is it? And if so, why isn’t that compatible with federalism and some liberal anti-trust laws? Isn’t the point of representation to centralise power? And what does it mean to centralise racism and imperialism? Why is authority a bad thing, its centralisation to be struggled against; doesn’t its goodness follow analytically?
And why couldn’t it be that these institutional and sociological mechanisms justify, not just explain, the phenomena? Why accept these normative claims in the first place? Answers are not forthcoming, and so the whole thing feels incoherent, and in-groupy.
It is at its most incoherent and in-groupy in the section on education. Education is an important piece of the anarchist puzzle, since most people are in fact decidedly not anarchists, and the political organisation it proposes requires individuals thinking and acting freely in anarchistic (i.e. egalitarian, ‘non-dominating’) ways. But anarchist thought on education, beyond just rehashing Marxist ideas about power sustaining power through ideology, is deeply unenlightening:
Knowledge is underpinned by linear, instrumental reasoning and this is manipulative and alienating … Education … comes, instead, through re-wilding: reconnecting to undomesticated, genuinely ecological and gentler systems of knowing.
And so it goes on, and on, and on.
Inaccessibility is this book’s original sin, but it also feels like it’s been rushed to print. Structurally, it’s organised thematically (Traditions, Cultures, Practices, Conditions, Prospects; followed by a set of anarchist biographies, which is mostly filler) and yet it focusses heavily on the historical development of the ideas, with the result that it keeps jolting, restarting; awkwardly lapsing into chronology, bumping against the ostensible thematic structure. Each insight and thinker tumbles into the next, presenting a cacophony of anarchisms, rather than a single unified theory. All of which means there’s little to no sustained argumentation.
The biggest sin, however, is the lack of a genuine multi-sided discussion of political violence. Government actions are described as “horrifying brutality and evident injustice”; anarchist assassinations and violent direct action are described in much cooler, theoretical terms. Her sympathy means we miss any real discussion of these very important questions: the extent to which political violence is legitimate, necessary or just. And while I understand her reluctance to encourage the typical framing of anarchism as chaos, violence and disorder, violence is anarchism’s shibboleth, and any book on the subject ought to address it.
Instead of a subtle, informed, nuanced debate of both why these given thinkers find it legitimate, and under what conditions we might today, we get quiet acquiescence, defensiveness, deflection:
One example of this is the debate about the ‘black bloc’ – the protest tactic associated with political confrontation. Another is tactical diversity … resonant with the fluidity of historical anarchist activism, [which] encourages activists to ask whether a proposed action is ‘effective at generating power’ rather than ask whether it is ‘peaceful or violent’.
That’s as close as we get to a discussion of this central issue, and it’s a much poorer book because of it.
There’s a lot of content in here. Kinna knows the tradition well. And it may be a valuable reference for somebody already au fait with the anarchist tradition; someone already predisposed to buy what it’s selling. But that’s not me.
Some of the main concepts – domination, power, self-emancipation – echo Marx, but seem to be used in a different way; anarchism doesn’t share Marxism’s explanatory basis of historical materialism. Kinna never really explains what anarchists mean by these concepts, perhaps because they’re used so variously that there isn’t any common definition to give. ↩