This is For Everyone - Tim Berners-Lee

Finished reading: This is For Everyone by Tim Berners-Lee 📚

Firstly, I have no idea why, but I wasn’t aware of how intermingled Tim has been with high society. He doesn’t try to hide it in this book - quite the opposite - but it was a little startling to read about in current times. I wanted to tell him to read the room, more than once.

However, I really enjoyed the nostalgic trip the book provided. I grew up as the web grew up, so it felt both interesting and oddly cathartic. Tim took us from the genesis of the first page, through the various milestones of the web’s development, and his take on its future.

He certainly sounds more optimistic about where we could head than I expected, but maybe this was more wishful thinking than his honest expectation. The Solid project did pique my interest not long ago, and I might just be convinced to take another look at playing with it.

I can’t imagine a non-technologist being interested in spending the time reading this - but if you’re at all involved in the field, it is worth including on your list, in my opinion.

Currently reading: This is For Everyone by Tim Berners-Lee 📚

“But if we are to prevent these systems from exploiting us, it is critical that we get the data layer right. We need a layer where we control our own data, and we can share anything with anyone, or any agent - or no one.”

My (current) thoughts on LLMs

We’ve lived with high-profile LLMs for a while now, and a lot has been written about their effect on our future as a society. I’ve read a lot of it. I’ve lurched from existential dread to complete disdain and hatred. I feel my boat is steadying somewhat, and thought it might be useful to chronicle some thoughts here.

To me, the idea of a Large Language Model is not inherently positive or negative - useful or damaging - outside the context of its use and development. What we have is a system that predicts the “ideal” next word, with additional tooling built around that to influence which word comes out. What “ideal” means is highly dependent on the training data, the intention of the developer, and the intention of the user.
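To make that concrete, here is a toy sketch - entirely made up, and nothing like the internals of any real model - of picking a next word from a set of candidate scores. The temperature parameter stands in for the “additional tooling” that influences which word actually comes out; the scores themselves stand in for whatever the training data baked in.

```python
# Toy illustration of next-word prediction: hypothetical scores, not a real model.
import math
import random

def sample_next_word(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Pick a next word from raw scores via a softmax, scaled by temperature.

    Lower temperature favours the highest-scoring word more strongly;
    higher temperature flattens the distribution.
    """
    weights = {word: math.exp(score / temperature) for word, score in scores.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for word, weight in weights.items():
        cumulative += weight
        if cumulative >= r:
            return word
    return word  # fallback for floating-point edge cases

# Hypothetical candidate scores for the word after "The web is for ..."
candidate_scores = {"everyone": 2.0, "sale": 0.5, "robots": 0.1}
print(sample_next_word(candidate_scores, temperature=0.7))
```

The point of the sketch is only that the same scores can yield different “ideal” words depending on how the surrounding machinery is tuned - which is why I don’t think the core idea is good or bad in itself.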

The actions of some companies whose main products are LLMs have been revolting (at least, to me). Many people have turned violently against a technology whose reputation is most aggressively written by the big players. Silicon Valley companies like OpenAI, Anthropic, Google and Meta have acted like colonial masters, as brilliantly described in Karen Hao’s book Empire of AI.

The unethical practices involved in building models at the larger Silicon Valley companies are abhorrent and should not be ignored.

The sheer size of the (unrealised) promise from these grifters means their resources have to match the scale of the problem. At a pivotal time for keeping our environment habitable, the environmental impact and the massive contribution to climate change from the infrastructure behind these huge services are abhorrent and should not be ignored.

The wishy-washy, snake-oil feel of the goal of “AGI” should really be enough to turn most thinking people off these companies. Their products are not “thinking”; they have no concept of correct or incorrect, no sense of right and wrong. They are not sensing. They’re just a way to collect, store and query information, albeit in a way we aren’t used to.

Which makes it quite sad that a technology that could be put to good use in specific contexts is being led by what often feels to me like the worst of us. There is room for smaller models, probably on-device, that do not have these downsides. Tools used to solve a problem, not a planet-hungry solution in search of problems. I intend to direct whatever attention I give these matters to those working on that vision of the future.

In terms of my industry of Software Engineering, my advice currently looks something like this:

There is plenty else that concerns me about LLMs, and the internet - heck, the world - right now. I actually couldn’t promise you the web will be worth using in a year’s time. The impact on culture and the arts is, I think, as yet unknown. I suspect humans will always value human creation and art, and will find some way to ensure its authenticity. I am deeply worried about losing trust in anything we read, hear or see that is not directly before us in the fresh air. And then there is the reckless, sadistic march to build energy-sucking data centres, reversing any real hope of combatting climate change.

Happily, I realise I don’t need to know the answers to these things. I hope you don’t feel you need to either. It’s a time of rapid change, and being sure of much right now is probably a fool’s errand. For now, if I’m thinking of LLMs at all, I’ll be enjoying the small, open, worthwhile projects that solve real problems I encounter, in the real world.

Bye Bye Tiny Tiny RSS

The main person behind Tiny Tiny RSS is shutting down the project on November 1st:

The reasons for this are many but the tl;dr is that I no longer find it fun to maintain public-facing anything, be it open source projects or websites. As for tt-rss specifically, it has been ‘done’ for years now and the “let’s bump base PHP version and fix breakages” routine is not engaging in the slightest.

Sad news, but I’m sure the project will be forked and kept going in some vein.