Neo-Luddism in the era of AI

I would like to open this essay by saying that I am very strongly against Neo-Luddism. I find it to be mostly an exercise in futility - can you really halt a movement that takes on a life of its own once it really gets going? Betteridge's law of headlines holds true here as well, of course. And so we find ourselves in a time of truly breakneck technological change, to which the current systems of power (Adam Curtis would be proud) are usually too slow to react.

In most arguments, I tend to try to view both sides as best I can, to varying effect - on subjects pertaining to technology the effect is really diminished. I grew up with a very specific ideal of what technology should be, so I find it difficult to reconcile that ideal with technological progress racing off in the near-opposite direction. Regardless.

On the positives, there are of course the billions of hours of soul-crushing work performed every year that AI could really streamline. Entire lives spent in front of Excel spreadsheets, tax documents, you name it. I personally believe strongly that every person has a calling, and I have a suspicion that sitting in front of a computer placing numbers in different cells is not it, for most.

Then you have self-driving cars. A true technological feat, the door is now open for people to get blackout drunk and still get home safe. Considering automobile accidents are one of the leading causes of premature death worldwide, this is a cause for celebration. At this point - and potentially sooner for some - a voice in the back of your head is probably telling you "yes, sure Chris, however".

However, what about the fact that cars are now software-controlled? What about the fact that they can be hacked? What about the fact that this is a technology that belongs almost solely to a handful of corporations like Tesla? What about all the people that AI will put out of a job when there are no more numbers to put in cells? I don't consider these to be the true problems of AI. I find them to be immediate problems needing solutions, of course, but they are reactionary more than anything else.

I would like to interject in my own thought process here for a second and state the obvious - AI has been around for decades, and what is now being popularized are advances in specific subfields (mostly generative AI, in the case of ChatGPT, the main way everyday people interact with the AI overlords). For sensationalist as well as purely aesthetic reasons, I talk about "AI" here focusing on the subject as one would view it from a philosophical perspective. A black box that takes in input and produces output akin to that of an intelligent being.

With that out of the way, let us return our focus to the problems AI creates in the now. I find them to be no different than the "problems" created by the automobile industry, or fridges, or the industrial revolution. And again, you will say: but Chris, the industrial revolution and its consequences - and I will agree, partly. It has not all been a simple, temporal, reactionary problem that can be solved with the creation of new jobs as humanity evolves. And this takes me to my next point.

The real problems of AI are philosophical in nature, in my view. AI is created in the image of its creators, and this primordial pool is not a fun one to look at from the get-go. This has to do with how AI operates - there is an input, as mentioned, which is fed into a black box, and then there is an output. Humans do not understand the black box part now, and as AI grows more and more complex, it is likely they never will. So really, you have an input and an output, take it or leave it.

But let us assume that for better or for worse the mathematics behind the black box is sound (this is not an easy assumption to make, and thousands of statisticians spun in their graves as you read that sentence). Surely one can make the argument now that the output is good - in a perfect world, perhaps. But we live in a deeply imperfect world, with millennia of deep-seated hatreds, injustices and atrocities. There are not enough trolley-problem images in the world to even begin to describe how difficult it would be to "level the playing field" for an AI, when we cannot even do the same as humans.

To visualize this problem for someone less tech-savvy, consider this very rough sketch of an example, meant simply to outline the problem. Let's assume you come from a poor family, and are seeking a loan from your local community bank. Your ancestors never had the opportunity to manage money, as in prior systems of government it was a privilege reserved for the already wealthy. The times have changed, however, and you vowed to make the most of your newfound freedom of choice. You now go to the bank, and their Frankenstein's version of Clippy takes in your details and will then tell you if you are eligible for a loan (let's assume for the sake of brevity that there are no risk calculations at play). The AI now sees that no one you are related to has ever taken out a loan before. Immediately, you would start off with a disadvantage.
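For the tech-savvy, the same dynamic can be sketched in a few lines of code. This is a toy illustration, not any real bank's model - every feature name, weight and number below is invented - but it shows how a scorer that rewards family loan history reproduces old privileges:

```python
# Toy loan-eligibility scorer. A model trained on historical data tends to
# reward features that correlate with past wealth, such as a family history
# of repaid loans. All weights and thresholds here are invented for the sketch.

def eligibility_score(income: float, family_loans_repaid: int) -> float:
    # In a real system these weights would be learned from historical data;
    # if that data reflects old privileges, so do the learned weights.
    income_part = 0.4 * min(income / 50_000, 1.0)
    history_part = 0.6 * min(family_loans_repaid / 3, 1.0)
    return income_part + history_part

# Two applicants with identical income, differing only in family history.
first_generation = eligibility_score(income=40_000, family_loans_repaid=0)
old_money = eligibility_score(income=40_000, family_loans_repaid=3)

print(first_generation < old_money)  # True: the same income scores lower
```

Nothing in the scorer mentions class or origin directly; the disadvantage rides in on a proxy feature, which is precisely what makes it hard to see from the outside.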

One does not need much imagination to see the above play out, as it is effectively already the way the system works. The only difference is that there are humans who more-or-less understand the system and can lean on the scales a bit to make it fairer. However, as AI progresses and takes on more and more of this workload, the problems will no doubt start to accumulate and create massive societal rifts. This is mostly fact, as far as I am concerned.

The "solution" to the above, however, is also very problematic. Governments all around the world will attempt to put their finger on the scale not as outside observers, as your random bank clerk might, but as part of the system, effectively building it in. Even assuming no corruption as is usually encountered during any bureaucratic process and system of power, it would be impossible to predict what the outcome of such a built-in counterweight would be, when one does not understand the black box to begin with, and what alteration of the input leads to a "better" output.

Which kind of hints at my main struggle with AI, which I saved for last. I could go on forever, of course, but I will no doubt return to this topic someday, and I do not really like to theorize ad nauseam. So far, humans have progressed technologically at a pace they could keep up with, but now they are putting more and more faith in a system which they do not (for the most part) understand, which is very much beholden to a board of directors and has to turn a profit. No doubt, people are already putting more blind faith into it than they should. It can summarize a page for you, it can translate words for you, it can book "the cheapest" plane tickets for you, once given your financial information.

It is very difficult for me to come to a conclusion here, both on the question of AI as well as thematically on this essay, but I keep returning to a quote from the film Pulse by Kiyoshi Kurosawa: "No matter how simple the device, once the system's complete, it'll function all on its own and become permanent." This system has become permanent, and we need to learn to live with it, to understand its shortcomings and how our own human nature can work against us. It is going to be a difficult road ahead.