...On Technology
A continuation of the discussion on AI and technology: their creeping denervation of humanity versus their utility to it.
The essay The Blurring Test by Librarian of Celaeno is an interesting observation on the vector of Artificial Intelligence (AI) in society. I encourage anyone reading this to first read the linked article.
I won’t get into the nuances of the nebulous field of AI, as the article focuses on Large Language Models (LLMs), which everybody is familiar with by now, and I imagine the Librarian’s opinions generally apply to any other model.
I have developed machine learning models in my career to analyze language as a means of producing better designs (think Don Norman’s The Design of Everyday Things applied as algorithmic insight), and to ‘codify’ high-value, subjective, repetitive tasks. Call that automation if you must, but I argue it is closer to augmentation: there are many opinions on how automation replaces human labor, but my experience is that it only shifts the laborer’s skills from direct to supporting roles, with less danger, higher quality, and a net positive hiring rate.
I only mention my background for the sake of context and not as an appeal to authority. I am not an expert on the mathematics of AI, or even the software engineering, but I have applied these concepts in useful cases. I am also a practitioner of the contra-machine: I found myself dependent on machines (and other people), and I seek to decrease that reliance. Hunting, farming, building, and even reasoning are my personal manifestations of independence. Yet I still use AI, and I am excited about its application prospects.
Reducing the Creator-Tool Gap
Technology today is an expression of an industrial network. Almost any current advancement in technology relies on a stack of industrial capabilities. In the AI world, the massive server farm funded by venture capital (in turn funded by other industries) runs on critical hardware from advancements in the silicon industry, is powered by advancements in the energy industry, uses data from the internet industry, and so on. A historical delay in any of these industries results in a delay in the technology we see today. This reliance applies to any technology (although I don’t like to speak in absolutes, and there is likely some ‘advancement in technology’ of wooden windmills or something that I would argue is more of an interesting side quest than the main storyline).
Technology has the obvious benefit of improving quality of life (yes, sometimes to our own detriment, like dopamine drips of porn and Doritos apps). What I am most excited about with AI, however, is the part of technology that seeks to reduce the skills required to create and to further advance technology. In Arnold Kling’s essay Anticipating the Uses of the new AI, he speaks of robotics as a next-generation pairing with AI to allow non-engineers to program robots. However, between the advent of push-to-program cobots a decade ago and, in smaller ways, Wolfram Alpha’s use of natural language in computing, many of the core technologies to reduce the skillsets required to use robots already exist.
Where I had to use machine-level assembly or higher-level languages like BASIC, C, and Python (or a robotics company’s proprietary language) to program, the non-engineer can now use block diagrams and compartmentalized computation to execute their desires. In fact, I have used this to my advantage to train biologists on automating tasks, expanding my influence when classical engineers were in short supply.
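To make that concrete, here is a minimal sketch of what ‘compartmentalized computation’ means in practice. Every name in it is hypothetical, not any vendor’s real API: the low-level call exposes joint-space math, while the high-level block lets a non-engineer express intent directly.

```python
# Hypothetical illustration of lowering the skill floor: the same action
# expressed at two levels of abstraction. All names are invented.

class Robot:
    def move_joints(self, angles_rad: list[float]) -> None:
        # Low-level interface: the caller must know kinematics and units.
        print(f"moving joints to {angles_rad} (radians)")

    def close_gripper(self, force_n: float) -> None:
        print(f"gripping at {force_n} N")

def pick_and_place(robot: Robot, source: list[float], dest: list[float]) -> None:
    """High-level 'block': hides the joint math behind a single named action."""
    robot.move_joints(source)
    robot.close_gripper(force_n=20.0)
    robot.move_joints(dest)

robot = Robot()
# The engineer writes the block once; the biologist composes blocks by name:
pick_and_place(robot, source=[0.0, -1.57, 1.2], dest=[0.5, -1.2, 0.9])
```

The expertise does not disappear; it moves into the block, where one engineer’s knowledge can be reused by many non-engineers.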
The gap between creator and tool was already shrinking. AI only shrinks it further.
Realigning the Intellectual Ladder Rungs
Why did a biologist sign up to program robots in the first place? While I would encourage them to dig deeper to understand the core of the technology, there are diminishing returns for them to do so. They simply wanted to create. More specifically, they wanted to create solutions; learning the robotics side was a means to an end. Kids’ coding and robotics programs abound for this same reason. Having rejoined my alma mater for more research work, I see further evidence of a shift from the applied-theory version of engineering to a solutions-focused version. Eyes glaze over when I discuss Fermi distributions but are at rapt attention when I discuss productization.
Is this bad, indicative of a lazy student base and/or a loss of the level of knowledge needed for insightful technology improvements? Possibly, but I also see it as a demarcation between the desire to create and the desire to understand, which seem to be conflated by higher education and even by our own metrics of success. As the Librarian states:
I teach because learning is not the mechanical process of intaking information but a joining of the souls of student and teacher, an uplifting to higher things shared between them.
Our current education approach forces students to violate this ethic in studies they do not care about. For this and other reasons, many have argued that the education industry is over-saturated, filled with students seeking fortune rather than knowledge - and usually acquiring neither. Kids (and increasingly adults, too) are incentivized by the pursuit of money and class to continually pump dollars into the education system for some nebulous net benefit, rather than seeing education as the pursuit of excellence - the higher ethic, in my opinion, and the one with fewer devotees. These coasting students take up a limited number of spots that could otherwise be occupied by passionate students seeking deep understanding (and thus a better match for the program), or they drag the education quality down when both student types must interact.
A thought comes to mind here, though: AI can provide much of a general education at a fraction of an individual’s education expenses by aggregating and distilling information without requiring research skills. Pushing a skillset down a few rungs on the intellectual ladder to satiate the desire to create could have the consequence of filtering the passionate students into higher education while allowing the fortune-seekers to mine for gold without diluting the totality of the education experience. (However, education is a business, and no business will give up its cash cow, or otherwise allow a technology to usurp said cow.)
It would seem counterintuitive to be a proponent of both elitism and access in this way, but I argue that the desire for creation is general and the desire for deep understanding is rare, and accepting both modalities could result in more profound outcomes for both groups. Unfortunately, the incentives for producing as much as possible as fast as possible are strong, and the current system (from education to corporate) will continue to endorse the degree mill. The system already had issues with integrity and competence because of this, and AI tools are only exposing them further.
Regardless of educational pursuits, and regardless of whether we lionize excellence or production, AI will certainly broaden access to otherwise inaccessible knowledge.
Toward a Bifurcation
The argument the Librarian seems to lay out is that technology abets our worst, laziest instincts and dampens our motive force - that is, we are headed toward a highly passive, consumptive hell, losing our humanity along the way. I can’t argue with the observation of that resultant vector. However, I would argue that transient conditions, rather than intrinsic ones, might inform the vector. As in the case of higher education, and the corporate world writ large, high-production incentives have perverted the pursuit of excellence. This pursuit of excellence and individualism is what I consider the root of humanity, and AI can close gaps and create access to support this effort.
The ‘poisonous approach’ the Librarian mentions as a damper on growth, and that AI seems to embody, may then be a symptom of current systemic weaknesses rather than a flaw embedded in human nature that forever trends toward mass stupidity. If the human desire for success is the voltage, then the circuit that routes the current - the education system, the corporate structure, societal systems, and so on - might be of a poor or outdated design. If we ignore the underlying design and throw AI on top of junk, of course it will result in junk. AI porn is still just porn.
It is how we use the technology that matters, and I think we will find the same modalities outlined above (excellence versus production focus) in the use cases of AI. Yes, humanity creates machines to replace our work, but the result can be to divert energy toward more efficient, higher-level applications. Or it can divert energy toward producing and consuming more porn and Doritos. Or both.
The former branch can be seen in the inefficiencies of the aforementioned ‘industry stack.’ For example, there are entire bureaucratic systems built around regulations and ensuring conformance with them (read: paperwork, and mountains of it). These ancillary systems are of relatively little value but high risk if ignored. The work is mundane, requires special skill sets, and is non-value-added (in the Lean Six Sigma sense), with entire companies built as succubi to ‘assist’ you with it. This is a clear instance ripe for AI usage, and in fact I will likely develop an API to assist my own work. AI here fills a resource gap and creates efficiency. Much in the way that an industry popped up for IT work, an industry for ‘prompt engineering’ is being established right now, which will further integrate AI into many more industries and specialties.
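As a hedged sketch of what such an assistant might look like (the tool I have in mind is left unspecified; this example assumes the OpenAI Python client and invented placeholder documents), the core loop is simply: feed the regulation and the procedure to a model and ask it to flag gaps.

```python
# A minimal sketch of an AI compliance helper, assuming the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
# The prompt, model choice, and documents are illustrative, not prescriptive.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_conformance_gaps(regulation: str, procedure: str) -> str:
    """Ask a model to flag where a procedure may not conform to a regulation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You are a compliance assistant. Flag likely gaps "
                        "between the procedure and the regulation, quoting "
                        "the relevant regulation text for each flag."},
            {"role": "user",
             "content": f"Regulation:\n{regulation}\n\nProcedure:\n{procedure}"},
        ],
    )
    return response.choices[0].message.content

# Placeholder documents; a real tool would load the actual paperwork:
print(flag_conformance_gaps(
    "Records must be retained for seven years.",
    "We archive records for five years, then purge them.",
))
```

The value here is not the model’s judgment but the triage: hours of cross-referencing reduced to a review of flagged items.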
The latter branch is the generalized mass of people utilizing AI, which, unfortunately, supports the Librarian’s opinion of dehumanization. The recursiveness of the AI-user loop in this branch would seem to divorce users from skills and to create a seething mediocrity without external impulses. However, while technology does trend toward homogeneity, correlation does not imply causation.
We may be seeing the beginning of a bifurcation of AI use cases with very different ethics and experiences. I posit that pursuits of excellence using AI, less numerous though they are, prove an exception to the rule and show that humanity - excellence and individualism - can remain intact alongside technology. The methodology of the system on which technology is predicated seems just as rich a target as the technology itself.
In any case, we are certainly in the middle of a paradigm shift. While the pessimistic argument could prove right - that AI furthers the loss of the individual’s value (skillsets, purpose, motivation, etc.) - we could also simply be realigning the individual’s capacity with the need for such capacity. However, if we layer AI on top of a poorly designed system, we risk losing any benefit and adding only negatives.
Uncertain Times in an Uncertain System
AI can easily become yet another step down the doom-path by eradicating general knowledge and skills in the pursuit of efficiency and manic productization. AI can also provide resources to the resourceless, catalyzing fields and products that may not have existed otherwise.
However, I do think that societies trend toward giving up their individualism (read: humanity) for the sake of comfort, regardless of whether the root cause is a feature of the system or the incentives of the system. Either way, the prospect of losing more independence, and thus more skills, would be unfortunate.
I lie somewhere between cautious and nihilistic optimism on AI: ‘cautious’ because there are unanticipated effects, and ‘nihilistic’ because the technology will march on with or without me. There is great personal freedom to be had from an enabling technology like this. There is also a creeping denervation of humanity - a loss of connection to the analog world and the skills required to function in it. It may not be a question of whether this machine will rob us of our humanity, but of which machine we will allow to do so.
As for me, I will continue to use AI and to explore my analog humanity.