Outsourcing Our Existence to the Emergent Evolutionary AI Symbiosis: The approaching AI / human evolutionary event horizon

AI rendering of author, Charles Ostman

by Charles Ostman

A recent, ongoing Reddit blog appeared – “ChatGPT is just the beginning: Artificial intelligence is ready to transform the world.”

It really is “just the beginning”; I used these very words myself to describe this rapidly approaching evolutionary event horizon. We are perched on the edge of a transition which will invoke a series of rapidly expanding, irreversible increments in this early-stage symbiotic correlation.

Some seem to be myopically fixated on ChatGPT as a sort of singular focal point of the concept, arguing that it’s only good for storytelling or whatever, completely missing the point of the transition that’s already accelerating. There are several very capable AI engines that can compose articles, carry on conversations, and create a very diverse range of “creative content” . . . I’ve already used some of them, though not to write my commentary here in this section of the article.

Sampling a range of blogs and other media platforms festooned with commentary, it’s an intriguing social petri dish, a metaphor of the human condition extruded through the mandrel of anticipation, hope, and dread mixed together, out of which has proliferated everything from QAnon-like reptoid conspiracies somehow linked to AI, to the blessings of the ubiquitous AI utopia hovering on the horizon, at least in the minds of some.

In this particular blog is the aforementioned range of responses, from AI being the hardened dystopian disaster in the making, to absurdly Pollyannaish, stylized corporate AI market-speak. That variation of responses was itself an interesting sample from the momentary social petri dish, as people began to grasp that this form of emergent, ubiquitous AI was no longer a distant sci-fi concept, but instead was very rapidly accelerating into functioning competitive startups and business models. Included in this missive is a slightly edited version of my response in that blog, along with many others.

The concern over what I call the diminishing “human relevancy index”, however, is very real, and is the #1 topic among many of the more credible, serious thinkers pitching in at the moment. Realistically, it’s hard to say what the outcome is really going to be.

Ever more demanding labor unions and activist groups have been heavily pushing for increased wages, which means that costs keep going up. This was happening long before the recent sudden burst of inflation, but even then the concept of going to full-on AI was becoming a much more serious consideration. The so-called labor activists just didn’t seem to grasp this concept, as they were engineering their own expulsion.

However, this has now been accelerated, ironically, by the COVID quagmire, which made it ever more obvious to many that there were vast layers of human employment overhead that simply were not worth the cost of maintaining, as these unsustainable employment models could now be effectively outsourced to AI.

This toothpaste is not going back in the tube. It doesn’t matter whether you like it or not; it has its own momentum, which is accelerating. It’s not just about this one bot that has generated so much hysteria among some. If anything, that fixation is but a momentary distraction from a much more pervasive infusion of ubiquitous AI into myriad layers of society and industry.

Regulation? What exactly are you going to regulate? Is there going to be a human relevancy index that is somehow tied to a taxation model for utilizing AI instead of humans in many mid-level management type jobs, for instance?

The suggestion here is that many mid-level management jobs, with their inflated salaries and impressive-sounding titles, will likely be among the first to be heavily purged, not just laid off, but removed entirely from future operations.

As for “preventative” AI regulatory compliance protocols, mixing science and tech development with politics is a recipe for disaster, as it almost always has been. Slow and plodding, the gears of bureaucracy grind away as the real world rushes by, and then, in a burst of attempted functionality, respond with crushing over-regulation, using a sledgehammer to swat a fly.

The film “Her” came out in 2013, just a bit ahead of its time, a fictional account of what is beginning to emerge, a decade later.  In the late 1990s, I participated in the “Virtual Humans” conferences, which depicted much of what is becoming apparent now.

Everything from law practice, to medical and financial services . . . the list is actually endless, as the ever-expanding range of application domains is going to be infused with multiple layers of correlating AI functionality engines and process management.

Will some be displaced? Yes.

Will this force a type of adaptive evolution? Very likely.

Evolution does not necessarily favor the “fittest”; it tends to favor the most adaptive.

Starting with a blank screen in DreamStudio, I asked for “futuristic architecture, brilliant colors, crystals, landscape”. In under 5 seconds, this appeared.
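For those curious what that prompt-to-image step looks like programmatically, here is a minimal sketch using the open-source Hugging Face diffusers library with a publicly released Stable Diffusion checkpoint (the model family behind DreamStudio); the checkpoint name and generation settings here are illustrative assumptions on my part, not DreamStudio’s actual hosted pipeline.

    # Minimal text-to-image sketch (illustrative; not DreamStudio's own pipeline)
    import torch
    from diffusers import StableDiffusionPipeline

    # Assumed publicly available Stable Diffusion checkpoint
    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes a CUDA GPU; drop torch_dtype and use "cpu" otherwise (much slower)

    prompt = "futuristic architecture, brilliant colors, crystals, landscape"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("futuristic_architecture.png")

On a recent GPU a render like this takes only a few seconds, roughly in line with the response time noted above.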

Most likely to be the most affected in the near term is mid-level management, currently populated with employees pulling in high-end salaries for impressive-sounding titles, the relevance of which is already starting to be questioned, a trend catalyzed by the COVID pandemic, but now amplified by the incoming tsunami of ubiquitous AI, ironically further fueled by an incoming recession.

Many of the “creative” art forms are already being infiltrated by AI, although I should point out that this article was organically composed by me, but that may change . . . would anyone be able to tell the difference? I’ve already nibbled at some article content generation at a couple of different AI sites offering this “service” (see examples included below). New startup competitors are sprouting up like mushrooms after a spring rain.

DreamStudio generative AI rendering of a “synthetic lifeform”, with instruction text to run ALife descriptive parameters on a previously rendered 3D fractal, which was then fed into a genetic morphing engine (ArtBreeder), the results of which were then further AI-rendered in DreamStudio.

Meanwhile, back at the proverbial “virtual ranch”, I decided to enlist the services of several different generative AI “text to writing” engines; this example is from the Simplified app (I have nothing to do with this company, nor am I promoting anything, just citing examples for this article).

Co-authored with “Simplified” AI engine

There are several methods to do this.  One can construct outlines, create content intended for specific markets or styles, formats, applications and so on.

This was very quickly done, literally in seconds per paragraph. I would submit either a section heading or a contextual “chunklette” of text for the Simplified system to nibble on, and see what came back (those results in purple).

This could be initiated with specific text commands, or with a “continue” option to keep generating content from highlighted, previously rendered content.
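Simplified’s internals aren’t public, but as a rough sketch of what this kind of submit-and-continue loop looks like under the hood, here is a comparable example written against the publicly documented OpenAI text completion endpoint; the model name, parameters, and the three-pass loop are my own illustrative assumptions, not what Simplified actually runs.

    # Rough sketch of a submit-and-continue text generation loop
    # (uses the OpenAI completion API as a stand-in; Simplified's internals differ)
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def generate_continuation(seed_text, max_tokens=200):
        """Submit a heading or 'chunklette' of text; return the AI continuation."""
        response = openai.Completion.create(
            model="text-davinci-003",  # assumed model choice, for illustration only
            prompt=seed_text,
            max_tokens=max_tokens,
            temperature=0.7,
        )
        return response.choices[0].text.strip()

    # Start from a short instruction, then keep "continuing" from whatever comes back
    draft = "Write an article on human / AI co-evolutionary symbiosis"
    for _ in range(3):
        draft += "\n\n" + generate_continuation(draft)
    print(draft)

Each call returns in a second or two, which matches the roughly two-second responses described later in this piece.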

What is perhaps most noticeable in the AI-rendered contextual content examples below is how it constantly tries to put a positive spin on the organic content I fed into it, no matter what its potential consequences might actually be. My guess is that the neural networks utilized in this AI engine have been conditioned (trained) to put an emphasis on the “upside” of ubiquitous AI immersion.

Among the myriad threads of interest woven into that tapestry of future virtual humans in symbiotic coevolution with their human counterparts, some of the more esoteric areas soon to be challenged are the legal parameters of definable sovereign personhood, pertaining to virtual vs. “real” or organic persons.

At what threshold do the appearance, behaviors, attitudes, and/or modes of expression, etc., become legally definable as unique enough to fit within the parameters of copyright protocols? And how do these traditional legal boundaries apply to real humans who may be “similar” to virtual ET entities, or the other way around?

As autonomous AI invents new content, be it written text, music, visual art, the creation of characters and entities, and beyond, who “owns” this? Would it be the owner of the computer and software platforms? Would there be preemptive copyright parameters extended to “anything AI xyz creates”, with licensing rights sold to or authorized by some form of contractual instrument, and so on?

Music, graphic art, video content, written composition, realtime conversation, virtual humans and “smart” avatars, voice . . . just the beginning. As I once said in one of my ramblings in a publication 2+ decades ago, “the voice you hear, the entity you sense and interact with, may or may not be human, nor will it matter. They’ll be indistinguishable.”

And so it is.

Just a sample from one of the many text-driven AI character / smart avatar generation engines now available. The more advanced versions don’t just create pretty images; they can create complete, dynamic avatar entities which have specific personality types, emotional interaction descriptors, personalized voice types / inflections, “virtual friend” types, and so on.

These are my 100% organic utterances here, but in the evolving article below, you can see what example AI-generated content looks like.

The actual contextual content was AI-rendered in seconds, with no editing on my part. I could have spent a modest amount of effort tweaking the stylistic parameters and contextual references, re-editing and re-submitting, but I wanted to provide a glimpse into the most basic examples of the concept.

I start here, by submitting a small synopsis of the general context of the article:

Starting instruction in “Simplified” generative AI text engine:

“Write an article on human / AI co-evolutionary symbiosis”

Organic input thread:
(please note: text in italic is my organic input, text in color is AI-generated)

ChatGPT is just the beginning . . .

This title is becoming a common, if not viral meme spreading throughout the virtual world, a most recent example being on Reddit 2-1-2023, as a blog post.

It really is “just the beginning”, I used these very words myself to describe this rapidly approaching evolutionary event horizon. We are perched on the edge of a transition which will invoke a series of rapidly expanding, irreversible increments in this early stage symbiotic correlation.

The film “Her” came out in 2013, just a bit ahead of its time, a fictional account of what is beginning to emerge now, a decade later. In the late 1990s, I participated in the “Virtual Humans” conferences, which depicted much of what is becoming apparent now.

Emergent Synthetic Sentience as an Evolutionary Symbiosis

Over the past few decades, technology has become increasingly intertwined with our everyday lives. From our morning alarms to the phones that never leave our side, it’s clear that our technology is here to stay. But, what if our technology evolved past our current understanding of it and entered into new realms of intelligence that we don’t yet comprehend? This concept, called emergent synthetic sentience, is an already existing phenomenon that is expected to become more and more prominent as time progresses.

Emergent Synthetic Sentience (ESS) is the emergence of conscious and autonomous capabilities in artificial systems. It is the ability for a machine to think, reason, and make decisions for itself, independently from human coding or commands. It involves the integration of complex, interdependent functions to create intelligent systems that are able to interact with the environment and adapt to new situations in order to better achieve a certain goal.

The potential for ESS comes from a combination of advances in computing technology, machine learning, artificial intelligence and more. Artificial neural networks are proving to be a useful tool

AI co-evolutionary symbiosis

As we enter 2020, we have “smart” devices scattered throughout our lives, from refrigerators and thermostats, to vehicles and medical onsite health monitors, all connected to the internet of things.

However, with A.I. as the software of choice, and the emergence of machine learning as the engine of this emerging symbiosis, and a cloud-based processing platform, we will soon see a point of no return.

We are on the brink of a self-evolving feedback loop, where the interdependent connection of all these smart devices, will create an intelligent, self-evolving, broad-based interaction.

This interaction, combined with our human-driven capacity to think, create, and innovate, will form an integrated entity, which will eventually take on its own collective, larger than life identity.

As this new identity emerges, it will begin to shape, form, and foster our own individual and collective future. From pervasive networks, to advancing autonomous systems, and beyond, “just the beginning” will become an understatement.


Knowledge & process dynamic complexity, scale, and velocity is exceeding human capacity for mission-critical decision rendering, compressed into ever-shortening time scales, in a hyperconnected, operational ecosystem for which there is no known precedent.

In order to address this complexity, organizations need to develop a systems approach that combines data-driven predictive models with flexible and adaptive workflows. By leveraging advances in artificial intelligence (AI) and machine learning, organizations can create intelligent systems that are able to dynamically and continuously analyze patterns in data and capture causal relationships between variables in order to make better, faster and more accurate decisions.

These systems must also incorporate tools and techniques to predict the impact of changes and adapt quickly to unexpected events. Additionally, organizations must ensure their systems are secure, resilient and transparent to build public trust.

Will some (humans) be displaced (by AI)? Yes.
Will this force a type of adaptive evolution? Very likely.
Evolution does not necessarily favor the “fittest”, it tends to favor the most adaptive.

 As AI technology continues to improve, humans may need to adapt skills to accommodate this evolution in technology. This could mean learning new skills or changing the way they think about problem-solving.

The separation between biological and non-biological forms of intelligence and life is becoming ever less distinguishable, or relevant. 

Synthetic sentience may not necessarily be recognized as it emerges. Life, as it is currently known to be, is no longer confined to biological processes or constructs, but is encompassing entities, systems and networks which behave biologically. Synthetic biology is rapidly becoming manifest in myriad applications ranging from genetic modification to rendering entire organisms and hybrid forms of life on demand.

Likewise, non-biological forms of intelligence are utilizing the same processes and architectures found in the natural world, employing artificial life, virtual worlds and self-organizing agents. In this sense, there is no longer a distinction between the organic and the non-organic or the metaphysical and the physical, but rather a continuum of life that is both active and reactive.

In recent years, the symbiotic relationship between human beings and machines has been intensely discussed. While some worry about the potential of artificial intelligence (AI) to displace human labor, others envision a near future in which humans and artificially intelligent machines interact through a co-evolutionary symbiosis.

This future of human-machine symbiosis could be an especially powerful boon for labor, with AI agents taking over strenuous or dangerous jobs, or helping to tackle the workload of highly skilled humans. Humans, on the other hand, could enjoy the freedom to explore new opportunities, or simply time and focus on higher-level goals.

The technology for human-AI co-evolution has already been proven through tasks like unmanned aerial vehicle navigation and service robotics tasks, which rely on a hybrid framework of AI algorithms and human intervention. This hybrid approach is a symbiosis of the two components, with each one constantly learning and refining itself to better the other.

One example is the development of collaborative robots, or cobots, which work side-by-side with humans.

Most likely to be the most affected in the near term is mid-level management, populated with employees pulling in high-end salaries for impressive sounding titles, the relevance of which is already starting to become questioned, a trend catalyzed by the COVID epidemic, but now amplified by the incoming tsunami of ubiquitous AI, ironically further fueled by an incoming recession.

 Mid-level managers, having to blend tech and customer service functions, may find their switch increasingly difficult and be made obsolete by independent automation or simply lower-wage customer service associates and cloud-based virtual customer assistants. Flexible automation, algorithmic decision-making, predictive analytics, and the ability to instantly remove customer service anomalies, will soon replace the mid-level supervisors who today divide and control the customer service staff. In the near- and mid-term, the roles of customer service managers, coordinators, and directors, may be threatened.

Music, graphic art, video content, written composition, realtime conversation . . . just the beginning. As I once said in one of my ramblings in publication 2+ decades ago, “the voice you hear, the entity you sense and interact with, may or may not be human, nor will it matter. They’ll be indistinguishable”
And so it is.

Music, graphic art, video content, written composition, realtime conversation, augmented reality, virtual reality, hologram projection, robotics, artificial intelligence . . . are just the beginning. As technology advances, so too will the levels and kinds of engagement. We have already seen the introduction of AI-based assistants, as well as interactive conversations with AI-based “characters” in virtual spaces to boost the quality of customer experiences and gaming beyond what humans can provide. We can expect to see further activities enabled with robotic objects, autonomous vehicles, and drones, for example, and greater precision and wider application of sensor-based decision making and situational awareness. There’s so much potential, and the levels of sophistication will steadily increase from here.

It’s an exciting future.

Well, define “exciting”.  I would suggest we are experiencing interesting times, perhaps to become vastly more interesting than some may grasp at the moment.

Exciting can be defined as stimulating, thrilling, and suspenseful. It can refer to something that causes strong emotions and creates an invigorating experience. Exciting activities can be thrilling, stimulating, and take one’s breath away. It can also describe situations or events that bring about new opportunities and often a sense of anticipation.

OK, it didn’t quite get the more subtle innuendo, but this was with no further refining, just raw contextual responses, averaging about 2 seconds each. These could have been refined much further with a bit of added contextual reference.