Catch the Replay: Data Citizens presents Sheldon Brown

Data Citizens: A Distinguished Lecture Series offers in-depth talks by experts in the field of data science on a wide variety of topics including data visualization, big data, artificial intelligence, and predictive analytics. Catch the replay of a lecture on AI by academic, entrepreneur, computer scientist, and artist Sheldon Brown.

Sheldon is currently Co-Founder and Vice President of Product Design and Innovation at Virbela, a virtual world company spun out of his lab at UC San Diego, where he was a Professor and the MacArthur Foundation Endowed Chair in Digital Media and Learning. Sheldon was also the founding Director of the Arthur C. Clarke Center for Human Imagination, Director of the UCSD branch of the Center for Hybrid Multicore Productivity Research, and Co-Founder of the California Institute for Telecommunications and Information Technology (Calit2). He went on to help launch the Creative Computing Institute at the University of the Arts London as Professor and Research Leader. He has also been a Visiting Professor at NYU Shanghai and an Honorary Professor at Shanghai University. His multi-media artworks have been commissioned and exhibited at museums around the world.

This talk took place on Thursday, March 11, 2021 (3:00-4:30 PM EST). IDSC Founding Director Nick Tsinoremas glowingly introduced Sheldon Brown. Some minor technical difficulties on Nick's end and on Sheldon's end created a serendipitous segue to Sheldon's startup, Virbela. After Nick's highlights of Sheldon's career, Sheldon said he frequently gets asked to define his career: Are you an artist, are you a professor of computer science, or are you a business person? And his answer is: Yes, I do encapsulate all these things.

“We are in the midst of
a really profound transformation
in human cultural activity.”


Sheldon began: I'm a computer scientist. I'm an artist. I've been, primarily, in academia—although throughout my time in academia I have always done lots of business consulting for big businesses, as well as startups. A few startups have spun out of my lab, and now I'm focusing a lot of attention, in particular, on one of them [Virbela] because it's very timely. It's very telling of something that I hope is a takeaway message from this talk: that we are in the midst of a really profound transformation in human cultural activity that's been motivated, primarily, by computing.

Human cultural activity organizes itself into different categories over time because it makes sense to provide focus in different areas. And sometimes those areas get increasingly siloed and deep, and they have arcane knowledge bases that require complete dedication. But, we’re at a moment where these technologies and concepts necessitate us thinking laterally, and across different categories.


When I founded the Arthur C. Clarke Center for Human Imagination, it was focused on trying to understand the phenomenon of human imagination. And to do that, we needed to think about cognitive science and neuroscience and artistic creation and literary culture. All these things are components in what we might consider to be "imagination." If we think about a concept as ineffable as imagination, we should be informed by views from those different areas, and think how answers from those different areas might help influence the ways we further investigate the phenomenon of imagination. This speaks to my career's interdisciplinarity.


I've also been motivated and interested to think about how my work gets out into the world. A lot of it has been through my work as an artist. I have created exhibitions that were shown internationally—primarily at Contemporary Art museums. But I've also been working to spin technologies out to create companies that have a different way in which the work engages the world. So, I am always interested in these opportunities that bring together and bridge intellectual disciplines, but also cultural realms—like the space between academic research and cultural production, or new kinds of business ventures.

About three years ago, I was looking to transition my career and was recruited by a new institute, the Creative Computing Institute (at the University of the Arts London). It was going to take a lot of the things I had been doing at UCSD, and utilize the best pieces of that to build a new platform for this new research institute. So, I moved to London to help start this Institute, and, at the same time, had a virtual world company, Virbela, that had spun out of my lab. There seemed to be a pretty good balance between starting the new research institute and working with this new virtual world company, Virbela, and then the Covid thing hit.


When Covid hit, a few things happened. The academic environment got put into a difficult situation, which every university in the world ran into. Building out the Institute's facility got put on indefinite hold. In the meantime, Virbela was this virtual platform created to allow people from all over the world to come together in this metaphor of physical space, to collaborate with people through avatars, and to have things that range from meetings or presentations with thousands of people, to small group interactions between a few people at a time. This seemed to be one of the kinds of tools, like Zoom, that was well positioned to help the world adapt to a Covid/lockdown state of existence.

As the University was moving into slowdown mode, this company was exploding in terms of growth. I took a leave from the Institute and joined Virbela full time to help guide its next phase of development as the V.P. of Product Design and Innovation.


The idea with Virbela is that we have a set of virtual-world technologies we package and license to different companies, based on the types of things that they're trying to do and the scale that they're doing them at. We focus on enterprise companies, event companies, and the education market. Every client gets their own semi-customized virtual world platform to engage with, and we have hundreds of these out there that are licensed to different people. Improvements and feedback from different clients feed into the entire base platform, so it's a continuously improving environment. We're about to launch the next version, at the end of Q2, beginning of Q3, that will have an entirely different look and feel to it, and operate at a bigger and bigger scale.

“There are more people
developing virtual reality
than using virtual reality.”


There are a number of companies out there right now that are doing aspects of this. We have some things that we think are differentiators for our platform, including the scale that we're able to support within the virtual worlds, and the kinds of affordances we give people, which are focused, first and foremost, on interaction, engagement, and collaboration. So, this is a virtual-world platform; it's not virtual reality. Although we do have a virtual reality interface that you can use with it, we are a virtual world first, as opposed to virtual reality first. There are a lot of competitors who support virtual reality as the first step. We always saw that as our second step. One of our inside jokes is that there are more people developing virtual reality than using virtual reality.

A virtual world platform uses just a desktop as an interface, so there's a very low bar of adoption to get people to interact and engage with it. And we try to make it a very simple, easy-to-use environment because our purpose is not to get people to try to play a video game or do something extraordinarily complicated. The extraordinarily complicated thing we want people to do is their daily work activities. And we want the environment to be as easy as possible to support that daily work activity.


There are different kinds of environments for different clients. People can't run large conferences or exhibitions in person right now, so we have expo halls that clients will rent, and then they will rent out booths to different exhibitors for different things. Right as we speak, there's a huge boating conference going on in France with 30,000 attendees. These what-seem-to-be-unusual use cases are just, basically, ways to port real-world activities into these environments.

[Slide: Laval Virtual, a virtual events operator: "Find environments and virtual spaces adapted to your professional events. As close as possible to the real world to facilitate your meetings and encourage remote collaboration."]

We spun this company out, about five years ago, from the University and started working with a number of single clients, slowly building up. One of our early clients was a real estate company called eXp. They were trying to become a national real estate company. They had a few hundred agents scattered around the country when they started using us. One of their concepts was that they were going to try to have a national real estate company, but not own any real estate themselves. They wanted to do it without investing in brick and mortar, so they started to explore what virtual platforms might be out there, and they found us and started using us. In five years, they grew from 200 agents to, now, 55,000 agents. And they went from operating in the U.S. to now operating in 14 countries. They went from almost nothing to, now, $4 billion. They acquired us about three years ago—as we became kind of critical to their company—and helped fuel our growth. So, we are privately held within eXp World Holdings, and they've helped us continue to invest in the platform, grow, and expand our use cases.

[Slide: Client: Singularity University]
We have several hundred other clients. [Difficulty playing videos embedded in PowerPoint presentation]


I wanted to give a little bit of a dive into that company (eXp Realty) because I think that company represents the outcome of this body of artistic work that I've been doing for twenty-five years, which explored the relationship between digital spatial experiences, what the digital realm offered, and our expectations of space itself. We've evolved as embodied spatial creatures over the hundreds of thousands, back to millions, of years of the evolution of who we are as humans. So much of our cognitive capacities are about us existing within bodies within space. When we finally got to develop these realms of virtual reality and virtual worlds, it was really the first time we were able to bring the accelerated capacities of electronic media to the ways in which we might structure spatial experiences. With Virbela, it's critical that we're using this innate, cognitive, social, spatial capacity as the way in which you are interacting in this space. It's a very different kind of way of creating social environments than what we're doing right now with Zoom. And, for the moment, it doesn't have expressive facial features or the ability to use my hands in the way that a video chat has, but those are things we are increasingly moving towards. Our next generation of avatars will have facial expressions and track your voice, and other kinds of things.


Virbela is the outcome of the investigation that I've been doing as an artist for many years. And it's also exploring some other dimensions of what this digital realm, in relation to human development, is enabling us to do. And that other kind of realm is around artificial intelligence. Another thread that's been parallel with my exploration of virtual space and its relation to human cognition has been the subject of cognition itself.

As I described in the beginning, it was part of the motivation to undertake the study of human imagination very deliberately, and to try to devise what I think is a theory of human imagination that allows us to think about our own perceptions and how our sense of reality is underpinned by our perceptions, our memory, and our imaginations.

Anytime we think of what the 'state of the real' is, it's because we're making a connection between what we remember, what we anticipate, and what we sense in the moment. If we use that as a nexus, we can then start to go down each of those pathways and think about: What is the phenomenon of memory? How does the personal connect to the social, to the historical? How do our forms of culture encapsulate that and present that? What is the basis of our perceptions? How does the world reveal itself to us? What are the limits and extent of our perceptual apparatus? And then, what is the basis of anticipation? How do we anticipate how things go from one moment to the next? How do we anticipate how things go from the near term to the short term to the long term? And again, how might there be neurological underpinnings to these phenomena? And how do these become the basis of cultural scaffolds that allow us to build upon those and extend our conceptions and our imaginations in ways that are incredibly remarkable, and for which we really don't have other good analogs? And then, as we do this, how do we start to consider places in which we might create more overt interventions in that schema? I would look at something like artificial intelligence as one of these interventions. [unable to play video]


Now, Sheldon said, starts “the meat of the talk” . . .

This talk, “THE FICTION SCIENCE OF AI,” looks at AI as a combination of sets of technologies but also a cultural perspective. It is the outcome of work done at Google Brain.  The title refers to how the terms “Fiction” and “Science” modify each other and how their various combinations have certain kinds of meanings. And it explores this specifically in relationship to the development of AI.  The terms fiction and science have different implications in human cultural activity. Science characterizes the means and methods by which we try to discover universal truths. Fiction has a much longer history in humanity than science does. Fiction has been the means by which we’ve explored the complexities of the human experience, to try to create coherent wholes out of disparate parts, and to help us be better able to consider our actions and the meaning of the world around us and our possible fate within it.


Fiction and Science seem to describe aspects of understanding that are orthogonal to each other: Science as the pursuit of truth, and Fiction as the pursuit of meaning, and how truth and meaning are each characterized. One area in which we've got a fairly well known conjunction of these terms is when we modify Science with Fiction to get Science Fiction. We're all generally aware that this describes a genre of Fiction that uses concepts that have a basis in Science and technology as narrative provocations. This genre of fiction has been around for, arguably, 150 years. It has developed a very rich and complex set of works and has become increasingly important in enabling broader awareness of the ways in which technology and science are rapidly developing and transforming our human condition. But can we flip these terms and get Fiction Science? Is that a useful cultural concept? What might the term Fiction Science imply and mean?


A few different meanings of Fiction Science are possible. One is a scientific study of Fiction: what Fiction is; how it operates cognitively, socially, and evolutionarily; some cross between neuroscience, cognitive science, psychology, sociology, history, anthropology, and literary narrative studies. There is work out there being done in this way, but it doesn't use the term "Fiction Science." It's called things like Cognitive Narrative Studies or Cultural Anthropology, which sound more academic and more specific, and are less likely to be mistaken for some of the more pejorative readings of the term Fiction Science. Fiction Science could be something in which the tropes of science are used to give activities that are not scientific at all some sort of authority about truth that the term Science conveys. Things like, maybe, Faith Healing or Astrology might be considered kinds of fictional sciences, or Fiction Science. Or the term might be used to imply that Science is, itself, in part fictional, and therefore has a more complicated relationship to the truth than we typically acknowledge. And even that reading could be used in a couple of ways, one of which is a more philosophically driven exploration of the nature and limits of knowledge. But it could also be used to cynically undermine scientific findings that are politically inconvenient to other agendas.

So, those are, maybe, some downsides to the term "Fiction Science." But I am going to try to use this as a framework to think about the phenomenon we call Artificial Intelligence: that it is an ongoing development of both Science and Fiction; that Artificial Intelligence belongs to both, and its pursuit is the result of a productive interaction between those domains. And if we consider it from both its state as fiction and its state as science, it might further help us clarify the meaning of being human as we explore underlying truths about knowledge and computation, and try to connect those to the meaning of what it is to be human.

The concept of Artificial Intelligence comes from its speculation in Science Fiction that precedes its technological implementations and the pursuit of its underlying science. And that these speculations have shaped the aspirations for what AI might be able to become, or avoid becoming. And while our musings on AI have been exploring highly developed AI systems for some time, the underlying technologies have recently been undergoing some significant developments that are provoking new widespread consideration of how AI technologies can and should be developed. These questions are coming under increasing scrutiny and interest spurred on by its particular cultural history.  All of this is what I think positions AI as a kind of Fiction Science.  Recognizing AI in this way, I would hope, can allow us to better consider the further development of its technology and how it relates to better understanding human experience.  An aspect of this is to also help us better consider AI’s reification. How do we actually come to experience the outcomes of AI? How does AI have presence in the world?  And how does that factor into its agency?


There are three motifs I want us to keep in mind as we think about these questions around AI. I want us to think about: reciprocity, apparency, and transformation. These come from how we see AI, how AI sees us, and the transformational stakes that Artificial Intelligence provokes. So, we've been encountering AI for decades in our fictions, and much of it has been dystopian. AI is other than human; it's our offspring that rises up and dominates or eliminates its parents. It usurps the anthropocentric specialness of our human self-conscious awareness. But we've also had the opposite, where AI is a child-like, naïve, and helpful servant. So, we can have a bad AI that preys on our anxieties and our human tribalism, or a good AI that is sentimental, childish, and subordinate. But none of these fictional AIs were really created from much more than a very cursory basis in the underlying sciences and technologies of AI. It would be generous to say that they were an extrapolation of those, as they were quite underdeveloped in their relationship to their physical manifestations. That the bad AIs ring truer than the good ones is mostly due to their narrative roles, not their technological determinants; the narrative roles of these bad AIs have been doppelgangers of ourselves confronting existential dilemmas of right and wrong, purpose and meaning. And, to this point, it's been harder to write and represent what is good about good AI, beyond servitude and loyalty. I think it's because we're just coming to understand the value of the technologies that are developed under the banner of AI, or how these technologies embody, amplify, or distort the values of those that are creating them.

Fictional representations of AI have primed us to be both terribly excited and terribly worried about what it might become. Representations of these technologies in fictions have provided a myriad of thought experiments about possible outcomes, as well as characterized targets for future technological developments. So what, then, of these technologies of artificial intelligence that are under development?

[Slide: Sheldon Brown, "AI Under Development," University of Miami Institute for Data Science and Computing, Data Citizens: A Distinguished Lecture Series]


AI isn’t a single technology, but a set of computational methods that aim to give computational systems similar capabilities to how humans interact with our material and social world. It could be seen to begin with our ability to perceive the world, such as:

  • the ability to see and classify objects
  • the ability to read text
  • the ability to understand speech

From these perceptions, it proceeds to interpretations:

  • how to classify an image
  • deducing meaning in text
  • the translation of text across languages
  • plotting a course of movement through space

Those interpretations could then lead to the taking of action:

  • disconnected data could be linked in relevant ways to create meaningful text
  • autonomous agents and vehicles and robots could be controlled to actually move through those environments where courses have been plotted

There have been a variety of techniques developed to do these things. Recently, much of the excitement about breakthroughs in technologies under AI has come from the rapid development of computers' capability to, in part, program themselves for tasks that we weren't able to figure out how to specify more precisely.


Since the rise of computation, we've gone through periods of getting hot and cold and hot and cold and hot again about the pursuit of AI from the engineering perspective.

First, we thought that intelligence was exemplified by doing things that involve being clever with systems that had complicated sets of rules, like playing chess. And so, AI programming techniques constructed large graphs of how those rules might be followed (called "expert systems"), and we were able to get computers that did really well with those complicated activities and were able to beat chess grand masters. But then the method by which the computer accomplished that feat was examined and deemed to be not a very good example of artificial intelligence, because the method that was used was too artificial. The solution had nothing to do with how we think human intelligence works. And that critique, I think, leads to a realization that an aspect of the science of AI falls into at least two areas.

  1. The underlying computer science in these technologies
  2. The math involved in these techniques, and the way that it can be enacted through the development of new computational methods. And for that, the amount of artificiality involved, and whether or not it is actually a parallel to human intelligence, really doesn't matter.

[Slide: Neural Networks]

The other aspect of AI is thinking of AI as a method for us to try to understand the basis of intelligence itself as it exists in biological models, gaining insight into such areas as cognition, creativity, and consciousness. The methods of AI that are now in vogue are inspired by an understanding of neurology, taking the scientific methods of neurology together with computational models.

The idea of working with a model of a neuron as an underlying feature of computational artificial intelligence is actually a very old idea in computational theory, but has gained great currency with the developments of large-scale computational systems and large data repositories. So, here, with a neural network, a programmer can specify a basic structure of what a neural net might consist of, but the specific values are derived by the software itself, creating programs whose underlying details end up being unknown by the programmers.
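The point in the paragraph above, that the programmer specifies only the structure while the specific values are derived by the software itself, can be illustrated with a tiny neural network. What follows is a minimal, illustrative sketch in NumPy on the toy XOR task (the layer sizes, learning rate, and variable names are assumptions of mine for the sketch, not any particular system's code): the weights start random, and their final values are found by gradient descent rather than written by hand.

```python
import numpy as np

# The programmer fixes the structure: a 2-input, 4-hidden-unit,
# 1-output network with sigmoid activations. (The sizes are
# arbitrary choices for this sketch.)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: a classic task that no single linear layer can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass through the fixed structure.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backward pass: these updates, not the programmer, determine
    # the specific values the weights end up holding.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(f"mean squared error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

After training, W1 and W2 hold numbers that no one wrote, which is the sense in which the underlying details of such programs end up unknown to their programmers.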

With the ability to create connections between these expansive data sets, lots of high-fidelity sensors, and even increasingly effective actuators, along with increasing underlying computational capacity, these semi-automated systems are able to find increasingly viable, self-regulating relationships that seem to echo concepts that we have about the nature of intelligence itself.

So, we, now, have a different way of creating software with a different class of problems that software is enabling us to look at, and might be able to undertake, and these methods are being rapidly mobilized by governments and industry.

Top Acquirers of AI Startups 2012-2017

But our cinematic representations of AI have often reduced the complexities of this emergent condition to autonomous singularities, while our actual emerging AI condition comes about with the opportunities of our distributed, interconnected, widely differentiated, socio-cultural condition. So our fictional antecedents may have prepared us to face the wrong AI conditions. All that preparatory cultural work may have even, ironically, taken away our own human agency in the system, misleading us as to where we need to be attuned, to where the real action of AI is taking place.

The Building Blocks of Autonomy in company logos by category

So, as we develop AI, we need to further develop our literacy of what AI is; how to move well past those early cultural speculations on what AI supposedly was going to become, to better recognize what it actually is in its process of becoming. We need to develop a more sophisticated vocabulary of AI's cultural operation as it comes into being.

smart speakers (listeners)

Currently, these inscrutable objects [showing Amazon Echo and Google Home personal assistants/smart speakers] are some of the most widely used interfaces in mass culture to AI systems. We can trace the aesthetics of these objects and their underlying ideologies to some common points of origin. One of these is the visual vocabulary of Minimalism, which these objects utilize without taking on the intellectual stakes that were involved in Minimalist Art, which had much to do with the interrogation of material properties. Minimalist artworks were overt at putting those properties forward as the object of the artwork, as opposed to using them to refer to other subjects. So they are highly self-referential in their aesthetics and their methods.

Another antecedent was "Hal" [the AI computer in the 1968 movie "2001: A Space Odyssey"] as a distributed, stoic interface to an underlying intelligence. Hal's uninflected tone revealed little about his developing psychosis and provided us with a dry template for the current Alexas and Siris of our world. But with narrative stakes in play, [film director Stanley] Kubrick used an efficient aesthetic strategy to stage the Oedipal conflict between human and machine. This is primarily located in the eye: how each of these sees their world and how they are seen, either through the blue eyes of [the astronaut] "Dave" or the red eye of Hal.

This relates to an underlying value proposition that goes along with this AI, and it's about Dave transmuting to a new state of consciousness, becoming more human than human through his encounter with his upstart offspring. Hal wasn't so lucky in this process, but his mind did have an interior. And that was Hal's problem: Hal had a psychology. And so, in order to actually deal with Hal, we have to go into his mind and do deep, deep psychoanalytical work to turn it off.

2001: A Space Odyssey showing Dave and Hal


The Minimalist impulses of our current AI devices are a continuation of what has been a really profound technological development in our cultural moment, which the mobile phone encapsulates really well. We're in the process of dematerializing all these discrete, different types of devices [paper maps, record player, video camera, desktop computer, flip phone, camera]. All these activities used to be manifested in a device. We've dematerialized the need for those devices and collapsed all of those functions into a single, minimal form. The minimal form of this universal communication device plays a different role than that of our emerging AI. And I think that's because the AI phenomenon is one I would characterize as more emergent than convergent. And it has different compound stakes that need to engage us more completely than the cool reductiveness of Minimalist aesthetics. We will be better served by being as explicit about those things as we can be, even when, and maybe even particularly when, they get a little bit weird.

And so, if we embrace the weirdness of AI, we will find AI to be more engaging, more interesting, and it will be a more effective way of understanding the extent of agency that these systems have, what their level of autonomy is, their scope of engagements with people, the environment, and with other AIs. I also want to point out there's another basis that is important to the Minimalist aesthetics and the AI aesthetics that we currently have. Minimalism had its heyday as a precursor to 1960s Utopianism, which, still, has been this kind of bedrock ideology of much of the tech industry. Sixties Utopianism took ideas from Libertarianism and Anarchism (where disruption was a virtue), and valued the delegitimization of regulatory political states to serve the business interests of what could be pejoratively thought of as Ponzi Scheme stock valuations, fueled by the commoditization of users' personal data.

An interesting text for this was "The Whole Earth Catalog," a Bible of the moment. Its structure is quite interesting because it had a little bit of a "why"—a book of hundreds of pages had a little bit, a paragraph, on its purpose—and then hundreds of pages about "how." So, a little bit of "why" and a whole lot of "how." Alongside the aesthetics of Minimalism, 1960s vanguard aesthetics also had another thread, which was Pop Psychedelia. Psychedelia is a way to try to come to terms with internal mental states. And we've started to see aspects of that as an aesthetic of artificial intelligence as well, in things such as Google's "Deep Dream" and other generative neural network environments. So, Psychedelia gains us this uncanny view into human consciousness. Here we have a kind of representational form that expresses underlying processes of the way in which AI methods themselves are operating. It is interrogating aspects of the interior of a neural net, as opposed to the normative results of it.

Google Deep Dream


Understanding how AI makes meaning (its epistemology) is going to be essential to our effective relationship to it. The visual modality, or the articulation, of AI needs to be bi-directional: how we see it and how it sees us needs to develop a shared vocabulary. We need to sense the "time to reflect reality," for instance, which is the metric of the time lag between the world as it is and the world as it's known by machines. That way, we start to get a better sense of what AI systems actually think the world is. We need to be able to sense and understand that.

[Slide: AI epistemology]

As we figure out how to live with AI, we must recognize that it's also going to be in a rapid state of transformation. And we're going to have to be ready to continuously let go of our understanding of today's AIs as tomorrow's AIs come into being. So, we're going to have to be more intellectually and culturally nimble than we've ever had to be in the course of human history. We need another way to think very abstractly about AI and its developments. And that is to try to reflect on how we want AI to continue to develop, and how we can use AI's status as a Fiction Science to help think through those developments.

I want to consider AI in a Utopian framework. As I mentioned earlier, part of the Utopian ideology that underpins the tech industry has come from a California Utopianism that idealizes Libertarianism and immortality. But I think that Utopianism has a bit of a problem: as a Professor, I would say that that Utopian movement didn’t finish reading the book. The operation of Utopian narratives as thought experiments has taken on increasing sophistication and complexity in fields like science fiction, which show them not to be singular answers to complicated questions, but a way to put the conundrums of a condition on the table to consider in total.


What I want to try to think about is: what can the value propositions of AI really be? And how might considering it from the point of view of value propositions start to develop a program of representations that would reinforce and guide us toward what we need to see, and how AI should, or could, be seen?

History of Utopian Literature in Book Covers

With more time, we would go through a history of Utopian literature here to try to tease out some themes, but I would say, at a very high level, that Utopian literature often comes around when there are really significant transformations across our socio-cultural reality. There are times when we recognize that these conditions, created by us, might be driven to a more desirable outcome if we are able to think deeply and comprehensively enough through them. So, Utopian texts articulate their underlying values, the ways in which their systems are structured, and how our social interactions within them might play out. Right now, I think we have both the opportunity and the need to drive forward a program of AI Utopianism: the opportunity in the technological potential and the re-framing of our ontological condition, and the need in the variety of crises we face and the requirement that we devise methods, and the literacy, to understand those conditions.

OPPORTUNITY AND NEED TO DRIVE AI UTOPIANISM—Moore’s Law gives way to Murphy’s Law

What I fear is that we are coming to an era of Murphy’s Law. While we might be taken out by a meteor or zapped by some new pathogen (or an old one, if we stop getting vaccinated), we are probably going to be mostly working to ameliorate the effects of human activity by attempting better human activity. Whether this is global warming and all of its impacts, or an ecosystem that falters under micro-plastic ubiquity, or the fear of artificial intelligence opaquely controlling our systems and, ultimately, us—we need to think about AI as a means to help us avoid our end.

As a proposal, I would say a starting point for our AI Utopia is to think about it from a cogno-positive perspective: how can AI be used to maximize our collective cognition, individually and socially, human first and non-human second? Probably the biggest impact AI systems could have is simply to help us with the very basic aspects of being human across the globe. This is the foundation for addressing the resource inequality, environmental degradation, and false-narrative issues that are really going to do us in, all of which take us further away from conditions of peace, well-being, and understanding. We should recognize that progress is not justice in and of itself; that to be just, progress must move toward just ends; and that the just ends of progress mean moving toward satisfying the vital needs of everybody.


We need AI to help us counter Neo-Tribalism. As barriers of self, nation, geography, temporality, communication, and sexuality dissolve away, we have definitely seen a growing neo-Tribalist reversion that is trying to counter these transformations: belief circles that take advantage of the rapid transformation of the social to paint their own pictures of reality, and reject that we live in a shared, objective, interdependent reality. We also need to recognize that the institutions of our post-Enlightenment era are either not up to the task of this new global condition, or that we’ve taken them so much for granted that we’ve forgotten how aspirational, fragile, human-made, and improbable these houses of cards are, and that they need to be continuously attended to. Underpinning this is universal education that strives to be equitable and meritocratic, systems of law that have applicability to all, public discourse in which truth and facts matter, scientific evidence that validates effect, and common governance that promotes widespread social good. When we think about the founding ideals of the Enlightenment, the “Dare to Know” sounds almost shockingly provocative in relation to much of today’s popular and intellectual discourses.

Those are the kinds of human areas that AI can help us optimize. None of that actually had to do with computational methods in and of themselves, but with the values in relation to which our computational methods, as they develop, can be pursued. As we wrestle with this problem of recognizing and understanding AI, we would also be well served to take note of how much sophisticated cognition we are surrounded by that we don’t understand well at all, even though we’ve evolved right alongside all of these other systems and likely have far more in common with any of them than we will with a computer system anytime in the foreseeable future.


Even when we acknowledge this, we’ve been mostly stymied in trying to understand and engage these things as cognitive phenomena, except in pretty trivial ways, which are most often about how these creatures are able to engage the world on our terms. So an octopus can open a jar; we think that’s obviously very intelligent. A dolphin can carry sensors and weapons to ships; we recognize that as a high degree of intelligence. Chimpanzees or apes who are able to operate computer interfaces are obviously intelligent. But there’s lots of other phenomena out there that won’t ‘play’. So that’s unfortunate.

For instance, this is a white blood cell chasing a virus through the bloodstream. In this video, you can see the white blood cell navigate a complex environment. It has goal-oriented activity. It’s very clever and savvy about where it can go, and it exhibits things that we would really recognize as sophisticated decision-making processes. But, of course, the white blood cell doesn’t have even a single neuron. Or consider colonies of bacteria that are able to move in what appear to be very complex social-organizational structures. Yet again, entities without brains seem to exhibit features that, if a robot or a swarm of robots were doing them, we would characterize as forms of intelligence.


I would say that one thing we should do in thinking about our AI design systems going forward is recognize that we are designing complex ecosystems. Our AI systems will have relationships with other AI systems and non-AI systems, and how we make these apparent and understand them will continue to stretch our ability to consider what entities are, what systems are, where things begin, and where things end. There is an ongoing expansion of our human conception of what constitutes the real, and I think art has played a really critical role in this: it gives us a means of engaging what is, initially, a kind of cognitive estrangement. Art harnesses this cognitive estrangement to expand our human understanding.

From this, what we need is a new law that takes us beyond Moore’s Law, helps us avoid much of Murphy’s Law, and is aimed at accelerating our imagination and insight, binding us to the expansion of human consciousness so that we can better understand effect and impact at radical new scales. That’s what’s required here as we develop these systems. So, I’m going to borrow Arthur C. Clarke’s Second Law, which is . . .

“The only way to discover the
limits of the possible is to
venture a little way past them
into the impossible.”



And that’s what I think is the important work that the Arts do for us: finding our limits in how we think and understand, and then moving us past those limitations. Through Art, I think we learn how to deal more directly with human consciousness as both our interface and its operation through its content. As our understanding of our own means of cognition peels back more and more layers of our own onion, we’re getting a better appreciation for the diversity of human experience and the experience of other members of our biosphere, so that we can better see that everything is a part of everything, and that we have to act, increasingly, in ways that nurture the myriad interconnectedness of our global condition: de-centering our short-term interests for long-term global interests.


From the Arts we then move this across all other aspects of our cultural activity, our sciences, our politics, our economics. And for this, I think it gives us a couple of paths forward for AI:

  1. AI continues to be invisibly propagated through our lives. Its invisibility is achieved through means like camouflage, hijacking, or extra-perceptual phenomena.
  2. Or another is it’s manifested through these inscrutable, Minimalist objects, like the Alexas of the world, where they have some discrete localization that indicates some aspect of their AI-ness, maybe we can touch it, or read it, or hear it. Maybe it can do some of those things to us, but it’s just a small marker to the much larger phenomena. Its physicality might be related to its purpose or it might not be. It might just be symbolic.
  3. Or, another narrative trope, we can just put googly eyes on everything, and use cuteness as a way to have AI more nefariously infiltrate our lives.
  4. But I would think of another way: what I would characterize here as “mutually assured enchantment.” We come together with AI because we are fascinated by each other. We pluck each other’s cognitive estrangement strings.


We and AI will be part of each other, not as product and consumer, but as aggregate. We’re going to be uncomfortable when we are disengaged from AI: cognitively naked, vulnerable, and less capable. A few years back, a friend of mine, the science fiction author Vernor Vinge, suggested in conversation that our technologies are changing so fast that, in just a few decades, the humans of that time will think of the humans of our time as having the cognitive abilities of goldfish, because of our entwinement with AI.

In our Utopian AI, our notions of self will change from the Enlightenment-era individual to a recognition that we are a dynamic expression of many closely-coupled systems: social, cultural, ecological, and technological. Our methods will help us articulate this interconnectedness and be the basis for our ongoing pursuit of happiness and the dare to know. The Fiction Science of AI will develop an increasingly tight coupling between speculation and actualization, until the two are no longer even distinguishable from each other. That’s a threshold we may have already crossed, and we are just coming to terms with the understanding that we are now on the other side.


Q & A starts at 1:08:30 . . .


About Data Citizens

Data Citizens: A Distinguished Lecture Series is an ongoing course of in-depth talks by experts in the field of data science on a wide variety of topics including data visualization, big data, artificial intelligence, and predictive analytics. The series is co-sponsored by the Miami Clinical and Translational Science Institute (CTSI).