Perhaps there needs to be more romance in the world of Artificial Intelligence? For a while my thoughts have drifted back to good old ELIZA. What does she think of all this stuff? Most likely she would not say, but would deflect back with more questions.

I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Joseph Weizenbaum in Computer Power and Human Reason (1976)

What he describes has been called the ELIZA Effect, “the tendency to unconsciously assume computer behaviors are analogous to human behaviors.”

ELIZA, Meet ChatGPT

I solicited Doc Brown to drive the DeLorean back to 1964, but he was unavailable…

You can find many web-based implementations; my favorite is E.L.I.Z.A. Talking, where you can chat with her as if on a machine of her 1964-1966 era, and even speak to her.

Talk to and hear back from ELIZA https://www.masswerk.at/eliza/

So I had to play intermediary, setting them up on a blind date and shuffling questions and responses between the two of them.

ChatGPT screen with my opening prompt “You have shown up for a blind date with someone you only know as ELIZA. When you meet her, what will you say as an introduction?”

To set the scene, I asked Stable Diffusion Online to create some visuals (I must be terrible at this, because none of them were what I had in mind; maybe the first one, top left, is close?) using the prompt “an older woman and a younger bearded man seated at a restaurant table on a date, photorealistic.”

Pick your own scene for this blind date:

Four images generated by Stable Diffusion, each as awkward as this date must be.

I had to intervene a few times, but let’s listen in to this blind date:

To create this I just shuttled the responses back and forth between the two interfaces (ChatGPT sometimes needed me to nudge him; was he nervous?).
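(If you wanted to automate the matchmaking rather than copy-pasting between browser tabs, the relay loop itself is trivial. Here is a minimal Python sketch; ask_eliza and ask_chatgpt are hypothetical stand-ins for whatever client code would reach each bot, since neither interface I used offers such a thing.)

```python
# A sketch of the blind date relay I ran by hand. ask_eliza() and
# ask_chatgpt() are hypothetical placeholders; in reality I copy-pasted
# lines between the masswerk.at page and the ChatGPT web interface.

def ask_eliza(message: str) -> str:
    """Hypothetical stand-in for sending a line to ELIZA."""
    raise NotImplementedError

def ask_chatgpt(message: str) -> str:
    """Hypothetical stand-in for sending a line to ChatGPT."""
    raise NotImplementedError

def blind_date(turns: int = 10) -> None:
    # ChatGPT opens, per my prompt; each reply gets shuttled to the other.
    line = ask_chatgpt("You have shown up for a blind date with someone "
                       "you only know as ELIZA. When you meet her, what "
                       "will you say as an introduction?")
    for _ in range(turns):
        print("ChatGPT:", line)
        line = ask_eliza(line)
        print("ELIZA:", line)
        line = ask_chatgpt(line)
```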

I turned to another AI service, Murf.ai, to render the audio. I am not quite sure where the AI comes in versus the standard text-to-speech capabilities I have used before (the voice from https://www.masswerk.at/eliza/ is much better).

But shrug, if you put AI in the name, who really knows what it does?

You should be able to “Murf” it yourself via https://murf.ai/share/lebr6p6e

The Murf.ai interface for rendering the conversation as audio. The voices are Ethan saying my lines, Claire saying ELIZA’s lines, and Ryan doing ChatGPT.

I was not going to pay the download fee, so I took the lazy route of playing the audio from the Murf site and recording it in Audacity on a second laptop.

Back to the Present

I did this exchange a week or two ago but it lingered as files. I was inspired a bit last night after Cori and I went to the classic Mae Wilson Theater in Moose Jaw to travel back in movie time to 1985.

Back to 1985 on the Big Screen
Back to 1985 on the Big Screen flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0)

So this blind date really was some time travel!

It’s also about as much as I feel like doing amongst the tidal wave of fervor over AI… much like ten years ago, when I turned to mocking a hyped edtech trend, here I start a new round of mocking.

Again, turning the time machine back to ELIZA’s creator, from the Wikipedia article on the ELIZA Effect:

Joseph Weizenbaum considered that not every part of the human thought could be reduced to logical formalisms and that “there are some acts of thought that ought to be attempted only by humans”.  He also observed that we develop emotional involvement with machines if we interact with them as humans. When chatbots are anthropomorphized, they tend to portray gendered features as a way through which we establish relationships with the technology. “Gender stereotypes are instrumentalised to manage our relationship with chatbots” when human behavior is programmed into machines.

https://en.wikipedia.org/wiki/ELIZA_effect

How can we not interact with machines as humans when they are seemingly talking to us? Much of what people are saying seems to leap ahead of the reality that what they are seeing is really an approximation of conversation, produced by [quickly] autocompleting ahead. Of course the references ChatGPT names are not real, because it is not doing research; it is just generating something that statistically approximates the appearance of a relevant reference.
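To make “autocompleting ahead” concrete, here is a toy bigram model in Python. It is a deliberately crude sketch (the tiny corpus is made up, and it bears no resemblance to the transformer actually under ChatGPT’s hood): it only tallies which word tends to follow which, then generates by repeatedly sampling a statistically likely next word. No facts, no research, just plausible-looking sequence.

```python
import random
from collections import defaultdict

# Toy bigram "language model": a vastly simplified sketch of generating
# text by autocompleting ahead. The corpus is made up for illustration.
corpus = ("eliza asked how i feel . i feel that machines talk . "
          "machines talk like people talk .").split()

# Tally which word follows which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def autocomplete(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # sample a likely next word
    return " ".join(out)

print(autocomplete("machines"))  # e.g. "machines talk like people talk ."
```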

This ties to Maha Bali’s criticism of any communication that opens with “I asked ChatGPT to…” as really an abdication of care, not even an automation of it. Because again, this black box, for which we have no intuition or understanding of how it produces words (cue the hypocritical name of the company), does not automate or even generate care.

It is just generating something that statistically approximates the appearance of care. Humans should know the difference.

My main use of ChatGPT is to… make fun of ChatGPT. Can you help me, Eliza?

Asking ELIZA for help with my ChatGPT obsession (text below)
ELIZA: Please tell me what’s been bothering you.                                
YOU:   I just want to make fun of ChatGPT                                       
ELIZA: Do you say you just want to make fun of chatgpt for some special reason ?
YOU:   Yes, because so many people seem to take it seriously.                   
ELIZA: You are sure.                                                            
YOU:   Yes, what do you think?                                                  
ELIZA: I see.                                                                   
YOU:   Do you?                                                                  
ELIZA: What are your feelings now ?                                             
YOU:   I want to go outside and roll in the snow.                               
ELIZA: What would getting to go outside and roll in the snow mean to you ?   
https://www.masswerk.at/eliza/
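For contrast, ELIZA’s whole trick is keyword rules plus pronoun reflection. Here is a toy Python reconstruction of that mechanism (my own sketch, not Weizenbaum’s actual 1966 script), with just enough rules to reproduce the moves in the transcript above:

```python
import re

# A toy ELIZA-style responder: a reconstruction of the mechanism, not
# Weizenbaum's original script. Match a keyword pattern, reflect
# pronouns, and fold the captured text into a canned question.

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (re.compile(r"i want (.*)", re.I), "What would getting {0} mean to you ?"),
    (re.compile(r"i just want (.*)", re.I),
     "Do you say you just want {0} for some special reason ?"),
    (re.compile(r"yes(.*)", re.I), "You are sure."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(line: str) -> str:
    line = line.strip().rstrip(".!?")
    for pattern, template in RULES:
        m = pattern.match(line)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me what's been bothering you."  # default deflection

print(eliza("I want to go outside and roll in the snow."))
# -> What would getting to go outside and roll in the snow mean to you ?
```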

Next up on AImocking… who wants to guess?


Featured Image: Made by me combining in Photoshop a screenshot of Talking Eliza, another screenshot of ChatGPT, text generated at Back to the Future Font Meme, and one of the Stable Diffusion Online generated images for “an older woman and a younger bearded man seated at a restaurant table on a date, photorealistic.” All kinds of re-use grayzone here, but if I can assert a license it is CC BY.

In text styled like the movie’s, it reads “Back to the AI Past” over two superimposed images of the Talking With Eliza web site and ChatGPT. In the middle is a Stable Diffusion image of an old couple on a blind date.

Comments

  1. This has sparked a thought about anthropomorphism. Is it a function introduced by narration? That is, do we really think we’re dealing with sentience when we’re alone with AI, or does the anthropomorphic inclination occur as a result of telling the story of what we did to a third party?

    Is that when we allocate agency to the AI as a property of our languaging?

    (Side note: autocorrect changed this to languishing. It’s so easy to infer agency in “changed” because our language brains go deep into our species’ history of sorting out the world by hierarchies of sentience.)

    1. I have only conjecture and my gut. To me it’s what occurs in our own heads/minds/bodies in the AI interaction, like a bit of a suspension of disbelief. Even as crude and obviously non-human as ELIZA plays out, the degree to which she creates annoyance (not that she really does) makes her a bit more personified to me.

      And hardly relevant, but I think back to the weird years of Second Life and how people’s fear of their avatar accidentally disrobing in public ranged from seriously embarrassing to innately worrisome. The fear was real, the nakedness not. We did some work with a person designing their virtual land, and they were very clear that it could not be located near virtual ponds that might have virtual snakes. To me that always positively indicated how the experience left the screen.

      I can see similar blurrings in corresponding with chatbots; imagine when the interaction is via voice?
