As we must do in 2023, a disclaimer comes first: none of the imagery or writing here was generated by any large language model or statistical data system. It's all me, with my typo-prone keyboarding and propensity to make up titles.

Yet, I am tempted to consult my conjured expert ChadGPT because I operate by natural sarcastitelligence…

I have instructed Chad to remain hushed. But imagine him scowling dismissively at me as I declare that I have been poking at AI actively since June, in an experiment using the OEG Connect space as a pseudo mini-blogging space. My opening premise was that while the machinery of it all was likely beyond my skill grade, I could do more than repost articles in social media; by publicly dabbling in the stuff, maybe I could approach some kind of informed intuition:

But I do know I can better come to my own understanding if I can do more than read papers and blog posts, but actually dabble in a technology. 

https://connect.oeglobal.org/t/understanding-doing-some-ai/4013

The early stuff was the first public DALL-E image generators I found, which morphed into Craiyon, all of which months later look really crude compared to the latest DALL-E and pals like Midjourney and more.

My strand ran long and several more topics spun out. While many people were caught up in the (important) issues of what content was used for training and the unclear copyright implications, I honed in on the question of how the heck we reuse/attribute the imagery the magic boxes spit out.

And all of this seems quaint given the intense furor, worry, soul-wrenching (cheatophobia?), and hype-soaked conjectures over the implications of this stuff.

Here I go with two Vennish overlaps.

Accept Its Inevitability?

Definitely the AI bro-ponents, but also critics, are proclaiming the far reaches of AI as a given: that it will replace artists, teachers, Google. It's unavoidable! You cannot hide! Glorious (for whom?).

I proudly accept the label of a whistler in the wind for doubting that AI is as inevitable as the inventions of agriculture and industry. Do we once again have to pull out the long scroll of supposed disruptive technologies?

Yet I am not on the doom and gloom train either. I find a bit of fascination over things I have seen pop out of the AI toasters; the feat that it can be done at all is impressive. Witness This Voice Does Not Exist, Riffusion, Victorian-Era People Who Never Existed, Flawless, Elicit, Consensus, Never Heard Before Sounds. If you are solely focusing on ChatGPT as the essay-writing cheat machine, you are nibbling at the corner of the whole AIchillada.

Plus, in a weird way too, I have to be impressed that the poop writing that emerges from the magic GPT ball has gotten so many undergarments in a wad.

It's significant… But is it inevitable? All-encompassing? Coming to steal your job? Meh, shrug, I do not believe in the bogeyman.

The Obelisk’s Opaqueness

But the thing I find more perplexing is how fuzzy and weaselly the wording gets when a product or pundit attempts to describe how it works. After the buzz splash description of GALL-E or whatever is next, or when someone is raving about the new AI-enabled Toaster Oven, the "how it works" or "what it does" just about evaporates.

I resorted to asking one machine (The Google) for the classic "explain AI to me like I am five," which produced lots of descriptions of it all by people who apparently have never conversed with a five-year-old human (witness one that used the expression "tabula rasa" (WTF?)).

Sure, we can pull out the definitions of Large Language Models or Neural Networks and describe AI as processes that operate in a fashion like the brain, but my slow-burning question is: exactly what happens after you type your prompt into the box and ****** is returned? As far as I can tell there is just some smoke, a jingle-like sound effect, then…

God of War magic GIF by Santa Monica Studio (via GIPHY)

No, I am not expecting myself or any mortal to fully understand the computer science and mathematics of these models. But what I am left with is that we, as users/subjects of this stuff, have absolutely no comprehensible mental model of what it does. We just wipe the glitter off our faces and send another prompt to the machine. Without any kind of internal intuition, our only source of understanding is our statistically insignificant experiences of typing something into a box, turning the crank, and seeing what pops out. And then we come to conclusions based on either the crap we get or the stuff that is actually tenable.

There is a long list of things being written by educators…

Among them are some really good comments from people like Derek Bruff on Three Things to Know About AI and Teaching. His statement has me nodding like a big dog:

This interchange points to at least one important observation about AI and teaching: We are going to have to start teaching our students how AI generation tools work.

https://derekbruff.org/?p=3970

But what Derek describes is not really teaching how the tools work but giving some hands-on experience of seeing what they do. We are teaching Promptism, as described by Seb Chan:

My social media feeds overflowed with DALL-E and Midjourney 'promptism' visuals. The coining of 'promptism' by others nicely deflects attention from the underlying technologies to the craft of finding the right language (the prompt) to 'talk to the machine'. It's a bit like those people who seem to have a magical ability to craft 'the perfect Google search query' but aren't trained librarians and have really just done a bit of SEO work in their past.

Seb Chan's Fresh and New newsletter: Generative things and choosing the right words

Think about Google search. Its PageRank algorithm is secretly hidden under a floorboard in Sergey Brin's bedroom, but we have some idea that Google has visited a vast number of websites, read them all, indexed them into some kind of database, and matches our input words on some relevance of word frequency, with scores based upon some factor of how many other sites link to a page. That may not be exactly correct, but as a mental model it helps me at least frame expectations and a sense of what my results mean.
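To make that toy mental model concrete, here is a back-of-napkin sketch in Python. To be clear, this is my own made-up illustration, nothing like Google's actual code, and every page, link, and number in it is invented: count how often the query words appear on a page, then boost pages that more other pages link to.

```python
# A toy sketch of the search mental model above: match query words by
# frequency, then boost pages that more other pages link to.
# Not the real PageRank, just the back-of-napkin version; all data is made up.

pages = {
    "dog-blog": "dogs barking at dogs on the internet",
    "ai-hype": "ai will replace artists teachers and google",
    "quiet-corner": "a page about gardening and quiet",
}

# Hypothetical link graph: each page and the pages it links out to.
links_out = {
    "dog-blog": ["ai-hype"],
    "ai-hype": [],
    "quiet-corner": ["dog-blog", "ai-hype"],
}

def inbound_links(page):
    """Count how many other pages link to this one."""
    return sum(page in targets for targets in links_out.values())

def score(page, query):
    """Word-frequency relevance, boosted by inbound link count."""
    words = pages[page].split()
    relevance = sum(words.count(term) for term in query.lower().split())
    return relevance * (1 + inbound_links(page))

query = "dogs"
for page in sorted(pages, key=lambda p: score(p, query), reverse=True):
    print(page, score(page, query))
```

Wildly simpler than the real thing, of course, but it is enough of a schematic to frame why one result might outrank another.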

Or for another analogy, go sit behind the wheel of an automobile. I am no mechanic, but my experience of driving gives me a model: I fill the tank with an explosive liquid, which gets energized when I put a key in (tapping into electricity in a battery), and under the hood this creates explosions that move engine parts up and down, motion that is transferred to the axles of my wheels.

I do not need to be a mechanic to at least have some internal understanding of what makes my car go (or not go). Is it critical to know this? Maybe not, but I think some minimal model of what an engine does is helpful to operating a vehicle.

But with artificial intelligence, are we to be productive just by becoming the feeders of prompts to a machine without even a guess at what it is doing with that prompt?

A lot of people are caught up in the key question of how the training of AI is done and how the stuff that gets hoovered into the machine influences what pops out.

To me, we are getting a bit too distracted by the candy sliding out of the bottom of the AI machine while not having any kind of understanding, even a schematic one, of what goes on behind the front of the machine.

Unlike others, I will not act like I have seen every YouTube video or read every single article, and I am banking on there being some resources out there that can help a non-data-scientist have a more informed picture of what is behind the magic wand.

And this, my human eyed readers, is what bothers me. Are we just going to accept what pops out of these opaque obelisks without asking what goes on inside?

Updates:

As the train is moving fast, I will occasionally add some relevant/interesting-to-me resources here.

  • Digital Detox 2023 (Thompson Rivers University) Thoughtful series of posts and discussions led by Brenna Clarke Gray
  • OLDaily (Jan 10, 2023) This post got Downsed! As usual, Stephen is strongly opinionated but was nice to me. Most useful was his shared mental model of how AI works, and that makes sense to me.
  • Ten Facts About ChatGPT (Contact North) also from OLDaily, a useful summary with helpful explanatory stuff; I might have to readjust my post after this!

Featured Image:

47 Public Art Monolith Installation11092010 flickr photo by City of Wylie shared under a Creative Commons (BY-NC) license

If this kind of stuff has value, please support me by tossing a one-time PayPal kibble or monthly on Patreon.
An early 90s builder of web stuff and blogging, Alan Levine barks at CogDogBlog.com on web storytelling (#ds106 #4life), photography, bending WordPress, and serendipity in the infinite internet river. He thinks it's weird to write about himself in the third person. And he is 100% into the Fediverse (or tells himself so), tooting as @cogdog@cosocial.ca.

Comments

  1. I got Downsed!

    https://www.downes.ca/post/74733

    Thanks Stephen especially for the details of your model:

    If one were to ask me whether I have a mental model of what’s going on inside an AI, I would say that I think I do. Think of a Rorschach test or a word association test. What’s happening here is that your brain is being stimulated, and you’re responding with the next thing that pops into your mind. Internally, the stimulation activates a part of your neural network, and your response is the word or phrase that is most similar to that part of your neural network. Tweak the parameters a bit and you can get a ‘what comes next’ sort of response, based on the same principles. That’s what’s happening, at least as I see it. But with computers, not with your brain.
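    Riffing on Stephen's "what comes next" framing, here is my own toy illustration in Python (emphatically not his code, and nothing close to a real neural network; the probability table is entirely invented): a loop that repeatedly asks for a likely next word and feeds it back in.

    ```python
    import random

    # A toy "what comes next" loop in the spirit of the description above.
    # A real model scores every possible next token with a neural network;
    # this hand-made table of invented probabilities stands in for all that.
    next_word_odds = {
        "the": {"machine": 0.5, "dog": 0.3, "prompt": 0.2},
        "machine": {"answers": 0.6, "hums": 0.4},
        "dog": {"barks": 0.7, "writes": 0.3},
        "prompt": {"returns": 1.0},
    }

    def next_word(word):
        """Sample a likely next word from the (fake) probability table."""
        options = next_word_odds.get(word)
        if not options:
            return None  # nothing follows; stop generating
        words, odds = zip(*options.items())
        return random.choices(words, weights=odds)[0]

    # Start from a one-word "prompt" and keep asking what comes next.
    word, output = "the", ["the"]
    while (word := next_word(word)) is not None:
        output.append(word)
    print(" ".join(output))
    ```

    Tweak the odds (the "parameters") and you get a different continuation; the same principle Stephen describes, at an absurdly smaller scale.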
