I already took a blog dip on the biggest craze in education since the ditto machine, but as you cannot take a scroll through social media and the news without tripping over five takes on ChatGPT… well, I could not help myself.
Well, I could, but I did not.
With all the efforts to take big swings at ChatGPT, I did appreciate seeing Maha Bali’s announcement of a workshop coming up this week (online/free); see her post and the info within. (Sidenote: I have to commend her immensely on that metaphoric image, which led me down another whinging hole, as I noticed the Pixabay copy/paste attribution has no link to the original image. That’s another post for another day.)
What is refreshing is that Maha and co-conspirator Anna Mills have planned not another slide deck of info nor a bag full of links, but a real activity and discussion. The rabbit hole grew very large looking at Anna’s examples of essays generated by the gizmo, which are valuable for her commentary and her approach of fine-tuning the interrogation of the AI.
Another piece I liked: they opened a doc early and asked people to suggest different assignment prompts that could potentially be used as exercises in the workshop.
ChatGPTing Blog Post Titles
I could not help myself but to play with one… er… two tries. The first is the already dated approach of getting ChatGPT to talk about itself; the prompt was:
Write 10 different titles for a blog post about the conflict between human creative endeavors and algorithmic generated ones. Use metaphors based on the history of machines and inventions. A bit of sarcasm could help.
It looks like it’s been docified to be part of the workshop. I have to say that the results I got were, well, not so great, nor how I would have gone. The most curious lack of creativity is the repeated format of a blog post title seemingly requiring a colon.
ChatGPT thinks these are good blog post titles. That dude will never write for me.

“The Battle of the Titans: Man vs. Machine in the Arts”
“Creative Evolution: Can Algorithms Keep Up with Human Genius?”
“Robo-Artists vs. Human Masters: Who Will Prevail?”
“The Industrial Revolution of Creativity: How Algorithms are Changing the Game”
“The Luddite’s Guide to Algorithm-Generated Art”
“The Machine Age of Creativity: Will Algorithms Replace Human Artists?”
“The Automation of Aesthetics: How Algorithms are Shaping the Future of Art”
“The Algorithmic Uprising: Are Human Creators Becoming Obsolete?”
“The Digital Disruption: How Algorithms are Changing the Art World”
“The Mechanical Muse: How Algorithms are Inspiring and Challenging Human Creativity”
I did try and push it a bit, with a variation:
Now create a list of 10 blog post titles done listicle style.
Read these and you can tell ChatGPT sucks at understanding net culture
ChatGPT’s attempts at listicle titles.

“10 Ways Algorithms are Changing the Art World”
“Top 8 Advantages of Algorithm-Generated Art”
“5 Ways Human Creativity and Algorithms Can Coexist”
“7 Examples of Algorithm-Generated Art That Will Blow Your Mind”
“The Pros and Cons of Algorithm-Generated Art: A Listicle”
“9 Artists Who Are Experimenting with Algorithms in Their Work”
“10 Predictions for the Future of Algorithm-Generated Art”
“6 Ways Algorithm-Generated Art is Challenging Traditional Art Forms”
“The Top 3 Controversies Surrounding Algorithm-Generated Art”
“A Beginner’s Guide to Understanding Algorithm-Generated Art: 10 Key Points”
My other effort was maybe more out of scope, but I had a curiosity– I asked it a question that I knew should draw significantly from a source, a paper I co-authored in the 1990s when I was a PhD candidate in Geology.
I sort of already wrote this up in a Google doc as I was just taking notes, so am just copying here.
Can ChadGPT Write About Fluid Dynamics of Volcanic Flows?
Yes, I wrote “ChadGPT”– I think of ChatGPT as that know-it-all character Cliff Clavin from Cheers, the guy at the end of the bar who is always spouting off about everything in a way that sounds knowledgeable, but is not.
I even had DALL-E create an image so I can make Chad feel more alive to me.
I decided to ask ChadGPT questions related to a published topic I know well (or did): a research paper I co-authored and published long ago as a grad student in Geology:
Alan H. Levine, Susan W. Kieffer; Hydraulics of the August 7, 1980, pyroclastic flow at Mount St. Helens, Washington. Geology 1991; 19 (11): 1121–1124. doi: https://doi.org/10.1130/0091-7613(1991)019<1121:HOTAPF>2.3.CO;2
I have a full copy of the paper’s contents on my own domain https://cog.dog/pubs/1991/11/15/msh/
Okay ChadGPT, show me your stuff!
DOT DOT DOT [insert thinking time and the magic of the response] [no actually I did this before so I refer the interested reader, all 1.5 of them, to my doc with the responses and my refinements].
First of all, if you are looking for the first response from Chat… ChadGPT to be just what you need, then you are not playing well. The game is in the refinement.
The first response sounds reasonable (see response to round 1), but it’s very generalized, and does not distinguish this particular eruption in 1980 (August) from the big blast in May. It does that circling back thing where it ties everything up in a neat bow, because it can. And it included no momentum or flow equations, none at all.
For Round 2, I pushed it for more details:
Rewrite to include a more explicit explanation and representation of the momentum equations. Also, add reference citations.
This version had a few more details (see response to round 2) and a few equations, but is not really relevant. Of the two references it gave, one is a book title that does exist and one that does not. It also did not add any in-text citations.
Now in round 3, I apply more pressure: get more specific, Chad! And you suck at citing references…
You left out a few references for the August 7 1980 eruption, please add more in text citations.
As shown in the comments in the doc, it keeps the same fluid dynamics references (one is fake). The two it adds for the requested August 7 eruption references use the title of a relevant report (USGS paper 1250 is pretty famous, I have a copy), but that is a large multi-paper report that does include both of the listed names as authors, yet their contributions in that volume are not related. So two more bogus references.
Now, finally, in round 4, I push to get them closer to the paper I co-authored:
Include as well a discussion of the feasibility or problem with applying physics of fluid dynamics to a gaseous medium of a pyroclastic flow using gas dynamics theory. Include a discussion of how such a volcanic flow may have or could not have experienced the equivalent of a hydraulic jump generated by channel constrictions or reduction of slope at the base of the volcano.
Chad waxes on, but it’s really just stuff that sounds credible unless you have done this kind of research. What it is describing is not the eruption of August 7. And honestly, a Google search gets better reference results, but as we know, ChadGPT knows not of the world wide web, and in place of being relevant, it just tries to appear statistically relevant.
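For anyone who hit “hydraulic jump” in that prompt and blinked: here is a toy sketch of the textbook shallow-water version (the classic Belanger relation for water in an open channel, not the gas-dynamics analog the paper actually grapples with). The velocity and depth numbers below are made up purely for illustration, not measurements from the 1980 flow.

```python
import math

def froude_number(velocity, depth, g=9.81):
    """Froude number Fr = u / sqrt(g*h); Fr > 1 means supercritical flow."""
    return velocity / math.sqrt(g * depth)

def conjugate_depth_ratio(fr1):
    """Depth ratio h2/h1 across a classical hydraulic jump
    (Belanger equation for open-channel water flow)."""
    return (math.sqrt(1 + 8 * fr1 ** 2) - 1) / 2

# Made-up numbers: a fast, shallow (supercritical) flow
fr1 = froude_number(velocity=30.0, depth=2.0)
print(fr1)                       # well above 1, so a jump is possible
print(conjugate_depth_ratio(fr1))  # how much the flow thickens across the jump
```

The whole question in the paper is whether this water-channel logic transfers to a hot, gas-particle mixture, which is exactly the nuance Chad glosses over.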
Chad should hook up with Elicit.org and write papers together; at least Ellie the AI yields more relevant resources (and summarizes them).
I’ve somewhat fallen into the trap of trying to go deep with one example foray. It’s again really hard for the 99.9% of us who are not LLM specialists to grok what is going on inside the box, so we just infer from our various pokes.
And I also lost myself in pointing out the invalidity of its responses. It has no experience, nor does it even have a stake or interest in what it spits out.
I have been thinking about a post on Mastodon by Kate Bowles (I’m somewhat hesitant these days to embed toots/posts; I wonder if that is appropriate?)
Following a trail from John Naughton’s article about the fediverse restoring human communication, I read his previous article on ChatGPT, and that took me to Murray Shanahan on human language, and LLM models as predictive sequences of words designed to seem likely.
“Sequences of words with a propositional form are not special to the model itself in the way they are to us. The model itself has no notion of truth or falsehood, properly speaking, because it lacks the means to exercise these concepts in anything like the way we do.”

https://aus.social/@kate/109731104243090193
I could link all those references, (you can find them w/o an AI) (right?) but this leaps out at me: LLM models as predictive sequences of words designed to seem likely.
ChadGPT has nothing to do with truth, ideas, getting to an understanding, sensemaking; it is all a fabrication to be statistically knowledgeable, to look like an informed source, not to be one.
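To make Shanahan’s point concrete, here is a toy sketch of what “predictive sequences of words designed to seem likely” means; the mini corpus is made up, and a real LLM is vastly more sophisticated, but the stance is the same: the program only ever asks which word usually comes next, never whether the result is true.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny
# made-up corpus, then continue with the statistically likeliest next word.
corpus = ("the flow sounds credible the flow sounds plausible "
          "the model sounds credible").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, steps=4):
    out = [word]
    for _ in range(steps):
        options = following.get(out[-1])
        if not options:
            break
        # pick the most frequent follower -- likelihood, not truth
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # -> "the flow sounds credible the"
```

Swap `most_common` for weighted random sampling and you get varied but equally truth-blind continuations, which is the whole game.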
Or, to go out on a farther limb, a comment I think I left on Dave Cormier’s blog post, whose title says my lines above with more metaphorical beauty (what ChadGPT fails at repeatedly): ChatGPT search – Autotune for knowledge. I pondered:
Here is a question: Would you prefer to do the hard work to love and be loved, or to just easily get something that just looks like love?

Me commenting on Dave’s post
And the whole circus of this is the reactive froth we are in. By the time educators get their heads even tuned/autotuned into ChatGPT, the 4.0 version will be out, as will Google’s play, and all the other outfits scrambling to outdo OpenAI (if someone can do it better and cheaper…).
Who’s ready for Constitutional AI? (It is not a right to it!)
Or heck, see what Charlie The Intern does next!
It’s interesting times, for sure.