There you go, a deliberately vague blog post title that gives no real indication of what this is about. Heck, at this point, 1.5 sentences in, I am questioning whether I even know.
Note: I found this post hanging out in my Drafts. Was it left undone? Did I change my mind on publishing? I don’t know, what’s the diff? Just clicking publish now to clean house.
Like other colleagues who still keep this way of public writing alive (hail bava!) I have been wheel spinning on latching on to the Artificial Intelligence excitement wave. I was somewhat interested a year or more ago when the early image generators (e.g. Craiyon) came on the scene. It’s one thing to type text in a ChatGPT box and get paragraphs of text back, but getting a generated image back is still, if I sit back and pause, a bit surreal just from free-form plopping words into a chat box; I am getting something for almost no effort.
Ignoring, yes, all the well known problems, impacts, and dangers of what goes on inside the magic machines, I find it interesting to think that our interface for doing this is as simple, and as familiar, as a chat text box.
That’s maybe where our relationship with the same mode of communication we use countless times per day to message friends, family, and colleagues gets intertwined with a whiff of presence, a feeling that we are doing more than interacting with programmed logic.
I had tried a few of these AI code generators myself, and I question the suggestion that the machine “hallucinates” (it does not really do that, it’s just wrong). I must not be a good prompt maker, because I always got code that looked right but never worked. There was one case I am not remembering and can’t find in my history: I was asking ChatGPT to write some kind of data query that, each time I tried it, gave me empty results. “It” kept insisting that it was running the same query and getting valid results. I pushed back, asking for proof, and finally it said it was only able to test by verifying the syntax was correct; it was never really running the queries.
And then I realized I had spent a good chunk of time in this back and forth, pushing it not just to say “this should work” but to prove it.
Some will be saying these are the literacies that users of these systems need to build: how to push and rework prompts to get desired results.
This feels not all that different from something we have used so often we take for granted. I am wondering….
I’ve learned through sheer iteration (okay, and reading guides and blog posts) the tricks and methods for getting better web search results. I get back a whole raft of stuff, and I have learned to develop my “spidey” sense (not really accurate much of the time) to sift the results. And while I conceptually understand that Google scrapes the web and creates some kind of index based on keywords and link frequency, truly, how this thing works is kept behind an iron wall (at least the company is not called “OpenSearch”).
I know this is neither really novel nor insightful. Here is a super trivial experience.
Recently I was working on a spreadsheet with a list of people’s names and emails, and I was assigning them to do multiple tasks listed in more columns. All I was doing was marking the column under a name for each task I was assigning. On a second sheet, I needed to list all the tasks and then, to the right, which people were assigned to each.
Now the method is one that many will just know, the function off the top of their head. I love doing spreadsheet formulas, but they come in bursts, and I often forget what I had done before. I’d done ones with V and H lookups, but that’s not the same. I reached for my usual text box (web search), but my keyword mojo was not clicking; it took a lot of sifting through ads and promos for services to get anywhere near an answer. I went down a few StackExchange holes, but did not find the same thing.
Purely out of exasperation, I tossed it into ChatGPT:
I have a Google spreadsheet with column headers of student’s names, like “Mary”, “Juan”, “Eleanor”, “Mabel” and rows representing 100 different papers I am assigning for them to review, where I have put an “x” in a cell if say I want Mary and Mabel to be a reviewer. Write a formula I could use for each row to find names of all column headers with an “x” in them. (ChatGPT link)
Of course, now I know that the function I was seeking was FILTER (at which most readers will be groaning, “doh! @cogdog is a knob”), but this is one of the few times I got a working bit on the first try.
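For the record, a Sheets formula along these lines seems to do the trick (this is my reconstruction of the shape of it, not the exact one ChatGPT handed back): `=TEXTJOIN(", ", TRUE, FILTER(B$1:E$1, B2:E2="x"))`. The logic it performs per row can be sketched in plain Python; the names and markings below are made-up stand-ins for my actual sheet:

```python
# Made-up header row and task rows standing in for the real spreadsheet.
headers = ["Mary", "Juan", "Eleanor", "Mabel"]
rows = [
    ["x", "", "", "x"],   # paper 1: assigned to Mary and Mabel
    ["", "x", "x", ""],   # paper 2: assigned to Juan and Eleanor
]

def assigned_reviewers(row, headers):
    """Keep the column headers whose cell in this row is marked 'x'
    (roughly what FILTER(headers, row="x") does in Sheets)."""
    return [name for name, cell in zip(headers, row)
            if cell.strip().lower() == "x"]

for i, row in enumerate(rows, start=1):
    # TEXTJOIN-style output: comma-separated names per paper
    print(f"Paper {i}: {', '.join(assigned_reviewers(row, headers))}")
```

Nothing fancy: it just walks each row, pairs cells with their column headers, and keeps the headers where an “x” was marked.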
I am not describing this as any kind of brilliant statement of what ChatGPT can do. But it makes me think about the approach I have used over the last decades of web searching for how to do tech things I may have forgotten:
- Type words in a box
- Press return
- Try to guess the best result from the list
- Wade through a lot of verbiage and ads and other stuff
- Find a bit of code to try
- Test it out
- If it does not work, go back to try a different result
Of course, in suggesting there’s not much difference between these approaches, I am glossing over the significant problems AI search presents.
Still, the ways I am doing these things feel the same.