The churn of techno-fadism is at warp speed 9. Already left behind are blockchain and NFTs, and despite the imploding star of the twitter-verse, it feels like it’s AI* all the time.

In trying to generate some discussion activity over at the OEG Connect community site I have been fiddling with since 2020, I tried a pseudo mini-blog of out-loud thinking, starting June 13 with Understanding / Doing Some AI.

Artificial Intelligence: can we understand it without being data scientists? Is it too complex to grapple with? Do we just have to trust the experts, like the ones who claim sentience?

No. I hope not.

But I do know I can better come to my own understanding if I do more than read papers and blog posts, and actually dabble in a technology.

I will not claim any deep understanding, and I still cannot explain what AI does in words any clearer than sludge, but I at least have a collection of stuff I came across (and I still tag stuff, old-school social bookmarking style).

I veered amongst different topics but got keen on the first wave of generator tools that created images from an entered phrase. In the beginning I was using what I could access in Craiyon, which looked, and still does, so primitive compared to the stuff in tech stories or the cover of Cosmo. I saw tweets of things people were creating with OpenAI’s DALL-E. I signed up for the beta in April, and finally got access in August (it’s now public).

Now I imagine some future tool that will, from the toss of a phrase, generate a video of a robot performing a stage number, Carol Channing style.

Hello DALL-E!
Well, Hello DALL-E!
It’s so nice to make images just from text
They’re looking swell, DALL-E,
We can tell, DALL-E,
You’re truly thinkin’, really creatin’
From text spit into a box we swoon.
We feel the tweeters swayin’
For the game’s changin’
Everything will be done like this soon.

never sung anywhere….

I did blog a bit about AI image restoration tools and more about the pitfalls of promptism.

There are all kinds of dimensions of this to grapple with. Many are concerned with the unattributed “reuse” of existing images in the training sets of these systems. That is a thorny topic, but because we are not even exactly clear (or I am not) on how the new images are generated, is it really akin to remixing/sampling sources, or is it more like being influenced by prior art?

Another dimension is sorting out the copyrightableness (or not) of such generated content, which seemingly falls into the realm of it not being possible to copyright something created by a non-human.

I got more interested in trying to understand, and be able to communicate to educators who will start using these tools, how to think about reusing and, yes, attributing images. And if it turns out these images cannot be copyrighted, do we even have to worry about attribution?

The terms of use from OpenAI changed on a biweekly basis, first asserting they owned the images, then recently not, but saying that all uses of them (commercial too) were okay. It’s moving fast.

But since I adopt the practice of ABA, Always Be Attributing (whether a license says so or not), I got really curious about this “grey zone” of these types of images (that became a newly launched topic in OEG Connect).

I turn to the Creative Commons Best Practices for Attribution, which always make sense to me: think TASL — Title, Author, Source, License. In my first forays with DALL-E in August, at best I could eke out a “T”: authorship was not clear, I could not link to a source, and who knows what license, if any, I could ascribe.
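To make the TASL gaps concrete, here is a rough sketch (hypothetical helper, not any official Creative Commons tooling) of assembling an attribution line from whichever of the four parts are actually known — with a DALL-E image, often only the title survives:

```python
# Hypothetical sketch: build a TASL-style attribution string from
# whichever parts (Title, Author, Source, License) are known.

def tasl_attribution(title=None, author=None, source=None, license_name=None):
    """Assemble an attribution line, skipping unknown TASL parts."""
    parts = []
    if title:
        parts.append(f'"{title}"')
    if author:
        parts.append(f"by {author}")
    if source:
        parts.append(f"({source})")
    if license_name:
        parts.append(f"is licensed under {license_name}")
    return " ".join(parts) if parts else "Attribution unknown"

# An August-era DALL-E image: only the "T" (the prompt) is clear.
print(tasl_attribution(title="a young girl in a yello hat ..."))
```

The point of the sketch is just that an attribution degrades gracefully: with no author, source, or license to name, you are left with a bare title in quotes.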

In conversations in the Creative Commons Slack, Nate Angell shared a newly published post of theirs with an attempt at attribution of a DALL-E generated image.

In that time, the TASL-ability has changed. Now DALL-E can generate a public version of an image, and they have given up claims of ownership (though the public links assert credit to human+DALL-E), which might mean, to me, we are welcome to choose our own license?

In my experiment, I asked DALL-E for an image conjuring peering into some machine for the ways to create attribution, using as a prompt, “a young girl in a yello hat attempts to adjust the dials on an xray machine, digital art” (yes, I had a typo in there, and it still worked).

Screenshot of DALL-E generated image based on prompt “a young girl in a yello hat attempts to adjust the dials on an xray machine, digital art”

I have trouble, like others, with the credit given to Human & AI (as my colleague Jonathan Poritz notes, Ansel Adams never had to credit his camera). But can I really claim credit too, just for typing in a box?

My best attempt at an attribution for said image would (this week) be something like:

DALL·E generated image created by Alan Levine from prompt –a young girl in a yello hat attempts to adjust the dials on an xray machine, digital art– licensed under a CC BY license (because now I own it)

I am fascinated to see how we might start sorting this out (if that is even possible). Are educators and those who work with them considering such issues? Will risk aversion lead to advice to steer clear of these tools?

And I go back to questions about the allure of prompt-generated, easy-peasy media. Is it art? (likely) Is it useful? (maybe) Is it creativity? (open to question) I found a compelling essay from, of all places, the Bulletin of the Atomic Scientists, where Annie Dorsen says, “AI is plundering the imagination and replacing it with a slot machine.”

Like, is this how we want to be making digital art?

Where do you stand on this stuff? If you are curious/confused tune in this week to a pair of Creative Commons webinars on AI Inputs, Outputs and the Public Commons (and see the latest efforts in their post on a prompt generated image).

Is it the new wild west of the open frontier?

When I prompted this image, the terms of use were different, so the attribution was: “Robot Wild West” by Alan Levine, image generated and regenerated by the DALL-E 2 AI platform with the text prompt “A robot sitting on a horse on a western plain Impressionist style.” OpenAI asserts ownership of DALL-E generated images; Alan dedicates any rights he holds to the image to the public domain via CC0.

I think all this stuff is quite fun to grapple with. As in the above caption, I might start shifting from stating I “created” an image to I “prompted” an image!

Better than going on and on about the musky odor birdplace drama.

Anyone else?

*I share Jonathan Poritz’s aversion to calling this stuff AI; as he says, it’s not intelligent, not artificial, just statistical modeling. But it’s the phrase in vogue now.

Image Credit: A screenshot of the four images created from my DALL-E prompt “a young girl in a yello hat attempts to adjust the dials on an xray machine, digital art” superimposed on another screenshot of the Creative Commons Best Practices for Attribution. Since I composited this image with my own hands in Photoshop, I can assert a Creative Commons CC BY license on it.

If this kind of stuff has value, please support me by tossing a one time PayPal kibble or monthly on Patreon