After so many years of taking/making photographs, from an undergraduate photography class at the University of Delaware, to a darkroom in my grad school apartment, to first digital photos with an Apple QuickTake, on through DSLRs and the alluring capability of an iPhone 13, I have held firm that my photos represent the world I see. Even if digitally enhanced, the photo itself, at least to me, is a representation of truth.
Not absolute truth of course, and digital fakery has long been going on across the web. Plus Errol Morris, in his book Believing Is Seeing, has already poked holes in the truth-in-photos question (here’s my take on this fab book).
As The Verge reports, stand by as the photograph’s standing as evidence of the world changes. From “No one’s ready for this”: “Our basic assumptions about photos capturing reality are about to go up in smoke”–
An explosion from the side of an old brick building. A crashed bicycle in a city intersection. A cockroach in a box of takeout. It took less than 10 seconds to create each of these images with the Reimagine tool in the Pixel 9’s Magic Editor. They are crisp. They are in full color. They are high-fidelity. There is no suspicious background blur, no tell-tale sixth finger. These photographs are extraordinarily convincing, and they are all extremely fucking fake.
https://www.theverge.com/2024/8/22/24225972/ai-photo-era-what-is-reality-google-pixel-9
Compare the AI-altered photos and ask how you would react if you only glanced at them. Don’t count on the inserted metadata Google will say is useful; editing photo metadata is easy. Maybe an expert can tell from close analysis of the image data, maybe. But think of the rest of the world, without forensic photo skills/tools.
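To illustrate how little protection embedded metadata offers, here is a minimal sketch in pure Python (a hypothetical `strip_exif` helper of my own devising, not any tool Google ships) that walks a JPEG byte stream and simply drops the APP1 segments, which is where EXIF and XMP metadata live:

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1 (EXIF/XMP) segments removed."""
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")  # keep the Start Of Image marker
    i = 2
    while i < len(jpeg_bytes) - 1:
        marker = jpeg_bytes[i:i + 2]
        if marker[0] != 0xFF or marker == b"\xff\xda":
            # Start Of Scan / entropy-coded image data: copy the rest verbatim
            out += jpeg_bytes[i:]
            break
        if marker == b"\xff\xd9":  # End Of Image
            out += marker
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker != b"\xff\xe1":  # drop only APP1; keep every other segment
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)

# Tiny synthetic JPEG: SOI + one APP1 segment carrying "Exif" data + EOI
payload = b"Exif\x00\x00" + b"Camera: Nikon Nikkormat"
app1 = b"\xff\xe1" + (len(payload) + 2).to_bytes(2, "big") + payload
fake_jpeg = b"\xff\xd8" + app1 + b"\xff\xd9"

cleaned = strip_exif(fake_jpeg)
print(b"Exif" in fake_jpeg)  # True
print(b"Exif" in cleaned)    # False
```

In practice a one-line call to an off-the-shelf tool like exiftool does the same job, which is why provenance metadata on its own is weak evidence of anything.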
Don’t get me wrong, what is being done is more than technically impressive. I’ve done my share of Photoshopping distracting elements out of photos I did not compose ideally (using the clone brush), and yes, on request I have edited friends’ photos to remove a family member’s weird divorced spouse. So there is value in being able to “clean up” photos more easily.
But I leave it for anyone to imagine where this can/might/will go.
I will continue to enjoy making/sharing and messing around with my photos. They are, to me, my truths, as I was the one who snapped the pic. But when they go out in the world now, well, it’s on everyone to accept/reject/question, or just keep passing it on.
It shall be… interesting.
A note on the image I made for this post. This was a “selfie” I made in 1988, taken in a mirror reflection of the bathroom that was my darkroom. That’s my old (now gone) trusty Nikon Nikkormat film camera I used for my Geology field research.
I imagined making an overlay for half the camera showing what Generative AI could do to modernize it into something futuristic. I worked with Adobe Firefly to generate an image based on the camera. I guess this will show how bad my skills are at Generative AI, and I am sure someone who has been at this can do much better. But my efforts are closer to what most early/normal users will generate, not a prompt-master’s.
First prompt: “Front view of a classic metallic Nikon 35mm film camera. The body has been modified to include futurist features of AI. Photorealistic.” Ugh. I meant the camera body!

I tried again, and used my photo above as a “reference,” which kind of worked for getting the shape right. I wondered what the lens thing on the left was, then realized it was the lens cap hanging on my original. Hah. The prompt here was “Front view of a classic metallic Nikon 35mm film camera. The camera is covered with electronic circuit parts that suggest it is powered by futuristic technology”

I ended up going with the one on the top right (and the Photoshop tool for selecting the object to mask worked out well). I can’t say these are really what I imagined or hoped might be generated to surprise me. The “futuristic” element apparently means glowing blue tones and patterns in the background. I find it… meh.
Also, for both of these prompts I got some kind of warning that a word in my prompt was removed/flagged as not appropriate. No idea what word it was, maybe “body”. Shrug.
Just for grins I dipped in one more time to let the AI figure out a futuristic view of an old camera, with the prompt “Futuristic version of an old style Nikon 35mm film camera.” I guess its interpretation of “futuristic” means the style of the image, not the camera I asked for. Bleh.

It was a bit fun to try different things, but the overall tonality of the results was so similar, and, well, boring. I wonder too how much time this was “saving” me versus image searching for something to make a composite.
Again, if you really are good at this and can render something better, sure, go ahead. But right now, I am a basic user, and for all the hype and promise of AI, more people will generate crap like this. Is that what the world needs? Is this really saving me time from “boring tasks” so I can lounge at the top of Maslow’s pyramid?
Whither the meaning of truth/reality in photos, I shall continue to make and share my own, maybe the hoovering into the machines will make a differen— hah, what a joke.
Featured Image: A composite of my original photo Selfie circa 1988 flickr photo by cogdogblog shared into the public domain using Creative Commons Public Domain Dedication (CC0) with a masked portion on the right side of the camera generated with Adobe Firefly using the prompt “Front view of a classic metallic Nikon 35mm film camera. The camera is covered with electronic circuit parts that suggest it is powered by futuristic technology”