I’m cratering and writing about AI. Call me a lemming. Stop the presses. Where are the presses?
All the time in webinars and discussions about AI there is much weeping and wailing… no actually there is not, but much concern about what Large Language Models are trained on. Often it's in the vein of “they are doing it on copyrighted stuff or my stuff that is not licensed for this use”.
What is in the pile that trains ChatGPT? Maybe the most illustrative answer was WaPo’s Inside the secret list of websites that make AI like ChatGPT sound smart (No $wall).
OMG! This blog here clocks in at 610,000 tokens, a whopping 0.0004% of the entire enchilada. I rank 24,442 places down the list from Wikipedia.
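Out of curiosity, here is my own back-of-the-envelope arithmetic (not from the WaPo piece) on what that tiny percentage implies about the size of the whole pile:

```python
# My own rough arithmetic, not anything from the WaPo article:
# if 610,000 tokens is 0.0004% of the corpus, how big is the whole enchilada?
blog_tokens = 610_000
blog_share = 0.0004 / 100  # 0.0004 percent, expressed as a fraction

total_tokens = blog_tokens / blog_share
print(f"Implied corpus size: {total_tokens:,.0f} tokens")  # roughly 150 billion
```

Call it on the order of 150 billion tokens, which at least sounds like the scale of a web-scraped training corpus. My blog is a rounding error on a rounding error.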
How does my stuff get used? Can anyone really answer?
Bing with the Sourcing Zing
In a recent webinar I heard it said or chatted that the new Bing AI powered search is better because it provides its sources. Now I think there is a bit of conflating going on here. The sources returned by Bing are references, unlike the known problem with ChatGPT, which returns citations/sources that do not exist.
If an AI returns valid citations/sources, that's good, but it's far from representing the sources the AI was trained on. To be clear, there is no way to trace an AI generated response back to the source. Don't take my word:
Unfortunately, AI models such as Claude live in a black box. Researchers know how to build the AI, and they know how AIs work at a fundamental, technical level. But what they actually do involves manipulating more numbers, patterns and algorithmic steps than a human can process in a reasonable amount of time.
For this reason, there’s no direct method by which researchers can trace an output to its source.
https://cointelegraph.com/news/anthropic-cracks-open-the-black-box-to-see-how-ai-comes-up-with-the-stuff-it-says
And do not say a Generative AI “hallucinates”. Only people can do that. What it is doing is producing a statistical approximation of what looks like a valid reference for a topic; it's the essence of truthiness. It's assembling sentences from stats, not reason, not research, not experience. Just word vectors.
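If that sounds abstract, here is a toy sketch of the move, with completely made-up numbers on my part. A real model computes these probabilities from billions of learned weights, but the basic step is the same: pick the next word from a probability distribution, not from knowledge.

```python
import random

# Toy illustration with invented probabilities (mine, not a real model's).
# Given the words so far, an LLM samples the next word from a distribution
# over its vocabulary -- statistics, not research.
next_word_probs = {
    "study": 0.40,      # plausible-sounding continuations of
    "research": 0.30,   # "...according to a 3M"
    "scientist": 0.20,
    "citation": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())
next_word = random.choices(words, weights=weights, k=1)[0]
print("…according to a 3M", next_word)
```

Run it a few times and you get different, equally confident-sounding continuations. That is the truthiness machine at work.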
I admit I have not Bing-ed before, but felt a little compelled to try.
What to do for a test case?
Well, I return to a favorite topic here where over many years I have sought to debunk the claim that people process information in images 60,000 times faster than text. And I can claim credit for pushing back the timeline for this to 1982, where the trail stops cold.
Let ChatGPT Flail First
I gave this a whirl in August 2023 using the now inferior ChatGPT 3.5 (I am loath to pay for AI; we all do already with our environmental debit cards).
Write an expository essay that unravels the history, source and scientific rationale behind the claim the 3M scientists proved that humans perceive information in images 60000 times faster than text. Go beyond the implications for visual media– explain the basis for the claim and the evidence that supports it. Yes, please include references, and any. current research that supports this claim.
https://chat.openai.com/share/59244ef6-b099-4a3f-833a-b1cbe8287b43
Use a scholarly tone but also write it in a form that is approachable for a layperson.
It blah blahs right past the lack of evidence in the original 3M claim:
The claim that humans perceive information in images much faster than text emerged from research conducted by 3M, a renowned multinational conglomerate. While the exact origin of this claim might be challenging to trace, it has been widely cited in discussions related to the efficiency of communication mediums, such as presentations, advertisements, and educational materials. However, it is important to note that the details of the original research, its methodology, and its publication remain elusive, potentially limiting our ability to thoroughly scrutinize the validity of the claim.
And while I asked for references, further down, in the place I'd think there would be references, we get vague references to unsaid references!
Supporting Evidence
Numerous studies in cognitive psychology and neuroscience lend support to the notion that humans process visual information more rapidly than text. Research has shown that the brain can identify familiar objects within as little as 100 milliseconds, while words require more time to be recognized and processed. For instance, a study conducted by Potter and Faulconer in 1975 demonstrated that participants could recognize images in as little as 13 milliseconds, far faster than the time it takes to recognize individual words.
Furthermore, eye-tracking studies have consistently revealed that our eyes naturally gravitate toward images before text in various media formats. This phenomenon suggests an innate predisposition for processing visual stimuli promptly. Additionally, the rise of platforms like Instagram, where image-centric content predominates, attests to our preference for visual communication.
We get one reference (and no details), but here it is:
Potter, M. C., & Faulconer, B. A. (1975). Time to understand pictures and words. Nature, 253(5491), 437–438. https://doi.org/10.1038/253437a0
My experiences with ChatGPT end up feeling like when I am a victim of some automated telephone answering system, where at some point I am screaming at the top of my lungs demanding to speak to a human.
I poked it again, hoping for a response with more citations:
This reads like a high school student essay, and you were asked to provide speciific citations for the references. Write a better, more scientific essay, and append to the end a valid list of research with citations in APA style
All it did was more or less give me the same essay, and cited again only the single reference:
References
Potter, M. C., & Faulconer, B. A. (1975). Time to understand pictures and words. Nature, 253(5491), 437-438.
(Note: This is a representative reference, but the original research from 3M may not be available in standard academic databases.)
At least it produced a de-hallucinated valid citation… one, puny citation.
When it came time to bet, this poker player folded his hand.
Take it to the Bing
After reluctantly logging in with a MS account (needed to save the results), I gave Bing this query which is really the answer I have been chasing since 1942:
What is the source of the research behind the claim that 3M scientists showed that people process information from images 60000 times faster than text?
And zoinks, I have to admit the results are pretty damned good!
The assertion that “the human brain processes images 60,000 times faster than text” has become widely cited in discussions about visual communication and marketing. However, the origins of this claim are dubious, and there is no clear scientific evidence to support it [1].
While not specifically supporting the “60,000 times” statement, scientific research has explored differences in visual and verbal processing speeds using various methodologies. Here are some relevant findings:
- Visual Processing Time:
- Visual processing of complex objects in the brain typically occurs within a range of 0-500 milliseconds.
- A 1996 study on visual categorization speed found that people could determine if a photograph contained an animal in under 150 milliseconds.
- This suggests that visual processing can be very rapid but does not directly compare to text processing.
- Text Processing Time:
- The average time to read a word is estimated to be 100-200 milliseconds, and a sentence takes 300-400 milliseconds.
- Visual and verbal processing appear to require similar lengths of time.
- Context Matters:
- Some studies have found faster processing for images associated with concrete nouns (e.g., “apple”) compared to abstract nouns (e.g., “justice”).
- However, this does not universally prove faster visual processing.
- Factors like image complexity, word length, and cognitive load affect processing speeds for both modalities.
- Logo Recognition and Face Discrimination:
- Familiar image logos are recognized 119 milliseconds faster than less familiar ones.
- People need between 130 and 160 milliseconds to discriminate between different faces.
In summary, while images can indeed be processed rapidly, especially when they contain concrete content, text is not necessarily slow. The claim of a 60,000-fold speed difference lacks a clear scientific basis and represents a significant oversimplification [1].
The earliest known source of this claim dates back to a 1982 Business Week advertising section, where Philip Cooper (then president of Computer Pictures Corporation) stated that “people assimilate visual information about 60,000 times faster than they assimilate printed copy.” However, this assertion was made without any citation to research [1][2].
Image from a 1982 Business Week advertising section (Credit: Alan Levine aka cogdog)
Learn more:
1. photutorial.com 2. oit.williams.edu 3. blog.csgsolutions.com 4. policyviz.com
https://sl.bing.net/eM1op6zN9H2
And I am stoked that Bing singled out my credit, ego PUFF.
But there it is, Bing produces 4 relevant, working, clickable results, sources, usable links. This is good, but again, these are sources for the result it spat out… this is by no means the sources of what Bing was trained on to be able to do this magic trick.
I left Bing and went back to an old school Google search for an explanation of how Bing works. I got Microsoft explains how Bing AI Chat uses ChatGPT and Search with Prometheus.
Prometheus, besides being the Titan who stole fire, is the name Microsoft devised for putting ChatGPT to use in conjunction with Bing’s web search data.
As I understand it, the Prometheus technology works across Bing in these areas:
https://searchengineland.com/microsoft-bing-explains-how-bing-ai-chat-leverages-chatgpt-and-bing-search-with-prometheus-393437
- Query interpretation: It takes your long-winded spoken-like query, and breaks it down into a bite-size normal search type of query so Bing Chat can process it and find content faster.
- Bing’s index. It leverages Bing’s search index, so Bing Chat can use the information that is literally up to the minute. Bing calls this the “Bing Orchestrator.”
- Bing ranking. The Bing ranking algorithm is incorporated to see what content to surface in the answer and which documents ChatGPT should use to give the answers.
- Bing answers and results. Bing can also show answers such as weather, sports scores, news boxes, local results and/or even ads from Bing Search directly in the Bing Chat answers.
- Citations and links. And Bing Chat, currently unlike ChatGPT, provides links and citations to where it found the content, something Microsoft said it can only do because of the Prometheus technology.
For Bing-ing, ChatGPT helps reformulate a human’s rambling, verbose question into suggested search terms to optimize the web search. Bing then decides which of its indexed sources might provide the best answers, and those results are then sent back through ChatGPT to talk back conversationally.
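As I read it, the flow looks roughly like this sketch. Every function name here is hypothetical on my part (none of these are real Bing or OpenAI APIs); it is just the shape of the rewrite-search-rank-generate loop the Search Engine Land piece describes:

```python
# Hypothetical sketch of the Prometheus-style flow described above.
# None of these functions are real Bing/OpenAI APIs; they are stand-ins
# for the steps: rewrite the query, search the live index, then have the
# language model compose an answer *from those results* with citations.

def rewrite_query(user_question: str) -> str:
    """The LLM boils a rambling question down to a compact search query."""
    return "3M 60000 times faster images text source claim"  # illustrative output

def search_index(query: str) -> list[dict]:
    """Stand-in for Bing's live, ranked web index lookup."""
    return [
        {"url": "https://photutorial.com", "snippet": "origins of the claim are dubious..."},
        {"url": "https://policyviz.com", "snippet": "1982 Business Week advertising section..."},
    ]

def generate_answer(question: str, sources: list[dict]) -> str:
    """Stand-in for the LLM writing an answer grounded in the retrieved snippets."""
    citations = " ".join(f"[{i + 1}] {s['url']}" for i, s in enumerate(sources))
    return f"Answer composed from the retrieved pages, with citations: {citations}"

question = "What is the source of the 60,000 times faster than text claim?"
results = search_index(rewrite_query(question))
print(generate_answer(question, results))
```

The point of the sketch: the citations come from the search step bolted on at answer time, not from anything inside the model itself.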
So the answers I got were drawn from likely credible, or at least not imaginary, sources of information.
But as far as I understand, this is NOT what Bing was trained on. All of the problems with ChatGPT sources, bias, and mystery shroud are still present.
Sure it’s great to get better responses, and ones with reference links. Actually, it’s MUCH better, and you can see what this might mean for the ways we look for information. I’d say the AI is helping to some degree.
The question for me with this result: do I swallow it whole? Do I verify and review the references? Do I compose my own summary of this summary? Do I look for additional references? This is all on me.
So do not conflate getting legitimate references for a particular search with knowing what the AI was trained on. The training data and process are so far down the monolithic obelisk that all the banging on the stone with our bone tools will not help.
And heck, Bing got no farther than I did years ago with the answer to the 60,000 times faux truth.
PS, if anyone ever gets a source, tell me, and I will cut you a check for $60.
Featured Image: From an openly licensed image search on Google for “60000” I was happy to see this image from colleague Christina Hendricks, a response to a DS106 Daily Create from 2017.