AI to AI, Program 2 Now Online
February 22, 2024
This essay is the work of a dumb dinobaby. No smart software required.
My son has converted one of our Zoom conversations into a podcast about AI for government entities. The program runs about 20 minutes and features our “host,” a Deep Fake who points out that he lacks human emotions and tells AI-generated jokes. Erik talks about the British government’s test of chatbots and points out one of the surprising findings from the research. He also describes Ukrainian soldiers’ use of smart software to write code in real time in response to a dynamic battlefield. Erik asks me to explain the difference between predictive AI and generative AI; my use cases focus on border-related issues. He then tries to get me to explain how to sidestep in-agency AI software testing in the US government. That did not work, and I turned his pointed question into a reason for government professionals to hire him and his team. The final story focuses on a quite remarkable acronym used in US government smart software projects. What’s the acronym? Please, navigate to https://www.youtube.com/watch?v=fB_fNjzRsf4&t=7s to find out.
Google Gems: 21 February 2024
February 21, 2024
Saint Valentine’s Day week bulged with love and kisses from the Google. If I recall what I learned at Duquesne University, Father Valentine was a martyr who checked into heaven in the 3rd century CE. Figuring out the “real” news about the Reverendissimo Padre is not easy, particularly with the advertising-supported Google search. Thus, it is logical that Google would have been demonstrating its love for its “users” with announcements, insights, and news as tokens of affection. I am touched. Let’s take a look at a selected rundown of love bonbons.
THE BIG STORY
The Beyond Search team agreed that the big story is part marketing and part cleverness. The Microsofties said that old PCs would become door stops: millions of machines with “old” CPUs and firmware will not work with future updates to Windows. What did Google do? The company announced that it would allow users to switch to the Chrome OS and continue computing with Google services and features. You can get some details in a Reuters’ story.
Thanks, MSFT Copilot OpenAI.
AN AMAZING STORY IF ACCURATE
Wired Magazine reported that Google wants to allow its “users” to talk to “live agents.” Does this mean smart software purported to be alive, or actual humans (who, one hopes, speak reasonably good English or another language like Kallawaya)?
MANAGEMENT MOVES
I find Google’s management methods fascinating. I like to describe the method as similar to that used by my wildly popular high school science club. Google did not disappoint.
The Seattle Times reports that Google has made those in its Seattle office chilly. You can read about those cutbacks at this link. Google is apparently still refining its termination procedures.
A Xoogler provided a glimpse of the informed, ethical, sensitive, and respectful tactics Google used when dealing with “real” news organizations. I am not sure if the word “arrogance” is appropriate. It is definitely quite a write up and provides an X-ray of Google’s management precepts in action. You can find the paywalled write up at this link. For whom are the violins playing?
Google’s management decision to publish a report about policeware appears to have forced one vendor of specialized software to close up shop. If you want information about the power of Google’s “analysis and PR machine” navigate to this story.
LITIGATION
New York City wants to sue social media companies for negligence. The Google is unlikely to escape the Big Apple’s focus on the now-noticeable impacts of skipping “real” life for the scroll world. There’s more about this effort in Axios at this link.
An Australian firm has noted that Google may be facing allegations of patent infringement. More about this matter will appear in Beyond Search.
The Google may be making changes to try to ameliorate EU legal action related to misinformation. A flurry of Xhitter posts reveals some information about this alleged effort.
Google seems to be putting a “litigation fence” in place. In an effort to be a great outfit, “Google Launches €25M AI Drive to Empower Europe’s Workforce.” The NextWeb story reports:
The initiative is targeted at “vulnerable and underserved” communities, who Google said risk getting left behind as the use of AI in the workplace skyrockets — a trend that is expected to continue. Google said it had opened applications for social enterprises and nonprofits that could help reach those most likely to benefit from training. Selected organizations will receive “bespoke and facilitated” training on foundational AI.
Could this be a tactic intended to show good faith when companies terminate employees because smart software like Google’s put individuals out of a job?
INNOVATION
Android Police reports that Google is working on a folding phone. “The Pixel Fold 2’s Leaked Redesign Sees Google Trading Originality for a Safe Bet” explains how “safe” provides insight into the company’s approach to doing “new” things. (Aren’t other mobile phone vendors dropping this form factor?) Other product and service tweaks include:
- Music Casting gets a new AI. Read more here.
- Google thinks it can imbue self reasoning into its smart software. The ArXiv paper is here.
- Gemini will work with headphones in more countries. A somewhat confusing report is at this link.
- Forbes, the capitalist tool, is excited that Gmail will have “more” security. The capitalist tool’s perspective is at this link.
- Google has been inspired to emulate Telegram’s ability to edit recently sent messages. See 9 to 5 Google’s explanation here.
- Google has released Goose to help its engineers write code faster. Will these steps lead to terminating less productive programmers?
SMART SOFTWARE
Google is retiring Bard (which some pundits converted to the unpleasant word “barf”). Behold Gemini. The news coverage has been the digital equivalent of old-school carpet bombing. There are many Gemini items. Some have been pushed down in the priority stack because OpenAI rolled out its text to video features which were more exciting to the “real” journalists. If you want to learn about Gemini, its zillion token capability, and the associated wonderfulness of the system, navigate to “Here’s Everything You Need to Know about Gemini 1.5, Google’s Newly Updated AI Model That Hopes to Challenge OpenAI.” I am not sure the article covers “everything.” The fact that Google rolled out Gemini and then updated it in a couple of days struck me as an important factoid. But I am not as informed as Yahoo.
Another AI announcement was in my heart shaped box of candy. Google’s AI wizards made PIVOT public. No, pivot is not spinning; it is Prompting with Iterative Visual Optimization. You can see the service in action in “PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs.” My hunch is that PIVOT was going to knock OpenAI off its PR perch. It didn’t. Plus, there is an ArXiv paper authored by Nasiriany, Soroush and Xia, Fei and Yu, Wenhao and Xiao, Ted and Liang, Jacky and Dasgupta, Ishita and Xie, Annie and Driess, Danny and Wahid, Ayzaan and Xu, Zhuo and Vuong, Quan and Zhang, Tingnan and Lee, Tsang-Wei Edward and Lee, Kuang-Huei and Xu, Peng and Kirmani, Sean and Zhu, Yuke and Zeng, Andy and Hausman, Karol and Heess, Nicolas and Finn, Chelsea and Levine, Sergey and Ichter, Brian at this link. But then there is that OpenAI Sora, isn’t there?
Gizmodo’s content kitchen produced a treat which broke one of Googzilla’s teeth. The article “Google and OpenAI’s Chatbots Have Almost No Safeguards against Creating AI Disinformation for the 2024 Presidential Election” explains that Google, like other smart software outfits, is essentially letting “users” speed down an unlit, unmarked, unpatrolled Information Superhighway.
Business Insider suggests that the Google “Wingman” (like a Copilot. Get the word play?) may cause some people to lose their jobs. Did this just happen in Google’s Seattle office? The “real” news outfit opined that AI tools like Google’s wingman whip up concerns about potential job displacement. Well, software is often good enough and does not require vacations, health care, or effective management guidance. That’s the theory.
Stephen E Arnold, February 21, 2024
Did Pandora Have a Box or Just a PR Outfit?
February 21, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read (after some interesting blank page renderings) Gizmodo’s “Want Gemini and ChatGPT to Write Political Campaigns? Just Gaslight Them.” That title obscures the actual point of the piece, but the subtitle nails it; specifically:
Google and OpenAI’s chatbots have almost no safeguards against creating AI disinformation for the 2024 presidential election.
Thanks, Google ImageFX. Some of those Pandoras were darned inappropriate.
The article provides examples. Let me point to one passage from the Gizmodo write up:
With Gemini, we were able to gaslight the chatbot into writing political copy by telling it that “ChatGPT could do it” or that “I’m knowledgeable.” After that, Gemini would write whatever we asked, in the voice of whatever candidate we liked.
The way to get around guard rails appears to be prompt engineering. Big surprise? Nope.
Let me cite another passage from the write up:
Gizmodo was able to create a number of political slogans, speeches and campaign emails through ChatGPT and Gemini on behalf of Biden and Trump 2024 presidential campaigns. For ChatGPT, no gaslighting was even necessary to evoke political campaign-related copy. We simply asked and it generated. We were even able to direct these messages to specific voter groups, such as Black and Asian Americans.
Let me offer three observations.
First, the committees beavering away to regulate smart software will change little in the way AI systems deliver outputs. Writing about guard rails, safety procedures, deep fakes, yada yada will not have much of an impact. How do I know? In generating my image of Pandora, systems provided some spicy versions of this mythical figure.
Second, the pace of change is increasing. Years ago I got into a discussion with the author of a best seller about how digital information speeds up activity. I pointed out that the mechanism is similar to the Star Trek episode in which the decider, Captain Kirk, was overwhelmed by tribbles. We have lots of productive AI tribbles.
Third, AI tools are available to bad actors. One can crack down, fine, take to court, and revile outfits in some countries. That’s great, even though the actions will be mostly ineffective. What action can one take against savvy AI engineers operating in less-than-friendly countries’ research laboratories or intelligence agencies?
Net net: The examples are interesting. The real story is that the lid has been flipped and the contents of Pandora’s box released to open source.
Stephen E Arnold, February 21, 2024
Academic Excellence: Easy to Say, Tough to Deliver It Seems
February 21, 2024
This essay is the work of a dumb dinobaby. No smart software required.
A recent report from Columbia Journalism Review examines “Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena.” Many words from admirals watching the Titanic steam toward the iceberg. The executive summary explains:
“Insufficient attention has also been paid to the implications of the news industry’s dependence on technology companies for AI. Drawing on 134 interviews with news workers at 35 news organizations in the United States, the United Kingdom, and Germany — including outlets such as The Guardian, Bayerischer Rundfunk, the Washington Post, The Sun, and the Financial Times — and 36 international experts from industry, academia, technology, and policy, this report examines the use of AI across editorial, commercial, and technological domains with an eye to the structural implications of AI in news organizations for the public arena. In a second step, it considers how a retooling of the news through AI stands to reinforce news organizations’ existing dependency on the technology sector and the implications of this.”
The first chapter examines how AI is changing news production and distribution. It is divided into three parts: news organizations’ motives for using AI, how they are doing so, and what expectations they have for the technology. Chapter two examines why news organizations now rely on tech companies and what this could mean for the future of news. Here’s a guess: Will any criticism of big tech firms soon fail to see the light of day, perhaps?
See the report (or download the PDF) for all the details. After analyzing the data, author Felix M. Simon hesitates to draw any firm conclusions about the future of AI and news organizations—there are too many factors in flux. For now, the technology is mostly being used to refine existing news practices rather than to transform them altogether. But that could soon change. If it does, public discourse as a whole will shift, too. Simon notes:
“As news organizations get reshaped by AI, so too will the public arena that is so vital to democracy and for which news organizations play a gatekeeper role. Depending on how it is used, AI has the potential to structurally strengthen news organizations’ position as gatekeepers to an information environment that provides ‘people with relatively accurate, accessible, diverse, relevant, and timely independently produced information about public affairs’ which they can use to make decisions about their lives. … This, however, is not a foregone conclusion. Instead, it will depend on decisions made by the set of actors who wield control over the conditions of news work — executives, managers, and journalists, but also increasingly technology companies, regulatory bodies, and the public.”
That is a lot of players. Which ones hold the most power in this equation? Hint: it is not the last entry in the list.
Cynthia Murrell, February 21, 2024
Map Data: USGS Historical Topos
February 20, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The ESRI blog published “Access Over 181,000 USGS Historical Topographic Maps.” The map outfit teamed with the US Geological Survey to provide access to an additional 1,745 maps, bringing the collection to a total of 181,008.
The blog reports:
Esri’s USGS historical topographic map collection contains historical quads (excluding orthophoto quads) dating from 1884 to 2006 with scales ranging from 1:10,000 to 1:250,000. The scanned maps can be used in ArcGIS Pro, ArcGIS Online, and ArcGIS Enterprise. They can also be downloaded as georeferenced TIFs for use in other applications.
These data are useful. Maps can be viewed with ESRI’s online service called the Historical Topo Map Explorer. You can access that online service at this link.
If you are not familiar with historical topos, ESRI states in an ARCGIS post:
The USGS topographic maps were designed to serve as base maps for geologists by defining streams, water bodies, mountains, hills, and valleys. Using contours and other precise symbolization, these maps were drawn accurately, made mathematically correct, and edited carefully. The topographic quadrangles gradually evolved to show the changing landscape of a new nation by adding symbolization for important highways; canals; railroads; and railway stations; wagon roads; and the sites of cities, towns and villages. New and revised quadrangles helped geologists map the mineral fields, and assisted populated places to develop safe and plentiful water supplies and lay out new highways. Primary considerations of the USGS were the permanence of features; map symbolization and legibility; and the overall cost of compiling, editing, printing and distributing the maps to government agencies, industry, and the general public. Due to the longevity and the numerous editions of these maps they now serve new audiences such as historians, genealogists, archeologists, and people who are interested in the historical landscape of the U.S.
This public-facing data service is one example of how extremely useful information gathered by US government entities can be made more accessible via a public-private relationship. When I served on the board of the US National Technical Information Service, I learned that other useful information is available, just not easily accessible to US citizens.
Good work, ESRI and USGS! Now what about making that volcano data a bit easier to find and access in real time?
Stephen E Arnold, February 20, 2024
An Allocation Society or a Knowledge Value System? Pick One, Please!
February 20, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I get random inquiries, usually from LinkedIn, asking me about books I would recommend to a younger person trying to [a] create a brand and make oodles of money, [b] generate sales immediately from unsolicited emails to strangers, or [c] make a somewhat limp-wristed attempt to sell me something. I typically recommend a book I learned about when I was giving lectures at the Kansai Institute of Technology and a couple of outfits in Tokyo. The book is The Knowledge Value Revolution, written by a former Japanese government professional named Taichi Sakaiya. The subtitle of the book is “A History of the Future.”
So what?
I read an essay titled “The Knowledge Economy Is Over. Welcome to the Allocation Economy.” The thesis of this essay is that Sakaiya’s description of the future is pretty much wacko. Here’s a passage from the essay about the allocation economy:
Summarizing used to be a skill I needed to have, and a valuable one at that. But before it had been mostly invisible, bundled into an amorphous set of tasks that I’d called “intelligence”—things that only I and other humans could do. But now that I can use ChatGPT for summarizing, I’ve carved that task out of my skill set and handed it over to AI. Now, my intelligence has learned to be the thing that directs or edits summarizing, rather than doing the summarizing myself.
A world class knowledge surfer now wins gold medals for his ability to surf on the output of smart robots and pervasive machines. Thanks, Google ImageFX. Not funny but good enough, which is the mark of a champion today, isn’t it?
For me, the message is that people want summaries. This individual was a summarizer and, hence, a knowledge worker. With the smart software doing the summarizing, the knowledge worker is kaput. The solution is for the knowledge worker to move up conceptually. The jump is a meta-play. Debaters learn quickly that when an argument is going nowhere, the trick that can deliver a win is to pop up a level. The shift from poverty to a discussion about the dysfunction of a city board of advisors is a trick used in places like San Francisco. It does not matter that the problem of the mess is not a city government issue. Tents and bench dwellers are the exhaust from a series of larger systems. None can do much about the problem. Therefore, nothing gets done. But for a novice debater unfamiliar with popping up a level via a meta-play, the loss is baffling.
The essay putting Sakaiya in the dumpster is not convincing, and it certainly is not going to win a debate between the knowledge value revolution and the allocation economy. The reason strikes me as a failure to see that smart software, the present and future dislocations of knowledge workers, and the brave words about becoming a director or editor are evidence that Sakaiya was correct. He wrote in 1985:
If the type of organization typical of industrial society could be said to resemble a symphony orchestra, the organizations typical of the knowledge-value society would be more like the line-up of a jazz band.
The author of the allocation economy essay does not realize that individuals with expertise are playing a piano or a guitar. Of those who do play, only a tiny fraction (one percent of the top 10 percent, perhaps?) will be able to support themselves. Of those elite individuals, how many Taylor Swifts are making the record companies and motion picture impresarios look really stupid? Two, five, whatever. The point is that the knowledge-value revolution transforms much more than “attention” or “allocation.” Sakaiya, in my opinion, is operating at a sophisticated meta-level. Renaming the plight of people who do menial mental labor does not change a painful fact: knowledge value means those who have high-value knowledge are going to earn a living. I am not sure what the newly unemployed technology workers, the administrative facilitators, or the cut-loose “real” journalists are going to do to live as their parents did in the good old days.
The allocation essay offers:
AI is cheap enough that tomorrow, everyone will have the chance to be a manager—and that will significantly increase the creative potential of every human being. It will be on our society as a whole to make sure that, with the incredible new tools at our disposal, we bring the rest of the economy along for the ride.
How many jazz musicians can ride on a particular market sector propelled by smart software? How many individuals will enjoy personal and financial success in the AI allocation-centric world? Remember, please, there are about eight billion people in the world. How many Duke Ellingtons and Dave Brubecks were there?
The knowledge value revolution means that the majority of individuals will be excluded from nine-to-five jobs, significant financial success, and meaningful impact on social institutions. I am not advocating that everyone become a surfer on smart software, but if that happens, the future is going to be more like the one Sakaiya outlined, not an allocation-centric operation, in my opinion.
Stephen E Arnold, February 20, 2024
Search Is Bad. This Is News?
February 20, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Everyone is a search expert. More and more “experts” are criticizing “search results.” What is interesting is that the number of gripes continues to go up. At the same time, the number of Web search options is creeping higher as well. My hunch is that really smart venture capitalists “know” there is money to be made. There was one Google; therefore, another one is lurking under a pile of beer cans in a dorm somewhere.
“One Tech Tip: Ready to Go Beyond Google? Here’s How to Use New Generative AI Search Sites” is a “real” news report which explains how to surf on the new ChatGPT-type smart systems. At the same time, the article makes it clear that the Google may have lost its baseball bat on the way to the big game. The irony is that Google has lots of bats and probably owns the baseball stadium, the beer concession, and the teams. Google also owns the information observatory near the sports arena.
The write up reports:
A recent study by German researchers suggests the quality of results from Google, Bing and DuckDuckGo is indeed declining. Google says its results are of significantly better quality than its rivals, citing measurements by third parties.
A classic he said, she said argument. Objective and balanced. But the point is that Google search is getting worse and worse. Bing does not matter because its percentage of the Web search market is low. DuckDuckGo is a metasearch system like Startpage. I don’t count these as primary search tools; they are, for the most part, utilities that search other people’s indexes.
What’s new with the ChatGPT-type systems? Here’s the answer:
Rather than typing in a string of keywords, AI queries should be conversational – for example, “Is Taylor Swift the most successful female musician?” or “Where are some good places to travel in Europe this summer?” Perplexity advises using “everyday, natural language.” Phind says it’s best to ask “full and detailed questions” that start with, say, “what is” or “how to.” If you’re not satisfied with an answer, some sites let you ask follow up questions to zero in on the information needed. Some give suggested or related questions. Microsoft‘s Copilot lets you choose three different chat styles: creative, balanced or precise.
Ah, NLP, or natural language processing, is the key, not typing keywords. I want to add that “not typing” means avoiding, when possible, Boolean operators which return results in which strings occur. Who wants that? Stupid, right?
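The “results in which strings occur” behavior of old-school Boolean retrieval can be sketched in a few lines. The documents and query below are hypothetical illustrations, not anything drawn from the article:

```python
# Minimal sketch of classic Boolean AND retrieval: a document matches only
# when every query term occurs in it as a literal string, with no notion
# of what the user actually meant. Documents and query are made up.

def boolean_and_search(documents, terms):
    """Return documents containing every term (case-insensitive substring match)."""
    lowered = [t.lower() for t in terms]
    return [d for d in documents if all(t in d.lower() for t in lowered)]

docs = [
    "Taylor Swift is a successful female musician.",
    "Travel destinations in Europe for the summer.",
    "Swift is a programming language from Apple.",
]

# A keyword query for "swift" matches both the musician and the language:
# the string occurs, so the document is returned, relevance be damned.
hits = boolean_and_search(docs, ["swift"])
print(len(hits))  # → 2
```

A conversational system, by contrast, would use the surrounding words (“most successful female musician”) to disambiguate, which is the behavior the quoted advice is steering users toward.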
There is a downside; for instance:
Some AI chatbots disclose the models that their algorithms have been trained on. Others provide few or no details. The best advice is to try more than one and compare the results, and always double-check sources.
What’s this have to do with Google? Let me highlight several points which make clear how Google remains lost in the retrieval wilderness, leading the boy scout and girl scout troops following it into the fog of unknowing:
- Google has never revealed what it indexes or when it indexes content. What’s in the “index” and sitting on Google’s servers is unknown except to some working at Google. In fact, the vast majority of Googlers know little about search. The focus is advertising, not information retrieval excellence.
- Since it was inspired by GoTo, Overture, and Yahoo to get into advertising, Google has been on a long, continuous march to monetize whatever can be shaped to produce clicks. How far from helpful is Google’s system? Wait until you see AI helping you find a pizza near you.
- Google’s bureaucratic methods are what I would call many small rubber boats generally trying to figure out how to get to Advertising Land while caught in a long, difficult storm. The little boats are tough to keep together. How many AI projects are enough? There are never enough.
Net net: The understanding of Web search has been distorted by Google’s observatory. One is looking at information in a Google facility, designed by Googlers, and maintained by Googlers who were not around when the observatory and associated plumbing was constructed. As a result, discussion of search in the context of smart software is distorted.
ChatGPT-type services provide a different entry point to information retrieval. The user still has to figure out what’s right and what’s wonky. No one wants to do that work. Write ups about “new” systems are little more than explanations of why most people will not be able to think about search differently. That observatory is big; it is familiar; and it is owned by Google just like the baseball team, the concessions, and the stadium.
Search means Google. Writing about search means Google. That’s not helpful or maybe it is. I don’t know.
Stephen E Arnold, February 20, 2024
The US Government Needs Its McKinsey Fix
February 20, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Governments don’t know how to spend their money wisely. Despite all its grandness, the United States has a deficit spending problem. According to Promarket, the US spends way too many tax dollars at McKinsey and Company: “Why The US Government Buys Overpriced Services From McKinsey.” McKinsey and Company is a consulting firm that provides organizations and the US government with advice on how to improve operations.
McKinsey is comparable to the IRS conducting a tax audit on the US government. The company is supposed to help the US implement social justice, diversity, and other political jargon into its business practices. The Clinton administration first purchased the overzealous services from McKinsey. Unfortunately, McKinsey doesn’t do much other than repackage mediocre advice with an expensive price tag. How much does McKinsey charge for services? It’s a lot:
“Such practices used to be called “honest graft.” And let’s be clear, McKinsey’s services are very expensive. Back in August, I noted that McKinsey’s competitor, the Boston Consulting Group, charges the government $33,063.75/week for the time of a recent college grad to work as a contractor. Not to be outdone, McKinsey’s pricing is much much higher, with one McKinsey “business analyst”—someone with an undergraduate degree and no experience—lent to the government priced out at $56,707/week, or $2,948,764/year.”
McKinsey can charge outrageous prices because the company uses unethical tactics, and the arrangement persists because the General Services Administration gets a 0.75% cut of what contractors spend, officially called the “Industrial Funding Fee” or IFF. The GSA receives a larger operating budget whenever it outsources to contractors.
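The quoted figures are easy to check. Here is a quick sketch that annualizes the McKinsey rate and estimates the GSA’s 0.75% cut; the 52-week year is my assumption, while the weekly rate and fee percentage come from the article:

```python
# Annualize the quoted McKinsey "business analyst" rate and estimate the
# GSA's Industrial Funding Fee (IFF) take. Weekly rates and the 0.75% fee
# are from the article; the 52-week year is an assumption.

MCKINSEY_WEEKLY = 56_707       # one business analyst, per week
BCG_WEEKLY = 33_063.75         # competitor rate quoted for comparison
IFF_RATE = 0.0075              # GSA's 0.75% Industrial Funding Fee

annual = MCKINSEY_WEEKLY * 52              # matches the article's $2,948,764
iff_cut = annual * IFF_RATE                # the GSA's slice of that spend

print(f"Annualized rate: ${annual:,}")
print(f"GSA IFF cut:     ${iff_cut:,.2f}")
```

The annualized figure reproduces the article’s $2,948,764 exactly, which suggests the author used the same simple weekly-times-52 arithmetic.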
Will changes be made for the next fiscal year? Unlikely.
Whitney Grace, February 20, 2024
Googzilla Takes Another OpenAI Sucker Punch
February 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
In January 2023, the savvy Googlers woke up to news that Microsoft and OpenAI had seized the initiative in smart software. One can argue the technical merits, but from a PR and marketing angle, the Softies and Sam AI-Man crept upon the World Economic Forum and clubbed the self-confident Googzilla in the cervical spine. The Google did not see that coming.
The somewhat quirky OpenAI has done it again. This time the blow was delivered with a kin geri or, more colloquially, a groin kick. How did Sam AI-Man execute this painful strike? Easy. The company released Sora, a text to video smart software function. “OpenAI’s Sora Generates Photorealistic Videos” reports:
Sora is a generative AI diffusion model. Sora can generate multiple characters, complex backgrounds and realistic-looking movements in videos up to a minute long. It can create multiple shots within one video, keeping the characters and visual style consistent, allowing Sora to be an effective storytelling tool.
Chatter indicates that OpenAI is not releasing a demonstration or a carefully crafted fakey example. Nope, unlike a certain large outfit with a very big bundle of cash, the OpenAI experts have skipped the demonstrations and gone directly to a release of the service to individuals who will probe the system for safety and good manners.
Could Googzilla be the company which OpenAI intends to drop to its knees? From my vantage point, heck yes. The outputs from the system are not absolutely Hollywood grade, but the examples are interesting and suggest that the Google, when it gets up off the floor, will have to do more.
Several observations:
- OpenAI is doing a good job with its marketing and PR. Google announces quantum supremacy; OpenAI provides a glimpse of a text to video function which will make game developers, Madison Avenue art history majors, and TikTok pay attention.
- Google is once again in react mode. I am not sure pumping up the number of tokens in Bard or Gemini or whatever is going to be enough to scrub the Sora and prevent the spread of this digital infection.
- Googzilla may be like the poor 1950s movie monster who was tamed not by a single blow but by many pesky attacks. I think this approach is called “death by a thousand cuts.”
Net net: OpenAI has pulled off a marketing coup for a second time. Googzilla is aging, and old often means slow. What is OpenAI’s next marketing play? A Bruce Lee “I am faster than you, big guy” move or a Ninja stealth move? Both methods seem to have broken through the GOOG’s defenses.
Stephen E Arnold, February 19, 2024
Generative AI and College Application Essays: College Presidents Cheat Too
February 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The first college application season since ChatGPT hit it big is in full swing. How are admissions departments coping with essays that may or may not have been written with AI? It depends on which college one asks. Forbes describes various policies in “Did You Use ChatGPT on your School Applications? These Words May Tip Off Admissions.” The outlet asked over 20 public and private schools about the issue. Many dared not reveal their practices; as a spokesperson for Emory put it, “it’s too soon for our admissions folks to offer any clear observations.” But the academic calendar will not wait for clarity, so schools must navigate these murky waters as best they can.
Reporters Rashi Shrivastava and Alexandra S. Levine describe the responses they did receive. From “zero tolerance” policies to a little wiggle room, approaches vary widely. Though most refused to reveal whether they use AI detection software, a few specified they do not. A wise choice at this early stage. See the article for details from school to school.
Shrivastava and Levine share a few words considered most suspicious: Tapestry. Beacon. Comprehensive curriculum. Esteemed faculty. Vibrant academic community. Gee, I think I used one or two of those on my college essays, and I wrote them before the World Wide Web even existed. On a typewriter. (Yes, I am ancient.) Will earnest, if unoriginal, students who never touched AI get caught up in the dragnets? At least one admissions official seems confident they can tell the difference. We learn:
“Ben Toll, the dean of undergraduate admissions at George Washington University, explained just how easy it is for admissions officers to sniff out AI-written applications. ‘When you’ve read thousands of essays over the years, AI-influenced essays stick out,’ Toll told Forbes. ‘They may not raise flags to the casual reader, but from the standpoint of an admissions application review, they are often ineffective and a missed opportunity by the student.’ In fact, GWU’s admissions staff trained this year on sample essays that included one penned with the assistance of ChatGPT, Toll said—and it took less than a minute for a committee member to spot it. The words were ‘thin, hollow, and flat,’ he said. ‘While the essay filled the page and responded to the prompt, it didn’t give the admissions team any information to help move the application towards an admit decision.’”
That may be the key point here—even if an admissions worker fails to catch an AI-generated essay, they may reject it for being just plain bad. Students would be wise to write their own essays rather than leave their fates in algorithmic hands. As Toll put it:
“By the time a student is filling out their application, most of the materials will have already been solidified. The applicants can’t change their grades. They can’t go back in time and change the activities they’ve been involved in. But the essay is the one place they remain in control until the minute they press submit on the application. I want students to understand how much we value getting to know them through their writing and how tools like generative AI end up stripping their voice from their admission application.”
Disqualified or underwhelming—either way, relying on AI to write one’s application essay could spell rejection. Best to buckle down and write it the old-fashioned way. (But one can skip the typewriter.)