Who Will Ultimately Control AI?
September 27, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
In the Marvel comics universe, there is a being on Earth’s moon called The Watcher. He observes humanity and is not supposed to interfere in its affairs. Marvel’s Watcher brings to mind the old adage, “Who watches the watcher?” While there is an endless amount of comic book lore to answer that question, the current controversy over AI regulation and who will watch AI has no such answer. Time takes up the conversation in “The Heated Debate Over Who Should Control Access To AI.”
In May 2023, the CEOs of three AI companies, OpenAI, Google DeepMind, and Anthropic, signed a letter stating that AI could be harmful to humanity and as dangerous as nuclear weapons or a pandemic. AI experts and leaders are calling for restrictions on specific AI models to prevent bad actors from using them to spread disinformation, launch cyber attacks, make bioweapons, and cause other harm.
Not all of the experts and leaders agree, including the folks at Meta. US Senators Josh Hawley and Richard Blumenthal, the Ranking Member and Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, don’t like that Meta is sharing powerful AI models.
“The disagreement between Meta and the Senators is just the beginning of a debate over who gets to control access to AI, the outcome of which will have wide-reaching implications. On one side, many prominent AI companies and members of the national security community, concerned by risks posed by powerful AI systems and possibly motivated by commercial incentives, are pushing for limits on who can build and access the most powerful AI systems. On the other, is an unlikely coalition of Meta, and many progressives, libertarians, and old-school liberals, who are fighting for what they say is an open, transparent approach to AI development.”
Researchers and academics from OpenAI, DeepMind, and Google published a paper titled “Frontier Model Regulation” with suggestions for how to control AI. Developing safety standards and giving regulators visibility are no-brainers. Other ideas, such as requiring AI developers to acquire a license to train and deploy powerful AI models, proved contentious. Licensing may make sense in the future, but it is a poor fit for today’s world.
Meta releases its AI models as open source, with paid licenses for the more robust versions. Meta’s leadership did say something idiotic:
“Meta’s leadership is also not convinced that powerful AI systems could pose existential risks. Mark Zuckerberg, co-founder and CEO of Meta, has said that he doesn’t understand the AI doomsday scenarios, and that those who drum up these scenarios are ‘pretty irresponsible.’ Yann LeCun, Turing Award winner and chief AI scientist at Meta, has said that fears over extreme AI risks are ‘preposterously stupid.’”
The remainder of the article delves into how regulations limit innovation, how surveillance would be Orwellian in nature, and how bad-actor countries wouldn’t follow the rules. It’s once again the same old arguments repackaged with an AI sticker.
Who will control AI? Gee, maybe the same outfits controlling information and software right this minute?
Whitney Grace, September 27, 2023
Getty and Its Licensed Smart Software Art
September 26, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid. (Yep, the dinobaby is back from France. Thanks to those who made the trip professionally and personally enjoyable.)
The illustration shows a very, very happy image-rights troll. The cloud of uncertainty over AI-generated images has passed. Now the rights software bots, controlled by cheerful copyright trolls, can scour the Web for unauthorized image use. Forget the humanoids. The action will come from tireless AI generators and equally robust bots designed to charge a fee for images created from zeros and ones. Yes!
A quite joyful copyright troll displays his killer moves. Thanks, MidJourney. The gradient descent continues, right into the legal eagles’ nests.
“Getty Made an AI Generator That Only Trained on Its Licensed Images” reports:
Generative AI by Getty Images (yes, it’s an unwieldy name) is trained only on the vast Getty Images library, including premium content, giving users full copyright indemnification. This means anyone using the tool and publishing the image it created commercially will be legally protected, promises Getty. Getty worked with Nvidia to use its Edify model, available on Nvidia’s generative AI model library Picasso.
This is exciting. Will the images include a tough-to-discern watermark? Will the images include a license plate, a social security number, or just a nifty string of harmless digits?
The article does reveal the money angle:
The company said any photos created with the tool will not be included in the Getty Images and iStock content libraries. Getty will pay creators if it uses their AI-generated image to train the current and future versions of the model. It will share revenues generated from the tool, “allocating both a pro rata share in respect of every file and a share based on traditional licensing revenue.”
Who will be happy? Getty, the trolls, or the designers who have a way to be more productive with a helping hand from the Getty robot? I think the world will be happier because monetization, smart software, and lawyers are a business model with legs… or claws.
Stephen E Arnold, September 26, 2023
YouTube and Those Kiddos. Greed or Weird Fascination?
September 26, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Google and its YouTube subsidiary are probably in trouble again because they are spying on children. Vox explores the question in “Is YouTube Tracking Your Kids Again?” Two reports find that YouTube continues to collect data on kids and send them targeted ads despite promises not to do so. If YouTube is collecting data on young viewers and sending them targeted ads, it would violate the Children’s Online Privacy Protection Act (COPPA) and Google’s consent decree with the FTC.
Google agreed to the consent decree with the FTC to stop collecting kids’ online activity and selling it to advertisers. To comply with the decree and COPPA, YouTube creators must declare whether their channels or individual videos are kid friendly. If a video is designated kid friendly, Google doesn’t collect data on its viewers. This applies only to regular YouTube, not YouTube Kids.
Fairplay and Adalytics researched YouTube’s data collection and released damning reports. Fairplay, a children’s online safety group, ran an ad campaign on YouTube and asked for it to target made-for-kids videos. The group discovered its ads played on kids-only videos, basically confirming that targeted ads are still being shown to kids. Adalytics found evidence supporting kid data collection too:
“The firm found trackers that Google uses specifically for advertising purposes and what appear to be targeted ads on “made for kids” videos. Clicking on those ads often took viewers to outside websites that definitely did collect data on them, even if Google didn’t. The report is careful to say that the advertising cookies might not be used for personalized advertising — only Google knows that — and so may still be compliant with the law. And Adalytics says the report is not definitively saying that Google violated COPPA: ‘The study is meant to be viewed as a highly preliminary observational analysis of publicly available information and empirical data.’”
Google denies the allegations and claims the information in the reports is skewed. YouTube states that ads on made-for-kids videos are contextual rather than targeted, meaning they are shown to all kids rather than individualized. If Google and YouTube are found in violation of the FTC decree and COPPA, Alphabet Inc. would pay a very expensive fine.
It is hard to define which services and products Google can appropriately offer kids. Google has a huge education initiative with everything from laptops to email services. Republicans and Democrats agree that it is important to protect kids online and hold Google and other companies liable. Will Google pay fines and not worry about the consequences? I have an idea. Let’s ask Meta’s new kid-oriented AI initiative. That sounds like a fine idea.
Whitney Grace, September 26, 2023
If It Looks Like a Library, It Must Be Bad
September 25, 2023
The Internet Archive is the best digital archive preserving the Internet’s past as well as old media, out-of-print books, and more. The Internet Archive (IA) has been the subject of various legal battles over copyright infringement, especially its project to scan and lend library books. Publishers Weekly details the results of the recent court battle in “Judgment Entered In Publishers, Internet Copyright Case.”
Judge John G. Koeltl issued a summary judgment decision that the Internet Archive did violate copyright and infringe on the holders’ rights. The IA and the plaintiffs reached a partial agreement about distributing digital copies of copyrighted material, but the details are not finalized. The IA plans to appeal the judge’s decision. A large contingent of record labels is also suing the IA for violating music copyrights.
The IA has a noble mission, but it should respect copyright holders. The DataHoarder subreddit has a swan song for the archive: “The Internet Archive Will Die - Any Serious Attempts At Archiving It?” User mikemikehindpart laments the IA’s possible demise and blames the IA’s leadership for the potential shutdown. His biggest concern is preserving the archive:
“I can’t really figure out any non-conspiratorial explanation as to why the IA people have not organized a grand archiving of the IA itself while there is still time. Is there any such initiative going on that one could join?”
User mikemikehindpart lambastes the IA leaders and claims they will go down as self-proclaimed martyrs while dutifully handing over their hard drives if authorities come knocking. This user wants to preserve the archive, especially defunct software, old Web sites, and other media not preserved anywhere else:
“[The] fear is that the courts will soon order the site to be suspended while the trial is ongoing, so as to not cause further harm to the rights holders. Like turning off a switch, poof.
Eventually the entire archive will be ordered destroyed, not just the books and music. And piracy of popular books and music will continue like nothing happened, but all those website snapshots, blogs and lost software will simply disappear, like so many Yahoo! groups did.”
The comments range from efforts to organize preservation of the IA, to unhelpful non sequiturs, to a few realistic posts suggesting the IA may continue. The realistic posts agree the IA could survive if it stops sharing the copyrighted material and reaches a consensus with its “enemies.”
There are also comments that point to a serious truth: no one else is documenting the Internet, especially free stuff. One poster suggested that the Library of Congress should partner with the IA. I see absolutely nothing wrong with that idea.
Whitney Grace, September 25, 2023
Recent Facebook Experiments Rely on Proprietary Meta Data
September 25, 2023
When a company holds proprietary data, researchers who want to study that data must work on its terms. That gives Meta the home court advantage in a series of recent studies, we learn from Science’s article, “Does Social Media Polarize Voters? Unprecedented Experiments on Facebook Users Reveal Surprises.” The 2020 Facebook and Instagram Election Study has produced four papers so far, with 12 more on the way. The large-scale experiments confirm Facebook’s algorithm pushes misinformation and reinforces filter bubbles, especially on the right. However, they seem to indicate less influence on users’ views and behavior than many expected. Hmm, why might that be? Writer Kai Kupferschmidt states:
“But the way the research was done, in partnership with Meta, is getting as much scrutiny as the results themselves. Meta collaborated with 17 outside scientists who were not paid by the company, were free to decide what analyses to run, and were given final say over the content of the research papers. But to protect the privacy of Facebook and Instagram users, the outside researchers were not allowed to handle the raw data. This is not how research on the potential dangers of social media should be conducted, says Joe Bak-Coleman, a social scientist at the Columbia School of Journalism.”
We agree, but when companies maintain a stranglehold on data, researchers’ hands are tied. Is it any wonder big tech balks at calls for transparency? The article also notes:
“Scientists studying social media may have to rely more on collaborations with companies like Meta in the future, says [participating researcher Deen] Freelon. Both Twitter and Reddit recently restricted researchers’ access to their application programming interfaces or APIs, he notes, which researchers could previously use to gather data. Similar collaborations have become more common in economics, political science, and other fields, says [participating researcher Brendan] Nyhan. ‘One of the most important frontiers of social science research is access to proprietary data of various sorts, which requires negotiating these one-off collaboration agreements,’ he says. That means dependence on someone to provide access and engage in good faith, and raises concerns about companies’ motivations, he acknowledges.”
See the article for more details on the experiments, their results so far, and their limitations. Social scientist Michael Wagner, who observed the study and wrote a commentary to accompany the papers’ publication, sees the project as a net good. However, he acknowledges, future research should not be based on this model, where the company being studied holds all the data cards. But what is the alternative?
Cynthia Murrell, September 25, 2023
KPIs: The Perfect Tool for Slacker Managers
September 22, 2023
Many businesses have adopted key performance indicators (KPIs) in an effort to minimize subjectivity in human resource management. Cognitive researcher and Promaton CTO Ágoston Török explores the limitations of this approach in his blog post, “How to Avoid KPI Psychosis in Your Organization?”
Török takes a moment to recall the human biases KPIs are meant to avoid: availability bias, recency bias, the halo/horn effects, overconfidence bias, anchoring bias, and the familiar confirmation bias. He writes:
“Enter KPIs as the objective truth. Free of subjectivity, perfect, right? Not so fast. In fact, often our data collection and measurement are also biased by us (e.g. algorithmic bias). And even if that is not the case, unfortunately, KPIs suffer from tunnel vision: they measure what is measurable, while not necessarily all aspects of the situation are. Albert Einstein put it brilliantly: ‘Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.’ This results in perverse motivation in many organizations, where people have to choose between doing their job well (broader reality) or getting promoted for meeting the KPIs (tunnel vision). And that’s exactly the KPI psychosis I described above.”
That does defeat the purpose. Not surprisingly, the solution is to augment KPI software with human judgment.
“KPIs should be used in combination with human intuition to enable optimal decision-making. So not just intuition or data, but a constant back and forth of making (i.e. intuition) and testing (i.e. data) hypotheses. … So you work on reaching your objective and while doing so you constantly check both what your KPI shows and also how much you can rely on it.”
That sounds like a lot of work. Can’t we just offload personnel decisions to AI and be done with it? Not yet, dear executives, not yet.
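For readers who like their tunnel vision quantified, here is a minimal Python sketch of the Goodhart-style trap Török describes. Everything in it is invented for illustration: the utility function, the effort budget, and the assumption that work splits neatly into countable and uncountable halves.

```python
# Toy Goodhart's-law demo: an employee splits 10 units of effort between
# measurable work (what the KPI counts) and unmeasurable work (mentoring,
# documentation, code health). Real value depends on both; the KPI sees
# only the first.

def kpi(measurable, unmeasurable):
    return measurable  # tunnel vision: only what is countable counts

def real_value(measurable, unmeasurable):
    # Invented utility with diminishing returns on each kind of work.
    return measurable ** 0.5 + unmeasurable ** 0.5

TOTAL = 10
splits = [(m, TOTAL - m) for m in range(TOTAL + 1)]

best_for_kpi = max(splits, key=lambda s: kpi(*s))
best_for_real = max(splits, key=lambda s: real_value(*s))

print("KPI chaser:", best_for_kpi,
      "-> real value %.2f" % real_value(*best_for_kpi))   # (10, 0) -> 3.16
print("Value maximizer:", best_for_real,
      "-> real value %.2f" % real_value(*best_for_real))  # (5, 5) -> 4.47
```

The KPI chaser pours all ten units into countable work and delivers roughly 30 percent less real value than the balanced split. That gap is KPI psychosis in miniature.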
Cynthia Murrell, September 22, 2023
Amazon Switches To AI Review Summaries
September 22, 2023
The online yard sale eBay offers an AI-generated description feature for sellers. Following in the same vein, Engadget reports that “Amazon Begins Rolling Out AI-Generated Review Summaries” for products, complete with clickable keywords. Amazon announced in June 2023 that it was testing an AI summary tool across a range of products. The company officially launched the tool in August, declaring that AI is at the heart of Amazon.
Amazon developed the AI summary tool so consumers can read buyers’ opinions without scrolling through pages of reviews. The summaries are described as a wrap-up of customer consensus, akin to film blurbs on Rotten Tomatoes. They contain clickable tags that showcase common words and consistent themes from reviews. Clicking on a tag takes consumers to the full reviews containing that information.
AI-generated review summaries raise another controversial topic: Amazon and fake reviews. Fake reviews litter the selling platform, much like the slew of counterfeit products Amazon, eBay, and other online marketplaces battle. While Amazon claims it takes a proactive stance to detect and delete fake reviews, it does not catch all of them. It is speculated that AI-generated reviews from ChatGPT or other chatbots are harder for Amazon to catch.
As for its own AI summary tool, Amazon plans to use it only on verified purchases and to deploy more AI models to detect fake reviews. Humans, with their more discerning organic brains, will be used for clarification. Amazon said about its new tool:
“‘We continue to invest significant resources to proactively stop fake reviews,’ Amazon Community Shopping Director Vaughn Schermerhorn said. ‘This includes machine learning models that analyze thousands of data points to detect risk, including relations to other accounts, sign-in activity, review history, and other indications of unusual behavior, as well as expert investigators that use sophisticated fraud-detection tools to analyze and prevent fake reviews from ever appearing in our store. The new AI-generated review highlights use only our trusted review corpus from verified purchases, ensuring that customers can easily understand the community’s opinions at a glance.’”
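The quote gestures at signal-based risk scoring. As a toy sketch only, here is what such scoring might look like in Python; every feature, weight, and threshold below is invented for illustration, and none of it reflects Amazon’s proprietary models.

```python
from dataclasses import dataclass

@dataclass
class ReviewSignals:
    # Hypothetical data points of the kind the statement mentions.
    verified_purchase: bool
    account_age_days: int
    reviews_last_24h: int         # burst reviewing
    linked_flagged_accounts: int  # "relations to other accounts"

def risk_score(s: ReviewSignals) -> float:
    """Crude linear risk score in [0, 1]; higher means more suspicious."""
    score = 0.0
    if not s.verified_purchase:
        score += 0.35
    if s.account_age_days < 30:
        score += 0.25
    score += min(s.reviews_last_24h, 10) * 0.03
    score += min(s.linked_flagged_accounts, 5) * 0.08
    return min(score, 1.0)

suspect = ReviewSignals(verified_purchase=False, account_age_days=3,
                        reviews_last_24h=8, linked_flagged_accounts=2)
if risk_score(suspect) > 0.7:  # invented escalation threshold
    print("route to a human investigator")  # the "organic brains" step
```

A production system would presumably learn such weights from labeled data rather than hard-coding them, which is part of why the cat-and-mouse with machine-generated reviews keeps escalating.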
AI tools are trained on language models that contain known qualitative errors, and the same AI tools are then used to teach more AI, and so on. While we do not know what Amazon is using to train its AI summary tool, we would not be surprised if the fake reviews are generated with models similar to Amazon’s. It will come down to Amazon AI vs. counterfeit AI. Who will win?
Whitney Grace, September 22, 2023
Kill Off the Dinobabies and Get Younger, Bean Counter-Pleasing Workers. Sound Familiar?
September 21, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Google, Meta, Amazon Hiring low-Paid H1B Workers after US Layoffs: Report.” Is it accurate? Who knows? In the midst of a writers’ strike in Hollywood, I thought immediately about endless sequels to films like “Batman 3: Deleting Robin” and “Halloween 8: The Night of the Dinobaby Purge.”
The write-up reports a management method similar to the one the high school science club implemented when told its field trip to the morgue had been turned down. The school’s boiler suffered a mysterious malfunction, and school was dismissed for a day. Heh heh heh.
I noted this passage:
Even as global tech giants are carrying out mass layoffs, several top Silicon Valley companies are reportedly looking to hire lower-paid tech workers from foreign countries. Google, Meta, Amazon, Microsoft, Zoom, Salesforce and Palantir have applied for thousands of H1B worker visas this year…
I heard a rumor that IBM used a similar technique. Would Big Blue replace older, highly paid employees with GenX professionals not born in the US? Of course not! The term “dinobabies” was a product of spontaneous innovation, not from a personnel professional located in a suburb of New York City. Happy bean counters indeed. Saving money with good-enough work. I love the phrase “minimum viable product” applied to “minimally viable” work environments.
There are so many ways to allow people to find their futures elsewhere. Shelf stockers are in short supply, I hear.
Stephen E Arnold, September 21, 2023
Just TikToking Along, Folks
September 21, 2023
Beleaguered in the US, its largest market, TikTok is ready to embrace new options in its Southeast Asian advance. CNBC reports, “TikTok Shop Strikes ‘Buy Now, Pay Later’ Partnership in Malaysia As Part of E-Commerce Push.” Writer Cheila Chiang reports:
“The partnership comes as TikTok looks to markets outside of the U.S. for growth. While the U.S. is the company’s largest market, TikTok faces headwinds there after Montana became the first state to ban the app. The app has also been banned in India. In recent months, TikTok Shop has been aggressively expanding into e-commerce in Southeast Asia, competing against existing players like Sea’s Shopee and Alibaba’s Lazada. TikTok’s CEO previously said the company will pour ‘billions of dollars’ into Southeast Asia over the next few years. As of April, TikTok said it has more than 325 million monthly users in Southeast Asia. In June, the company said it would invest $12.2 million to help over 120,000 small and medium-sized businesses sell online. The investment consists of cash grants, digital skills training and advertising credits for these businesses.”
What a great idea for the teenagers who make up the largest cohort of TikTok users. Do they fully grasp the buy now, pay later concept and its long-term effects? Sure, no problem. Kids love to work at part-time jobs, right? As long as major corporations get to expand as desired, that is apparently all that matters.
Cynthia Murrell, September 21, 2023
Those 78s Will Sell Big Again?
September 21, 2023
The Internet Archive (IA) is a wonderful repository of digital information, but it is a controversial organization when it comes to respecting copyright law. After battling a landmark case against book publishers, the IA is now facing another lawsuit, as reported in the post “Internet Archive Responds To Recording Industry Lawsuit Targeting Obsolete Media.” Sony, Universal Music Group, and other large record labels are suing the IA and others over the Great 78 Project.
The Great 78 Project’s goal is to preserve, research, discover, and share 78-rpm records that are 70 to 120 years old. Librarians, archivists, and sound engineers combined their resources to preserve the archaic analog medium and provide free public access. The preserved recordings are used for research and teaching at museums, universities, and more:
“Statement from Brewster Kahle, digital librarian of the Internet Archive: ‘When people want to listen to music they go to Spotify. When people want to study 78rpm sound recordings as they were originally created, they go to libraries like the Internet Archive. Both are needed. There shouldn’t be conflict here.’”
Preserving an old yet appreciated medium is worthwhile and a labor of love. The IA’s blog post fails to explain the details behind the lawsuit or to defend the Great 78 Project beyond restating its purpose. The IA should explain that while the record companies are concerned about copyrighted material, many of the recordings are now in the public domain. The Great 78 Project should continue, but the record companies should work with the preservation team instead of fighting it in court.
Whitney Grace, September 21, 2023