Shocker! Students Use AI and Engage in Sex, Drugs, and Rock and Roll

March 5, 2025

The work of a real, live dinobaby. Sorry, no smart software involved. Whuff, whuff. That’s the sound of my swishing dino tail. Whuff.

I read “Surge in UK University Students Using AI to Complete Work.” The write up says:

The number of UK undergraduate students using artificial intelligence to help them complete their studies has surged over the past 12 months, raising questions about how universities assess their work. More than nine out of 10 students are now using AI in some form, compared with two-thirds a year ago…

I understand the need to create “real” news; however, the information did not surprise me. But the weird orange newspaper tosses in this observation:

Experts warned that the sheer speed of take-up of AI among undergraduates required universities to rapidly develop policies to give students clarity on acceptable uses of the technology.

As a purely practical matter, information has crossed my desk about professors cranking out papers for peer review or for consumers of the ever-popular gray literature: papers that are not reproducible, contain data which have been shaped like a kindergartener’s clay animal, and include links to pals who engage in citation boosting.

Plus, students who use Microsoft have a tough time escaping the often inept outputs of the Redmond crowd. A Google user is no longer certain whether information was created by a semi-reputable human or a cheese-crazed Google system. Emails write themselves. Message systems suggest emojis. Agentic AIs take care of mum’s and pop’s questions about life at the uni.

The topper for me was the inclusion in the cited article of this statement:

it was almost unheard of to see such rapid changes in student behavior…

Did this fellow miss drinking, drugs, staying up late, and sex on campus? How fast did those innovations take to sweep through the student body?

I liked the note of optimism at the end of the write up. Check this:

Janice Kay, a director of a higher education consulting firm, said: “There is little evidence here that AI tools are being misused to cheat and play the system. [But] there are quite a lot of signs that will pose serious challenges for learners, teachers and institutions and these will need to be addressed as higher education transforms.”

That’s encouraging. The academic research crowd does one thing, and I am to assume that students will do everything the old-fashioned way. When you figure out how to remove smart software from online systems and local installations of smart helpers, let me know. Fix up AI usage, then turn one’s attention to changing student behavior in the drinking, sex, and drug departments too.

Good luck.

Stephen E Arnold, March 5, 2025

Mathematics Is Going to Be Quite Effective, Citizen

March 5, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

The future of AI is becoming clearer: Get enough people doing something, gather data, and predict what humans will do. What if an individual does not want to go along with the behavior of the aggregate? The answer is obvious: “Too bad.”

How do I know that a handful of organizations will use their AI in this manner? I read “Spanish Running of the Bulls’ Festival Reveals Crowd Movements Can Be Predictable, Above a Certain Density.” If the data in the report are close to the pin, AI will be used to predict, and then those predictions can be shaped by weaponized information flows. I got a glimpse of how this number stuff works when I worked at Halliburton Nuclear with Dr. Jim Terwilliger. He and a fellow named Julian Steyn were only too happy to explain that the mathematics used for figuring out certain nuclear processes would work for other applications as well. I won’t bore you with comments about the Monte Carlo method or the even older Bayesian statistics procedures. But if it made certain nuclear functions manageable, the approach was mostly okay.

Let’s look at what the Phys.org write up says about bovines:

Denis Bartolo and colleagues tracked the crowds of an estimated 5,000 people over four instances of the San Fermín festival in Pamplona, Spain, using cameras placed in two observation spots in the plaza, which is 50 meters long and 20 meters wide. Through their footage and a mathematical model—where people are so packed that crowds can be treated as a continuum, like a fluid—the authors found that the density of the crowds changed from two people per square meter in the hour before the festival began to six people per square meter during the event. They also found that the crowds could reach a maximum density of 9 people per square meter. When this upper threshold density was met, the authors observed pockets of several hundred people spontaneously behaving like one fluid that oscillated in a predictable time interval of 18 seconds with no external stimuli (such as pushing).
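For the arithmetic-minded, here is a minimal sketch of the density math, assuming only the figures quoted above (the 50 by 20 meter plaza, the density thresholds, the 18-second oscillation). The sample head counts and the function are mine for illustration, not the researchers’ model.

```python
# Illustrative arithmetic only, based on the figures quoted from the Phys.org
# write up. The sample head counts below are hypothetical.

PLAZA_AREA_M2 = 50 * 20          # plaza is roughly 50 m x 20 m = 1,000 m^2
DENSE_FLOW_THRESHOLD = 9.0       # people per m^2 at which fluid-like behavior appears
OSCILLATION_PERIOD_S = 18        # reported period of the spontaneous oscillation

def crowd_density(head_count: int, area_m2: float = PLAZA_AREA_M2) -> float:
    """People per square meter for a given head count."""
    return head_count / area_m2

for head_count in (2_000, 6_000, 9_000):   # hypothetical counts before and during the event
    d = crowd_density(head_count)
    regime = f"fluid-like, ~{OSCILLATION_PERIOD_S} s oscillations" if d >= DENSE_FLOW_THRESHOLD else "sub-critical"
    print(f"{head_count:>5} people -> {d:.1f} per m^2 ({regime})")
```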

I think that’s an important point. But here’s the comment that presages how AI data will be used to control human behavior. Remember. This is emergent behavior similar to the hoo-hah cranked out by the Santa Fe Institute crowd:

The authors note that these findings could offer insights into how to anticipate the behavior of large crowds in confined spaces.

Once probabilities allow one to “anticipate”, it follows that flows of information can be used to take or cause action. Personally I am going to make a note in my calendar and check in one year to see how my observation turns out. In the meantime, I will try to keep an eye on the Sundars, Zucks, and their ilk for signals about their actions and their intent, which is definitely concerned with individuals like me. Right?

Stephen E Arnold, March 5, 2025

We Have to Spread More Google Cheese

March 4, 2025

A Super Bowl ad is a big deal for companies that shell out for those pricey spots. So it is a big embarrassment when one goes awry. The BBC reports, “Google Remakes Super Bowl Ad After AI Cheese Gaffe.” Google was trying to show how smart Gemini is. Instead, the ad went out with a stupid mistake. Writers Graham Fraser and Tom Singleton tell us:

“The commercial – which was supposed to showcase Gemini’s abilities – was created to be broadcast during the Super Bowl. It showed the tool helping a cheesemonger in Wisconsin write a product description by informing him Gouda accounts for ’50 to 60 percent of global cheese consumption.’ However, a blogger pointed out on X that the stat was ‘unequivocally false’ as the Dutch cheese was nowhere near that popular.”

In fact, cheddar and mozzarella vie for the world’s favorite cheese. Gouda is not even a contender. Though the company did remake the ad, one top Googler at first defended Gemini with some dubious logic. We learn:

“Replying to him, Google executive Jerry Dischler insisted this was not a ‘hallucination’ – where AI systems invent untrue information – blaming the websites Gemini had scraped the information from instead. ‘Gemini is grounded in the Web – and users can always check the results and references,’ he wrote. ‘In this case, multiple sites across the web include the 50-60% stat.’”

Sure, users can double check an AI’s work. But apparently not even Google itself can be bothered. Was the company so overconfident it did not use a human copyeditor? Or do those not exist anymore? Wrong information is wrong information, whether technically a hallucination or not. Spitting out data from unreliable sources is just as bad as making stuff up. Google still has not perfected the wildly imperfect Gemini, it seems.

Cynthia Murrell, February 28, 2025

Big Thoughts On How AI Will Affect The Job Market

March 4, 2025

Every time technology advances, humans fear they won’t be able to make an income. While some jobs disappeared, others emerged, and humans adapted. We’ll continue to adapt as AI becomes more integral to society. How will we handle the changes?

Anthropic, a big player in the AI field, launched the Anthropic Index to understand AI’s effects on labor markets and the economy. Anthropic claims it is gathering first-of-its-kind data from anonymized Claude.ai conversations. This data demonstrates how AI is incorporated into the economy. The organization is also building an open source dataset for researchers to use and build on its findings. Anthropic surmises that this data will help develop policy on employment and productivity.

Anthropic reported on their findings in their first paper:

• “Today, usage is concentrated in software development and technical writing tasks. Over one-third of occupations (roughly 36%) see AI use in at least a quarter of their associated tasks, while approximately 4% of occupations use it across three-quarters of their associated tasks.

• AI use leans more toward augmentation (57%), where AI collaborates with and enhances human capabilities, compared to automation (43%), where AI directly performs tasks.

• AI use is more prevalent for tasks associated with mid-to-high wage occupations like computer programmers and data scientists, but is lower for both the lowest- and highest-paid roles. This likely reflects both the limits of current AI capabilities, as well as practical barriers to using the technology.”

The Register put the Anthropic report in layman’s terms in the article, “Only 4 Percent Of Jobs Rely Heavily On AI, With Peak Use In Mid-Wage Roles.” It reports that only 4% of jobs rely heavily on AI, using it for at least 75% of their tasks. Overall, only 36% of jobs use AI for at least 25% of their tasks. Most of these jobs are in software engineering, media, and educational/library fields. Physical jobs use AI less. Anthropic also found that 57% of AI use augments human tasks while 43% automates them.
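If the percentages seem abstract, here is a back-of-the-envelope sketch of how such shares are computed from per-occupation data of the kind Anthropic describes. The occupation list and usage fractions below are invented for illustration; only the 25% and 75% cutoffs come from the report as summarized above.

```python
# Toy recomputation of the headline shares. The data are made up.

task_ai_share = {                 # occupation -> fraction of its tasks showing AI use (hypothetical)
    "software developer": 0.80,
    "technical writer": 0.60,
    "data scientist": 0.55,
    "librarian": 0.30,
    "line cook": 0.05,
    "roofer": 0.00,
}

n = len(task_ai_share)
at_least_quarter = sum(1 for s in task_ai_share.values() if s >= 0.25) / n
at_least_three_quarters = sum(1 for s in task_ai_share.values() if s >= 0.75) / n

print(f"AI used in >=25% of tasks: {at_least_quarter:.0%} of occupations")
print(f"AI used in >=75% of tasks: {at_least_three_quarters:.0%} of occupations")
```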

These numbers make sense based on AI’s advancements and limitations. It’s also common sense that mid-tier wage roles will be affected and not physical or highly skilled labor. The top tier will surf on money; the water molecules are not so lucky.

Whitney Grace, March 4, 2025

AI Summaries Get News Wrong

February 28, 2025

With big news stories emerging at a frantic pace, one might turn to AI to consolidate the key points. If so, one might become woefully ill-informed. “AI Chatbots Unable to Accurately Summarise News, BBC Finds.” The BBC tested the biggest AIs on content from its own site: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI all sat for the exam. None of them passed, though ChatGPT and Perplexity were less bad than Copilot and Gemini. Tech reporter Imran Rahman-Jones tells us:

“In the study, the BBC asked ChatGPT, Copilot, Gemini and Perplexity to summarise 100 news stories and rated each answer. It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants. It found 51% of all AI answers to questions about the news were judged to have significant issues of some form. Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.”

But it was not just about mixing up, or inventing, facts. The chatbots also struggled with the concept of context and the distinction between facts and opinions. We learn:

“The report said that as well as containing factual inaccuracies, the chatbots ‘struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context’.”

To illustrate the findings, the article gives us a few examples:

  • “Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking.
  • ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left.
  • Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed ‘restraint’ and described Israel’s actions as ‘aggressive’.”

So, dear readers, we suggest you take the time to read the news for yourselves. Or, at the very least, get your recaps from another human.

Cynthia Murrell, February 28, 2025

Yikes! Existing AI is Fundamentally Flawed

February 27, 2025

AI applications are barreling full steam ahead into all corners of our lives. Yet there are serious concerns about the very structure of how LLMs work. BCS, the Chartered Institute for IT, asks, "Does Current AI Represent a Dead End?" Cybersecurity professor Eerke Boiten writes:

"From the perspective of software engineering, current AI systems are unmanageable, and as a consequence their use in serious contexts is irresponsible. For foundational reasons (rather than any temporary technology deficit), the tools we have to manage complexity and scale are just not applicable. By ‘software engineering’, I mean developing software to align with the principle that impactful software systems need to be trustworthy, which implies their development needs to be managed, transparent and accountable … When I last gave talks about AI ethics, around 2018, my sense was that AI development was taking place alongside the abandonment of responsibility in two dimensions. Firstly, and following on from what was already happening in ‘big data’, the world stopped caring about where AI got its data — fitting in nicely with ‘surveillance capitalism. And secondly, contrary to what professional organisations like BCS and ACM had been preaching for years, the outcomes of AI algorithms were no longer viewed as the responsibility of their designers — or anybody, really."

Yes, that is the reality we are careening into. But for big tech, that may be a feature, not a bug. Those firms clearly want today’s AI to be THE one true AI. A high profit to responsibility ratio suits them just fine.

Boiten describes, in a nutshell, how neural networks function. He emphasizes the disturbing lack of human guidance. And understanding. Since engineers cannot know just how an algorithm comes to its conclusions, it is impossible to ensure they are operating to specifications. These problems cannot be resolved with hard work and insights; they are baked in. See the write-up for more details.

If engineers are willing to progress beyond today’s LLMs, Boiten suggests, they could develop something actually reliable. It could even be built on existing AI tech, so all that work (and funding) need not go out the window. They just have to look past the dollar signs in their eyes and press ahead to a safer and more reliable product. The post warns:

"In my mind, all this puts even state-of-the-art current AI systems in a position where professional responsibility dictates the avoidance of them in any serious application. When all its techniques are based on testing, AI safety is an intellectually dishonest enterprise."

Now all we need is for big tech to do the right thing.

Cynthia Murrell, February 27, 2025

A Handy Resource: 100 AI Tools in 10 Categories

February 27, 2025

We hear a lot about the most prominent AI tools like ChatGPT, Dall-E, and Grammarly. But there are many more options designed for a wide range of tasks. Inspiration blogger Ayo-Ibidapo has rounded up "100 AI Tools for Every Need: The Ultimate List." He succinctly introduces his list by observing:

"AI is revolutionizing industries, making tasks easier, faster, and more efficient. Whether you need AI for writing, design, marketing, coding, or personal productivity, there’s a tool for you. Here’s a list of 100 AI tools categorized by their purpose."

The 10 categories include those above and more, including my favorite, "Miscellaneous and Fun." As a life-long gamer, I am drawn to AI Dungeon. I am not so sure about the face-swapping tool, Reface AI. Seems a bit creepy. I am curious whether any of the investing tools, like Alpaca, Kavout, or Trade Ideas could actually boost one’s portfolio. And I am pleased to see the esteemed Wolfram Alpha made the list in the education and research section. As for the ten entries under healthcare and wellness, I wonder: are we resigned to sharing our most intimate details with bots? Ginger AI, for mental health support, sounds non-threatening, but are there any data-grubbing details buried in its terms of service agreement?

See the post for all 100 tools. If that is not enough, check out the discussion at Battle Station, "Uncover 30,000+ AI Apps Using AITrendyTools." There’s an idea: what better way to pick an AI tool than with an AI tool?

Cynthia Murrell, February 27, 2025

Meta and Torrents: True, False, or Rationalization?

February 26, 2025

AIs gobble datasets for training. It is also a fact that many LLMs and datasets contain biased information, are incomplete, or plain stink. One ethical but cumbersome way to train algorithms would be to notify people that their data, creative content, or other information will be used to train AI. Offering to pay for the right to use the data would be a useful step, some argue.

Will this happen? Obviously not.

Why?

Because it’s sometimes easier to take instead of asking. According to Tom’s Hardware, “Meta Staff Torrented Nearly 82TB Of Pirated Books For AI Training - Court Records Reveal Copyright Violations.” The article explains that Meta pirated 81.7 TB of books from the shadow libraries Anna’s Archive, Z-Library, and LibGen. These books were then used to train AI models. Meta is now facing a class action lawsuit over its use of content from the shadow libraries.

The allegations arise from Meta employees’ written communications. Some of these messages provide insight into employees’ concern about tapping pirated materials. The employees were getting frown lines, but then some staffers’ views rotated when they concluded smart software helped people access information.

Here’s a passage from the cited article I found interesting:

“Then, in January 2023, Mark Zuckerberg himself attended a meeting where he said, “We need to move this stuff forward… we need to find a way to unblock all this.” Some three months later, a Meta employee sent a message to another one saying they were concerned about Meta IP addresses being used “to load through pirate content.” They also added, “torrenting from a corporate laptop doesn’t feel right,” followed by laughing out loud emoji. Aside from those messages, documents also revealed that the company took steps so that its infrastructure wasn’t used in these downloading and seeding operations so that the activity wouldn’t be traced back to Meta. The court documents say that this constitutes evidence of Meta’s unlawful activity, which seems like it’s taking deliberate steps to circumvent copyright laws.”

If true, the approach smacks of that suave Silicon Valley style. If false, my faith in a yacht owner with gold chains might be restored.

Whitney Grace, February 26, 2025

AI Research Tool from Perplexity Is Priced to Undercut the Competition

February 26, 2025

Are prices for AI-generated research too darn high? One firm thinks so. In a Temu-type bid to take over the market, reports VentureBeat, "Perplexity Just Made AI Research Crazy Cheap—What that Means for the Industry." CEO Aravind Srinivas credits open source software for making the move possible, opining that "knowledge should be universally accessible." Knowledge, yes. AI research? We are not so sure. Nevertheless, here we are. The write-up describes the difference in pricing:

"While Anthropic and OpenAI charge thousands monthly for their services, Perplexity offers five free queries daily to all users. Pro subscribers pay $20 monthly for 500 daily queries and faster processing — a price point that could force larger AI companies to explain why their services cost up to 100 times more."

Not only is Perplexity’s Deep Research cheaper than the competition, crows the post, its accuracy rivals theirs. We are told:

“[Deep Research] scored 93.9% accuracy on the SimpleQA benchmark and reached 20.5% on Humanity’s Last Exam, outperforming Google’s Gemini Thinking and other leading models. OpenAI’s Deep Research still leads with 26.6% on the same exam, but OpenAI charges $200 per month for that service. Perplexity’s ability to deliver near-enterprise level performance at consumer prices raises important questions about the AI industry’s pricing structure.”

Well, okay. Not to stray too far from the point, but is a 20.5% or a 26.6% on Humanity’s Last Exam really something to brag about? Last we checked, those were failing grades. By far. Isn’t it a bit too soon to be outsourcing research to any LLM? But I digress.

We are told the low, low cost Deep Research is bringing AI to the micro-budget masses. And, soon, to the Windows-less—Perplexity is working on versions for iOS, Android, and Mac. Will this spell disaster for the competition?

Cynthia Murrell, February 26, 2025

Researchers Raise Deepseek Security Concerns

February 25, 2025

What a shock. It seems there are some privacy concerns around Deepseek. We learn from the Boston Herald, “Researchers Link Deepseek’s Blockbuster Chatbot to Chinese Telecom Banned from Doing Business in US.” Former Wall Street Journal reporter and now AP professional Byron Tau writes:

“The website of the Chinese artificial intelligence company Deepseek, whose chatbot became the most downloaded app in the United States, has computer code that could send some user login information to a Chinese state-owned telecommunications company that has been barred from operating in the United States, security researchers say. The web login page of Deepseek’s chatbot contains heavily obfuscated computer script that when deciphered shows connections to computer infrastructure owned by China Mobile, a state-owned telecommunications company.”

If this is giving you déjà vu, dear reader, you are not alone. This scenario seems much like the uproar around TikTok and its Chinese parent company ByteDance. But it is actually worse. ByteDance’s direct connection to the Chinese government is, as of yet, merely hypothetical. China Mobile, on the other hand, is known to have direct ties to the Chinese military. We learn:

“The U.S. Federal Communications Commission unanimously denied China Mobile authority to operate in the United States in 2019, citing ‘substantial’ national security concerns about links between the company and the Chinese state. In 2021, the Biden administration also issued sanctions limiting the ability of Americans to invest in China Mobile after the Pentagon linked it to the Chinese military.”

It was Canadian cybersecurity firm Feroot Security that discovered the code. The AP then had the findings verified by two academic cybersecurity experts. Might similar code be found within TikTok? Possibly. But, as the article notes, the information users feed into Deepseek is a bit different from the data TikTok collects:

“Users are increasingly putting sensitive data into generative AI systems — everything from confidential business information to highly personal details about themselves. People are using generative AI systems for spell-checking, research and even highly personal queries and conversations. The data security risks of such technology are magnified when the platform is owned by a geopolitical adversary and could represent an intelligence goldmine for a country, experts warn.”

Interesting. But what about CapCut, the ByteDance video thing?

Cynthia Murrell, February 25, 2025
