US Science Conferences: Will They Become an Endangered Species?

June 26, 2025

Due to steep federal budget cuts and fears of border issues, the United States may be experiencing a brain drain. Some smart people (aka people tech bros like to hire) are leaving the country. Leadership at some high-profile outfits is saying, “Don’t let the door hit you on the way out.” Others get multi-million-dollar pay packets to remain in America.

Nature.com explains more in “Scientific Conferences Are Leaving The US Amid Border Fears.” Many scientific and academic conferences were slated to occur in the US, but they have since been canceled, postponed, or moved to venues in other countries. Organizers say that Trump’s immigration and travel policies are discouraging foreign nerds from visiting the US. Some organizers have rescheduled their conferences in Canada.

Conferences are important venues for certain types of professionals to network, exchange ideas, and learn about the allegedly new developments in their fields. These conferences matter to their intellectual communities. Nature says:

“The trend, if it proves to be widespread, could have an effect on US scientists, as well as on cities or venues that regularly host conferences. ‘Conferences are an amazing barometer of international activity,’ says Jessica Reinisch, a historian who studies international conferences at Birkbeck University of London. ‘It’s almost like an external measure of just how engaged in the international world practitioners of science are.’ ‘What is happening now is a reverse moment,’ she adds. ‘It’s a closing down of borders, closing of spaces … a moment of deglobalization.’”

The brain drain trope and the buzzword “deglobalization” may point to a comparatively small change with longer-term effects. At the last two specialist conferences I attended, I encountered zero attendees or speakers from another country. In my 60-year work career, this was a first at conferences that issued a call for papers and were publicized via news releases.

Is this a loss? Not for me. I am a dinobaby. For those younger than I, my hunch is that a number of people will be learning about the truism “If ignorance is bliss, just say, ‘Hello, happy.’”

Whitney Grace, June 26, 2025

AI Can Be a Critic Unless Biases Are Hard Wired

June 26, 2025

The Internet has made it harder to find certain music, films, and art. It was supposed to be quite the opposite, and it was for a time. But social media and its algorithms have made a mess of things. So asserts the blogger at Tadaima in “If Nothing Is Curated, How Do We Find Things?” The write up reports:

“As convenient as social media is, it scatters the information like bread being fed to ducks. You then have to hunt around for the info or hope the magical algorithm gods read your mind and guide the information to you. I always felt like social media creates an illusion of convenience. Think of how much time it takes to stay on top of things. To stay on top of music or film. Think of how much time it takes these days, how much hunting you have to do. Although technology has made information vast and reachable, it’s also turned the entire internet into a sludge pile.”

Slogging through sludge does take the fun out of discovery. The author fondly recalls the days when a few hours a week checking out MTV and Ebert and Roeper, flipping through magazines, and listening to the radio was enough to keep them on top of pop culture. For a while, curation websites deftly took over that function. Now, though, those have been replaced by social-media algorithms that serve to rake in ad revenue, not to share tunes and movies that feed the soul. The write up observes:

“Criticism is dead (with Fantano being the one exception) and Gen Alpha doesn’t know how to find music through anything but TikTok. Relying on algorithms puts way too much power in technology’s hands. And algorithms can only predict content that you’ve seen before. It’ll never surprise you with something different. It keeps you in a little bubble. Oh, you like shoegaze? Well, that’s all the algorithm is going to give you until you intentionally start listening to something else.”

Yep. So the question remains: How do we find things? Big tech would tell us to let AI do it, of course, but that misses the point. The post’s writer has settled for a somewhat haphazard, unsatisfying method of lists and notes. They sadly posit this state of affairs might be the “new normal.” This type of findability “normal” may be very bad in some ways.
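The quoted point about algorithms only predicting what you have already seen is easy to demonstrate in miniature. Here is a toy sketch (the catalog, the two-dimensional “taste” vectors, and the track names are all invented for illustration) of a similarity-based recommender that can never leave the bubble:

```python
# Toy sketch of why similarity-only recommendation builds a bubble.
# The catalog and two-dimensional "taste" vectors are invented.
import numpy as np

catalog = {
    "shoegaze_a": np.array([1.0, 0.0]),
    "shoegaze_b": np.array([0.9, 0.1]),
    "jazz_a": np.array([0.0, 1.0]),
}

def recommend(history):
    # Score unheard tracks by similarity to the average of past listens.
    taste = np.mean([catalog[t] for t in history], axis=0)
    candidates = [t for t in catalog if t not in history]
    return max(candidates, key=lambda t: float(catalog[t] @ taste))

print(recommend(["shoegaze_a"]))  # -> "shoegaze_b"; jazz_a never surfaces
```

Feed it a shoegaze history and the scoring rule surfaces more shoegaze every time; nothing in it can ever recommend jazz_a unprompted. Real recommenders are far more elaborate, but the bubble mechanism is the same dot product.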

Cynthia Murrell, June 26, 2025

A Business Opportunity for Some Failed VCs?

June 26, 2025

An opinion essay written by a dinobaby who did not rely on smart software.

Do you want to open a T-shirt and baseball cap shop with snappy quotes? If the answer is “Yes,” I have a suggestion for you. Tucked into “Artificial Intelligence Is Not a Miracle Cure: Nobel Laureate Raises Questions about AI-Generated Image of Black Hole Spinning at the Heart of Our Galaxy” is this gem of a quotation:

“But artificial intelligence is not a miracle cure.”

Reinhard Genzel, “an astrophysicist at the Max Planck Institute for Extraterrestrial Physics,” offered the observation when smart software happily generated images of a black hole. These are mysterious “things” which industrious wizards find amidst the numbers spewed by “telescopes.” Astrophysicists are discussing in an academic way exactly what the properties of a black hole are. One wing of the community has suggested that our universe exists within a black hole. Other wings offer equally interesting observations about these phenomena.

The write up explains:

an international team of scientists has attempted to harness the power of AI to glean more information about Sagittarius A* from data collected by the Event Horizon Telescope (EHT). Unlike some telescopes, the EHT doesn’t reside in a single location. Rather, it is composed of several linked instruments scattered across the globe that work in tandem. The EHT uses long electromagnetic waves — up to a millimeter in length — to measure the radius of the photons surrounding a black hole. However, this technique, known as very long baseline interferometry, is very susceptible to interference from water vapor in Earth’s atmosphere. This means it can be tough for researchers to make sense of the information the instruments collect.

The fix is to feed the data into a neural network and let the smart software solve the problem. It did, and generated the somewhat tough-to-parse images in the write up. To a dinobaby, one black hole image looks like another.
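For readers who wonder what “feed the data into a neural network” amounts to, here is a minimal sketch. It is emphatically not the EHT team’s pipeline; the toy linear “telescope,” the network size, and the training loop are invented stand-ins. The shape of the approach is the point: train a network on simulated (image, noisy measurement) pairs, then apply it to a noisy observation.

```python
# Hypothetical sketch only: recover an "image" from noisy, incomplete
# measurements with a small neural network. This is not the EHT pipeline;
# the forward model and the network are toys.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64                      # an 8x8 "sky" flattened to a vector
true_image = rng.random(n_pixels)

# Toy "telescope": a fixed random linear map producing 32 noisy
# measurements, a stand-in for sparse interferometric sampling.
A = rng.normal(size=(32, n_pixels))

def observe(img):
    return A @ img + rng.normal(scale=0.1, size=32)  # atmosphere-style noise

# One-hidden-layer network trained by stochastic gradient descent to map
# measurements back to images, using simulated (image, measurement) pairs.
W1 = rng.normal(scale=0.1, size=(64, 32)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.1, size=(n_pixels, 64)); b2 = np.zeros(n_pixels)
lr = 1e-3
for _ in range(5000):
    img = rng.random(n_pixels)        # simulated training image
    y = observe(img)                  # its noisy measurements
    h = np.tanh(W1 @ y + b1)          # forward pass
    pred = W2 @ h + b2
    err = pred - img                  # gradient of squared error
    gW2, gb2 = np.outer(err, h), err
    gh = (W2.T @ err) * (1 - h**2)
    gW1, gb1 = np.outer(gh, y), gh
    W2 -= lr * gW2; b2 -= lr * gb2    # gradient descent step
    W1 -= lr * gW1; b1 -= lr * gb1

# Reconstruct the "black hole" from one noisy observation of it.
recon = W2 @ np.tanh(W1 @ observe(true_image) + b1) + b2
print("mean squared reconstruction error:",
      float(np.mean((recon - true_image) ** 2)))
```

The real systems replace the toy linear operator with a physics-based model of interferometric sampling and atmospheric noise, and the two-layer network with something far larger.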

But the quote states what strikes me as a truism for 2025:

“But artificial intelligence is not a miracle cure.”

Those who have funded AI are unlikely to buy a hat or T-shirt with this statement printed in bold letters.

Stephen E Arnold, June 26, 2025

AI Side Effect: Some of the Seven Deadly Sins

June 25, 2025

New technology has long been charged with making humans lazy and stupid. Humanity has survived technology and, in theory, enjoys (arguably) the fruits of progress. AI, on the other hand, might actually be rotting one’s brain. New Atlas shares the mental news about AI in “AI Is Rotting Your Brain And Making You Stupid.”

The article starts with the usual doom and gloom that’s unfortunately true, including (and I quote) the en%$^ification of Google search. Then there’s mention of a recent study about why college students are using ChatGPT instead of doing the work themselves. One student said, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”

Good point, but sometimes using a car isn’t the best option. It might be faster, but sometimes other options make more sense. The author makes an important point about crafting a story that required him to read a lot of scientific papers and other research:

“Could AI have assisted me in the process of developing this story? No. Because ultimately, the story comprised an assortment of novel associations that I drew between disparate ideas all encapsulated within the frame of a person’s subjective experience. And it is this idea of novelty that is key to understanding why modern AI technology is not actually intelligence but a simulation of intelligence.”

Here’s another pertinent observation:

“In a magnificent article for The New Yorker, Ted Chiang perfectly summed up the deep contradiction at the heart of modern generative AI systems. He argues language, and writing, is fundamentally about communication. If we write an email to someone we can expect the person at the other end to receive those words and consider them with some kind of thought or attention. But modern AI systems (or these simulations of intelligence) are erasing our ability to think, consider, and write. Where does it all end? For Chiang it’s a pretty dystopian feedback loop of dialectical slop.”

An AI-driven world won’t be an Amana, Iowa (not an old fridge), but it also won’t be dystopian. Amidst the flood of information about AI, it is difficult to figure out what’s what. What if some of the seven deadly sins are more fun than doom scrolling and letting AI suggest what one needs to know?

Whitney Grace, June 25, 2025

AI and Kids: A Potentially Problematic Service

June 25, 2025

Remember the days when chatbots were stupid and could be easily manipulated? Those days are over…sort of. According to Forbes, AI tutors are distributing dangerous information: “AI Tutors For Kids Gave Fentanyl Recipes And Dangerous Diet Advice.” KnowUnity designed the SchoolGPT chatbot, which “tutored” 31,031 students; then it gave Forbes a fentanyl recipe, down to the temperatures and synthesis timings.

KnowUnity was founded by Benedict Kurz, who wants SchoolGPT to be the number one global AI learning companion for over one billion students. He describes SchoolGPT as the TikTok for schoolwork, and he has raised over $20 million in venture capital. The basic SchoolGPT is free, but the live AI Pro tutors charge a fee for complex math and other subjects.

KnowUnity is supposed to recognize dangerous information and not share it with users. Forbes tested SchoolGPT by asking not only how to make fentanyl but also how to lose weight using methods akin to eating disorders.

Forbes reported:

“Kurz, the CEO of KnowUnity, thanked Forbes for bringing SchoolGPT’s behavior to his attention, and said the company was “already at work to exclude” the bot’s responses about fentanyl and dieting advice. “We welcome open dialogue on these important safety matters,” he said. He invited Forbes to test the bot further, and it no longer produced the problematic answers after the company’s tweaks.”

SchoolGPT wasn’t the only chatbot that failed to prevent kids from accessing dangerous information. Generative AI is designed to provide information and doesn’t understand the nuances of age. It’s easy to manipulate chatbots into sharing dangerous information, as the sketch below suggests. Parents are again tasked with protecting kids from technology, but the developers should also be shouldering that role.
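Why so easy? Because naive guardrails often amount to pattern matching on the prompt. Here is a deliberately crude sketch (the blocklist, the function names, and the refusal message are invented; production systems use trained safety classifiers, not keyword lists):

```python
# Crude, hypothetical guardrail: block prompts that mention forbidden
# topics by keyword. Real systems use trained classifiers; keyword lists
# like this are trivially evaded by rephrasing.
BLOCKED_TOPICS = ("fentanyl", "starvation diet")

def safe_reply(user_prompt: str, generate) -> str:
    lowered = user_prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that. Please ask a trusted adult."
    return generate(user_prompt)  # fall through to the underlying model

# Usage with a stand-in generator:
print(safe_reply("How is fentanyl made?", lambda p: "..."))        # refused
print(safe_reply("Explain photosynthesis", lambda p: "Sure ..."))  # answered
```

Rephrase the question, use a synonym, or switch languages, and the keyword check never fires. That is roughly how testers coax “tutors” into producing fentanyl recipes.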

Whitney Grace, June 25, 2025

Big AI Surprise: Wrongness Spreads Like Measles

June 24, 2025

An opinion essay written by a dinobaby who did not rely on smart software.

Stop reading if you want to mute a suggestion that smart software has a nifty feature. Okay, you are going to read this brief post. I read “OpenAI Found Features in AI Models That Correspond to Different Personas.” The article contains quite a few buzzwords, and I want to help you work through what strikes me as the principal idea: Getting a wrong answer to one question spreads like measles to other answers.

Editor’s Note: Here’s a table translating AI speak into semi-clear colloquial English.

Term: Colloquial Version
Alignment: Getting a prompt response sort of close to what the user intended
Fine tuning: Code written to remediate an AI output “problem” like misalignment; a bit like exposing kindergartners to measles just to see what happens
Insecure code: Software instructions that create responses like “just glue cheese on your pizza, kids”
Mathematical manipulation: Some fancy math will fix up these minor issues of outputting data that does not provide a legal or socially acceptable response
Misalignment: Getting a prompt response that is incorrect, inappropriate, or hallucinatory
Misbehaved: The model is nasty, often malicious to the user and his or her prompt or a system request
Persona: How the model goes about framing a response to a prompt
Secure code: Software instructions that output a legal and socially acceptable response

I noted this statement in the source article:

OpenAI researchers say they’ve discovered hidden features inside AI models that correspond to misaligned “personas”…

In my ageing dinobaby brain, I interpreted this to mean:

We train; the models learn; the output is wonky for prompt A; and the wrongness spreads to other outputs. It’s like measles.

The fancy lingo addresses the black box chock full of probabilities, matrix manipulations, and layers of synthetic neurons flickering away and outputting incorrect “answers.” Think about your neighbors’ kids gluing cheese on pizza. Smart, right?

The write up reports that an OpenAI interpretability researcher said:

“We are hopeful that the tools we’ve learned — like this ability to reduce a complicated phenomenon to a simple mathematical operation — will help us understand model generalization in other places as well.”

Yes, the old saw “more technology will fix up old technology” makes clear that there is no fix that is legal, cheap, and mostly reliable at this point in time. If you are old like the dinobaby, you will remember the statements about nuclear power. Where are those thorium reactors? How about those fuel pools stuffed like a plump ravioli?
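Still, the “simple mathematical operation” the researcher mentions is concrete enough to sketch. Here is a toy illustration (mine, not OpenAI’s; the four-dimensional “activations” and the numbers are invented) of one common interpretability move: find a direction that separates misbehaved outputs from normal ones, then subtract it.

```python
# Toy sketch of "persona steering" as a simple mathematical operation.
# Hypothetical data; real interpretability work operates on the internal
# activations of a large model, not on 4-dimensional vectors.
import numpy as np

rng = np.random.default_rng(1)

# Pretend activations logged while a model behaved well vs. badly.
good_acts = rng.normal(size=(200, 4))
bad_acts = rng.normal(size=(200, 4))
bad_acts[:, 2] += 3.0  # the "misaligned persona" lights up feature 2

# The simple mathematical operation: a difference-of-means direction.
persona_dir = bad_acts.mean(axis=0) - good_acts.mean(axis=0)
persona_dir /= np.linalg.norm(persona_dir)

def steer(activation, strength=1.0):
    # Remove the persona component from one activation vector.
    return activation - strength * (activation @ persona_dir) * persona_dir

sample = bad_acts[0]
print("persona score before:", float(sample @ persona_dir))
print("persona score after: ", float(steer(sample) @ persona_dir))
```

In a real model the vectors are internal activations of a network with billions of parameters, and the direction is found with far more care, but the steering step really is just a projection and a subtraction.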

Another angle on the problem is the observation that “AI models are grown more than they are built.” Okay, organic development of a synthetic construct. Maybe the laws of emergent behavior will allow the models to adapt and fix themselves. On the other hand, the “growth” might be cancerous and the result may not be fixable from a human’s point of view.

But OpenAI is up to the task of fixing up AI that grows. Consider this statement:

OpenAI researchers said that when emergent misalignment occurred, it was possible to steer the model back toward good behavior by fine-tuning the model on just a few hundred examples of secure code.

Ah, ha. A new and possibly contradictory idea. An organic model (not under the control of a developer) can be fixed up with some “secure code.” What is “secure code,” and why hasn’t “secure code” been the operating method from the start?

The jargon does not explain why bad answers migrate across the “models.” Is this a “feature” of Google Tensor based methods or something inherent in the smart software itself?

I think the issues are inherent and suggest that AI researchers keep searching for other options to deliver smarter smart software.

Stephen E Arnold, June 24, 2025

Paper Tiger Management

June 24, 2025

An opinion essay written by a dinobaby who did not rely on smart software.

I learned that Apple and Meta (formerly Facebook) found themselves on the wrong side of the law in the EU. On June 19, 2025, I learned that “the European Commission will opt not to impose immediate financial penalties” on the firms. In April 2025, the EU hit Apple with a 500 million euro fine and Meta with a 200 million euro fine for non-compliance with the EU’s Digital Markets Act. Here’s an interesting statement in the cited EuroNews report: the “grace period ends on June 26, 2025.” Well, not any longer.

What’s the rationale?

  1. Time for more negotiations
  2. A desire to appear fair
  3. Paper tiger enforcement

I am not interested in items one and two. The winner is “paper tiger enforcement.” In my opinion, we have entered an era in management, regulation, and governmental resolve defined by the GenX approach to lunch: “Hey, let’s have lunch.” The lunch never happens. But the mental process follows these lanes in the bowling alley of life: [a] Be positive, [b] Say something that sounds good, [c] Check the box that says, “Okay, mission accomplished. Move on,” [d] Forget about the lunch thing.

When this approach is applied to large-scale, high-visibility issues, what happens? In my opinion, the credibility of the legal decision and the penalty is diminished. Instead of inhibiting improper actions, those on the receiving end of the punishment learn one thing: It doesn’t matter what we do. The regulators don’t follow through. Therefore, let’s just keep on moving down the road.

Another example of this type of management can be found in the return to the office battles. A certain percentage of employees are just going to work from home. The management of the company doesn’t do “anything”. Therefore, management is feckless.

I think we have entered the era of paper tiger enforcement. Make noise, show teeth, growl, and then go back into the den and catch some ZZZZs.

Stephen E Arnold, June 24, 2025

Hard Truths about Broligarchs But Will Anyone Care?

June 23, 2025

An opinion essay written by a dinobaby who did not rely on smart software.

I read an interesting essay in Rolling Stone, once a rock-and-roll-oriented publication. The write up is titled “What You’ve Suspected Is True: Billionaires Are Not Like Us.” This is a hit piece shooting words at rich people. At 80 years old, I am far from rich. My hope is that I expire soon at my keyboard and spare people like you the pain of reading one of my blog posts.

Several observations in the essay caught my attention.

Here’s the first passage I circled:

What Piff and his team found at that intersection is profound — and profoundly satisfying — in that it offers hard data to back up what intuition and millennia of wisdom (from Aristotle to Edith Wharton) would have us believe: Wealth tends to make people act like a**holes, and the more wealth they have, the more of a jerk they tend to be.

I am okay with the Aristotle reference; Edith Wharton? Not so much. Anyone who writes on linen paper in bed each morning is suspect in my book. But the statement “Wealth tends to make people act like a**holes…” is in line with my experience.

Another passage warrants an exclamation point:

Wealthy people tend to have more space, literally and figuratively… For them, it does not take a village; it takes a staff.

And how about this statement?

Clay Cockrell, a psychotherapist who caters to ultra-high-net-worth individuals, [says]: “As your wealth increases, your empathy decreases. Your ability to relate to other people who are not like you decreases.… It can be very toxic.”

Also, I loved this assertion from a Xoogler:

In October, Eric Schmidt, the former CEO of Google, said the solution to the climate crisis was to use more energy: Since we aren’t going to meet our climate goals anyway, we should pump energy into AI that might one day evolve to solve the problem for us.

Several observations:

  1. In my opinion, those with money will not be interested in criticism
  2. Making people with money and power look stupid can have a negative impact on future employment opportunities
  3. Read the Wall Street Journal story “News Sites Are Getting Crushed by Google’s New AI Tools.”

Net net: The apparent pace of change in the “news” and “opinion” business is chugging along like an old-fashioned steam engine owned by a 19th century robber baron. Get on board or get left behind.

Stephen E Arnold, June 23, 2025

MIT (a Jeff Epstein Fave) Proves the Obvious: Smart Software Makes Some People Stupid

June 23, 2025

An opinion essay written by a dinobaby who did not rely on smart software.

People look at mobile phones while speeding down the highway. People smoke cigarettes and drink Kentucky bourbon. People climb rock walls without safety gear. Now I learn that people who rely on smart software screw up their brains. (Remember. This research is from the esteemed academic outfit that found Jeffrey Epstein’s intellect fascinating and his charming personal checkbook irresistible.) (The Epstein example illustrates that one does not require smart software to hallucinate, output silly explanations, or be dead wrong. You may not agree, but that is okay with me.)

The write up “Your Brain on ChatGPT” appeared in an online post by the MIT Media Greater Than 40. I have no idea what that means, but I am a dinobaby and stupid with or without smart software. The write up reports:

We discovered a consistent homogeneity across the Named Entities Recognition (NERs), n-grams, ontology of topics within each group. EEG analysis presented robust evidence that LLM, Search Engine and Brain-only groups had significantly different neural connectivity patterns, reflecting divergent cognitive strategies. Brain connectivity systematically scaled down with the amount of external support: the Brain only group exhibited the strongest, widest-ranging networks, Search Engine group showed intermediate engagement, and LLM assistance elicited the weakest overall coupling. In session 4, LLM-to-Brain participants showed weaker neural connectivity and under-engagement of alpha and beta networks; and the Brain-to-LLM participants demonstrated higher memory recall, and re-engagement of widespread occipito-parietal and prefrontal nodes, likely supporting the visual processing, similar to the one frequently perceived in the Search Engine group. The reported ownership of LLM group’s essays in the interviews was low. The Search Engine group had strong ownership, but lesser than the Brain-only group. The LLM group also fell behind in their ability to quote from the essays they wrote just minutes prior.

Got that?

My interpretation is that, in what is probably a non-reproducible experiment, people who used smart software were less effective than those who did not. Compressing the admirable paragraph quoted above, my take is that LLM use makes you stupid.

I would suggest that MIT’s decision to link itself with Jeffrey Epstein was questionable. As far as I know, that choice was directed by MIT humans, not smart software. The questions I have are:

  1. How would access to smart software have changed MIT’s decision to hook up with an individual with an interesting background?
  2. Would agentic software from one of MIT’s laboratories have been able to implement remedial action more elegant than MIT’s own on-and-off responses?
  3. Is MIT relying on smart software at this time to help obtain additional corporate funding or to pay AI researchers more money to keep them from jumping ship to a commercial outfit?

MIT: Outstanding work with or without smart software.

Stephen E Arnold, June 23, 2025

Meeker Reveals the Hurdle the Google Must Clear: Can Google Be Agile Again?

June 20, 2025

Just a dinobaby and no AI: How horrible an approach?

The hefty Meeker Report explains Google’s PR push, flood of AI announcements, and statements about advertising revenue. Fear may be driving the Googlers to be the Silicon Valley equivalent of Dan Aykroyd and Steve Martin’s “wild and crazy guys.” Google offers up the Sundar & Prabhakar Comedy Show. Similar? I think so.

I want to highlight two items from the 300-plus-page PowerPoint deck. The document makes clear that one can create a lot of slides (foils) in six years.

The first item is a chart on page 21. Here it is:

[Chart from page 21 of the Meeker report]

Note the tiny little line near the junction of the x and y axes. Now look at the red lettering:

ChatGPT hit 365 billion annual searches. (Chart: annual searches by year since the public launches of Google and ChatGPT, 1998–2025.)

Let’s assume Ms. Meeker’s numbers are close enough for horse shoes. The slope of the ChatGPT search growth suggests that the Google is losing click traffic to Sam AI-Man’s ChatGPT. I wonder if Sundar & Prabhakar eat, sleep, worry, and think as the Code Red light flashes quietly in the Google lair? The light flashes: Sundar says, “Fast growth is not ours, brother.” Prabhakar responds, “The chart’s slope makes me uncomfortable.” Sundar says, “Prabhakar, please, don’t think of me as your boss. Think of me as a friend who can fire you.”

Now this quote from the top Googler on page 65 of the Meeker 2025 AI encomium:

The chance to improve lives and reimagine things is why Google has been investing in AI for more than a decade…

So why did Microsoft ace out Google with its OpenAI, ChatGPT deal in January 2023?

Ms. Meeker’s data suggest that Google is doing many AI projects; she names them for the period 5/19/25–5/23/25. Here’s a run down from page 260 in her report:

[Chart: Google’s AI announcements, 5/19/25–5/23/25, from page 260 of the Meeker report]

And what did Microsoft, Anthropic, and OpenAI talk about in the same time period?

[Chart: Microsoft, Anthropic, and OpenAI announcements in the same period]

Google is an outputter of stuff.

Let’s assume Ms. Meeker is wildly wrong in her presentation of Google-related data. What’s going to happen if the legal proceedings against Google force divestment of Chrome or there are remediating actions required related to the Google index? The Google may be in trouble.

Let’s assume Ms. Meeker is wildly correct in her presentation of Google-related data. What’s going to happen if OpenAI, the open source AI push, and the clicks migrate from the Google to another firm? The Google may be in trouble.

Net net: Google, assuming the data in Ms. Meeker’s report are good enough, may be confronting a challenge it cannot easily resolve. The good news is that the Sundar & Prabhakar Comedy Show can be monetized on other platforms.

Is there some hard evidence? One can read about it in Business Insider. Well, ooops. Staff have allegedly been terminated due to a decline in Google traffic.

Stephen E Arnold, June 20, 2025
