Advice for Programmers: AI-Proof Your Career

February 24, 2025

Software engineer and blogger Sean Goedecke has some career advice for those who, like him, are at risk of losing their programming jobs to AI. He counsels, "To Avoid Being Replaced by LLMs, Do What They Can’t." Logical enough. But what will these tools be able to do, and when will they be able to do it? That is the $25 million question. Goedecke has suggestions for the medium term and the long term.

Right now, he advises, engineers should do three things: First, use the tools. They can help you gain an advantage in the field. And also, know-thine-enemy, perhaps? Next, learn how LLMs work so you can transition to the growing field of AI work. If you can’t beat them, join them, we suppose. Finally, climb the ranks posthaste, for those in junior roles will be the first to go. Ah yes, the weak get eaten. It is a multipronged approach.

For the medium term, Goedecke predicts which skills LLMs are likely to master first. Get good at the opposite of that. For example, ill-defined or poorly scoped problems, solutions that are hard to verify, and projects with huge volumes of code are all very difficult for algorithms. For now.

In the long term, work yourself into a position of responsibility. There are few of those to go around. So, as noted above, start vigorously climbing over your colleagues now. Why? Because executives will always need at least one good human engineer they can trust. The post observes:

"A LLM strong enough to take responsibility – that is, to make commitments and be trusted by management – would have to be much, much more powerful than a strong engineer. Why? Because a LLM has no skin in the game, which means the normal mechanisms of trust can’t apply. Executives trust engineers because they know those engineers will experience unpleasant consequences if they get it wrong. Because the engineer is putting something on the line (e.g. their next bonus, or promotion, or in the extreme case being fired), the executive can believe in the strength of their commitment. A LLM has nothing to put on the line, so trust has to be built purely on their track record, which is harder and takes more time. In the long run, when almost every engineer has been replaced by LLMs, all companies will still have at least one engineer around to babysit the LLMs and to launder their promises and plans into human-legible commitments. Perhaps that engineer will eventually be replaced, if the LLMs are good enough. But they’ll be the last to go."

If you are lucky, it will be time to retire by then. For those young enough that this is unlikely, or for those who do not excel at the rat race, perhaps a career change is in order. What jobs are safe? Sadly, this dino-baby writer does not have the answer to that question.

Cynthia Murrell, February 24, 2025

OpenAI Furthers Great Research

February 21, 2025

Unsatisfied with existing AI cheating solutions? If so, Gizmodo has good news for you: “OpenAI’s ‘Deep Research’ Gives Students a Whole New Way to Cheat on Papers.” Writer Kyle Barr explains:

“OpenAI’s new ‘Deep Research’ tool seems perfectly designed to help students fake their way through a term paper unless asked to cite sources that don’t include Wikipedia. OpenAI’s new feature, built on top of its upcoming o3 model and released on Sunday, resembles one Google introduced late last year with Gemini 2.0. Google’s ‘Deep Research’ is supposed to generate long-form reports over the course of 30 minutes or more, depending on the depth of the requested topic. Boiled down, Google’s and OpenAI’s tools are AI agents capable of performing multiple internet searches while reasoning about the next step to generate a report.”

Deep Research even functions in a side panel, providing updates on its direction and progress. So helpful! However, the tool is not for those looking to score an A. Like a student rushing to finish a paper the old-fashioned way, Barr notes, it relies heavily on Wikipedia. An example report did include a few trusted sites, like Pew Research, but such reliable sources were in the minority. Besides, the write-up emphasizes:

“Remember, this is just a bot scraping the internet, so it won’t be accessing any non-digitized books or—ostensibly—any content locked behind a paywall. … Because it’s essentially an auto-Googling machine, the AI likely won’t have access to the most up-to-date and large-scale surveys from major analysis firms. … That’s not to say the information was inaccurate, but anybody who generates a report is at the mercy of suspect data and the AI’s interpretation of that data.”

Meh, we suppose that is okay if one just needs a C to get by. But is it worth the $200 per month subscription? We suppose that depends on the student, and on the parents’ willingness to sign up for services that will make gentle Ben and charming Chrissie smarter. Besides, we are sure more refined versions are in our future.

Cynthia Murrell, February 21, 2025

Sam Altman: The Waffling Man

February 17, 2025

Another dinobaby commentary. No smart software required.

Chaos is good. Flexibility is good. AI is good. Sam Altman, whom I reference as “Sam AI-Man,” has some explaining to do. OpenAI is a consumer of cash. The Chinese PR push suggests that Deepseek has found a way to do OpenAI-type computing the way Shein and Temu do gym clothes.

I noted “Sam Altman Admits OpenAI Was On the Wrong Side of History in Open Source Debate.” The write up does not come out and state, “OpenAI was stupid when it embraced proprietary software’s approach” to meeting user needs. To be frank, Sam AI-Man was not particularly clear either.

The write up says that Sam AI-Man said:

“Yes, we are discussing [releasing model weights],” Altman wrote. “I personally think we have been on the wrong side of history here and need to figure out a different open source strategy.” He noted that not everyone at OpenAI shares his view and it isn’t the company’s current highest priority. The statement represents a remarkable departure from OpenAI’s increasingly proprietary approach in recent years, which has drawn criticism from some AI researchers and former allies, most notably Elon Musk, who is suing the company for allegedly betraying its original open source mission.

My view is that Sam AI-Man wants to emulate other super techno leaders and get whatever he wants. Not surprisingly, other super techno leaders have their own ideas. I would suggest that the objective of these AI jousts is power, control, and money.

“What about the users?” a faint voice asks. “And the investors?” another bold soul queries.

Who?

Stephen E Arnold, February 17, 2025

What Happens When Understanding Technology Is Shallow? Weakness

February 14, 2025

Yep, a dinobaby wrote this blog post. Replace me with a subscription service or a contract worker from Fiverr. See if I care.

I like this question. Even more satisfying is that a big name seems to have answered it. I refer to an essay by Gary Marcus, “The Race for ‘AI Supremacy’ Is Over — at Least for Now.”

Here’s the key passage in my opinion:

China caught up so quickly for many reasons. One that deserves Congressional investigation was Meta’s decision to open source their LLMs. (The question that Congress should ask is, how pivotal was that decision in China’s ability to catch up? Would we still have a lead if they hadn’t done that? Deepseek reportedly got its start in LLMs retraining Meta’s Llama model.) Putting so many eggs in Altman’s basket, as the White House did last week and others have before, may also prove to be a mistake in hindsight. … The reporter Ryan Grim wrote yesterday about how the US government (with the notable exception of Lina Khan) has repeatedly screwed up by placating big companies and doing too little to foster independent innovation

The write up is quite good. What’s missing, in my opinion, is the linkage of a probe to determine how a technology innovation released as a not-so-stealthy open source project can affect the US financial markets. The result was satisfying to the Chinese planners.

Also, the write up does not put the probe or “foray” in a strategic context. China wants to make certain its simple message “China smart, US dumb” gets into the world’s communication channels. That worked quite well.

Finally, the write up does not point out that the US approach to AI has given China an opportunity to demonstrate that it can borrow and refine with aplomb.

Net net: I think China is doing Shein and Temu in the AI and smart software sector.

Stephen E Arnold, February 14, 2025

Orchestration Is Not Music When AI Agents Work Together

February 13, 2025

Are multiple AIs better than one? Megaputer believes so. The data firm sent out a promotional email urging us to “Build Multi-Agent Gen-AI Systems.” With the help of its products, of course. We are told:

“Most business challenges are too complex for a single AI engine to solve. What is the way forward? Introducing Agent-Chain Systems: A novel groundbreaking approach leveraging the collaborative strengths of specialized AI models, each configured for distinct analytical tasks.

  • Validate results through inter-agent verification mechanisms, minimizing hallucinations and inconsistencies.
  • Dynamically adapt workflows by redistributing tasks among Gen-AI agents based on complexity, optimizing resource utilization and performance.
  • Build AI applications in hours for tasks like automated taxonomy building and complex fact extraction, going beyond traditional AI limitations.”
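
The mechanics behind pitches like this are rarely spelled out, but the core idea is simple: one model produces, a second model audits. Here is a minimal sketch in Python of that inter-agent verification pattern. The call_llm placeholder stands in for whatever chat-completion client one uses; the prompts and the two-role split are illustrative assumptions, not Megaputer’s actual design.

    def call_llm(prompt: str) -> str:
        # Placeholder: route this to any chat-completion client.
        return "stubbed response for: " + prompt

    def extract_facts(document: str) -> str:
        # Agent 1: pull claimed facts out of a source document.
        return call_llm("Extract the key facts from:\n" + document)

    def verify_facts(document: str, facts: str) -> str:
        # Agent 2: audit agent 1's output against the same source;
        # this is the inter-agent verification step meant to curb
        # hallucinations.
        return call_llm(
            "Check each claimed fact against the source document. "
            "Flag anything unsupported.\nDocument:\n"
            + document
            + "\nClaimed facts:\n"
            + facts
        )

    doc = "Acme Corp reported revenue of $10M in 2024."
    report = verify_facts(doc, extract_facts(doc))
    print(report)

Whether a second model reliably catches the first model’s errors is, of course, the open question.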

If this approach really reduces AI hallucinations, there may be something to it. The firm invites readers to explore a few case studies it has put together: one for an anonymous pharmaceutical company, one for a US regulatory agency, and a third for a large retail company. Snapshots of each project’s dashboard further illustrate the concept. Are cooperative AI agents the next big thing in generative AI? Megaputer, for one, is banking on it. Founded back in 1997, the small business is based in Bloomington, Indiana.

Cynthia Murrell, February 13, 2025

The Google: Tell Me, Please, What Is a Malicious App?

February 12, 2025

Yep, another dinobaby emission. No smart software required.

I suggest you take a quick look at an important essay about the data which flows from Google’s Android and Apple’s iOS. The paper is “Everyone Knows Your Location: Tracking Myself Down Through In-App Ads” by Tim. The main point of the write up is to disclose information that has been generally closely held by a number of entities. I strongly recommend the write up, and it is possible that it could be made difficult to locate in the near future. The article says:

After more than couple dozen hours of trying, here are the main takeaways:

  1. I found a couple requests sent by my phone with my location + 5 requests that leak my IP address, which can be turned into geolocation using reverse DNS.
  2. Learned a lot about the RTB (real-time bidding) auctions and OpenRTB protocol and was shocked by the amount and types of data sent with the bids to ad exchanges.
  3. Gave up on the idea to buy my location data from a data broker or a tracking service, because I don’t have a big enough company to take a trial or $10-50k to buy a huge database with the data of millions of people + me.
    Well maybe I do, but such expense seems a bit irrational.
    Turns out that EU-based peoples’ data is almost the most expensive.

But still, I know my location data was collected and I know where to buy it!
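
The reverse-DNS step Tim mentions is easy to illustrate. Below is a minimal sketch in Python using only the standard library; the IP address is a placeholder from the documentation range, not an actual leak. ISP PTR records often embed city or region codes in the returned hostname, which is what turns a bare IP into a coarse location.

    import socket

    leaked_ip = "203.0.113.7"  # placeholder address, not a real leak

    try:
        # A PTR lookup maps the IP back to a hostname; providers often
        # encode coarse location (city, region) in that name.
        hostname, _, _ = socket.gethostbyaddr(leaked_ip)
        print(hostname)
    except socket.herror:
        print("No PTR record for this address")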

Tim’s essay sets the stage for a Google Security Blog post titled “How We Kept the Google Play & Android App Ecosystems Safe in 2024.” That write up is another example of Google’s self-promotion. It lacks the snap of the quantum supremacy pitch and the endless backpatting about Google’s smart software.

The write up says:

To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google’s advanced AI to improve our systems’ ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play.

I want to ask one question, “Is Google’s advertising a malicious app?” The answer depends on one’s point of view. Google would assert that it is not doing anything other than making high value services available either for free or at a very low cost to the consumer.

A skeptical person might respond, “Your system sustains the digital online advertising sector. Your technology helps, to some degree, third-party advertising services firms gather information and cross-correlate it for the fine-grained intelligence described in Tim’s article.”

Google, which is it? Is your advertising system malicious or is it a benefit to users? This is a serious question, and it is one that smarmy self promotion and PR campaigns are likely to have difficulty answering.

Stephen E Arnold, February 12, 2025

A Case for Export Controls in the Wake of Deepseek Kerfuffle

February 11, 2025

Some, including investors, were shocked by recent revelations of Deepseek’s AI capabilities. Others had been forewarned about the (allegedly) adept firm. Interesting how social media was used to create the shock and awe that online information services picked up and endlessly repeated. Way to amplify the adversary’s propaganda.

At any rate, this escalating AI arms race is now top-of-mind for many. Could strong export controls give the US an edge? After all, China’s own chip manufacturing is said to lag about five years behind ours. Anthropic CEO Dario Amodei believes they can, as he explains in his post, "On Deepseek and Export Controls."

The AI maestro begins with some groundwork. First, he describes certain ways AI development scales and shifts. He then looks at what makes Deepseek so special—and what does not. See the post for those details, but here is the key point for our discussion: AI developers everywhere require more and more hardware to progress. So far, Chinese and US companies have had access to similar reserves of both funds and chips. However, if we limit the number of chips flowing into China, Chinese firms will eventually hit a proverbial wall. Amodei compares hypothetical futures:

"The question is whether China will also be able to get millions of chips. If they can, we’ll live in a bipolar world, where both the US and China have powerful AI models that will cause extremely rapid advances in science and technology — what I’ve called ‘countries of geniuses in a datacenter‘. A bipolar world would not necessarily be balanced indefinitely. Even if the US and China were at parity in AI systems, it seems likely that China could direct more talent, capital, and focus to military applications of the technology. Combined with its large industrial base and military-strategic advantages, this could help China take a commanding lead on the global stage, not just for AI but for everything."

How ominous. And if we successfully implement and enforce export controls? He continues:

"If China can’t get millions of chips, we’ll (at least temporarily) live in a unipolar world, where only the US and its allies have these models. It’s unclear whether the unipolar world will last, but there’s at least the possibility that, because AI systems can eventually help make even smarter AI systems, a temporary lead could be parlayed into a durable advantage. Thus, in this world, the US and its allies might take a commanding and long-lasting lead on the global stage."

"Might," he says. There is no certainty here. Still, an advantage like this may be worthwhile if it keeps China’s military from outstripping ours. Hindering an Anthropic competitor is just a side effect of this advice, right? Sure, in a peaceful world, international "competition and collaboration make the world a better place." But that is not our reality at the moment.

Amodei hastens to note he thinks the Deepseek folks are fine researchers and curious innovators. It is just that bit about being beholden to their authoritarian government that may be the issue.

Cynthia Murrell, February 11, 2025

Google Goes Googley in Paris Over AI … Again

February 10, 2025

Google does some interesting things in Paris. The City of Light was the scene of a Googler’s demonstration of its AI, complete with hallucinations, about two years ago. On Monday, February 10, 2025, Google’s “leadership” Sundar Pichai allegedly leaked his speech or shared some memorable comments with journalists. These were reported in AAWSAT.com, an online information service, in the story “AI Is ‘Biggest Shift of Our Lifetimes’, Says Google Boss.”

I like the shift; it reminds me of the word “shifty.”

One of the passages catching my attention was this one, although I am not sure of the accuracy of the version in the cited article. The gist seems on point with Google’s posture during Code Red and its subsequent reorganization of the firm’s smart software unit. The context, however, does not seem to include the impact of Deepseek’s bargain basement approach to AI. Google is into big money for big AI. One wins big in a horse race by plopping big bucks on a favorite nag. Google is making the big bet on AI, about $75 billion in capital expenditures in the next 10 months.

Here’s the quote:

Artificial intelligence (AI) is a "fundamental rewiring of technology" that will act as an "accelerant of human ingenuity." We’re still in the early days of the AI platform shift, and yet we know it will be the biggest of our lifetimes… With AI, we have the chance to democratize access (to a new technology) from the start, and to ensure that the digital divide doesn’t become an AI divide….

The statement exudes confidence. With billions riding on Mr. Pichai’s gambler’s instinct, stakeholders and employees not terminated for cost savings hope he is correct. Those already terminated may be rooting for a different horse.

Google’s head of smart software (sorry, Jeff Dean) allegedly offered this sentiment:

“Material science, mathematics, fusion, there is almost no area of science that won’t benefit from these AI tools," the Nobel chemistry laureate said.

Are categorical statements part of the mental equipment that makes a Nobel prize winner? He did include an “almost,” but I think the hope is that many technical disciplines will reap the fruits of smart software. Some smart software may just reap fruits from its users’ inputs.

A statement which I found more remarkable was:

Every generation worries that the new technology will change the lives of the next generation for the worse — and yet it’s almost always the opposite.

Another hedged categorical affirmative: “almost always.” The only issue is that, as Jacques Ellul asserted in The Technological Bluff, technology creates problems which invoke more technology to address the old problems while simultaneously creating new ones. I think Father Ellul was on the beam.

How about this for a concluding statement:

We must not let our own bias for the present get in the way of the future. We have a once-in-a-generation opportunity to improve lives at the scale of AI.

Scale. Isn’t that what Deepseek demonstrated may be a less logical approach to smart software? Paris has quite an impact on Google thought processes in my opinion. Did Google miss the Deepseek China foray? Did the company fail to interpret it in the context of wide adoption of AI? On the other hand, maybe if one does not talk about something, one can pretend that something does not exist. Like the Super Bowl ad with misinformation about cheese. Yes, cheese, again.

Stephen E Arnold, February 10, 2025

Microsoft, Deepseek, and OpenAI: An Interesting Mixture Like RDX?

February 10, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

I have successfully installed Deepseek and run some queries. The results seem okay, but most of the large language models we have installed have their strengths and weaknesses. What’s interesting about Deepseek is that it caused a bit of a financial squall when it was publicized during a Chinese dignitary’s visit to Colombia.

A short time after a high flying video card company lost a few bucks, an expert advising the new US administration suggested “there’s substantial evidence that Deepseek used OpenAI’s models to train its own.” This story appeared on X.com via Fox. Another report said that Microsoft was investigating Deepseek. When I checked my newsfeed this morning (January 30, 2025), Slashdot pointed me to this story: “Microsoft Makes Deepseek’s R1 Model Available on Azure AI and GitHub.”

Did Microsoft do a speedy investigation, or is the inclusion of Deepseek in Azure AI and GitHub part of its investigation? Did loading up Deepseek kill everyone’s favorite version of Office on January 29, 2025? Probably not, but there is a lot of action in the AI space at Microsoft Town.

Let’s recap the stuff from the AI chemistry lab. First, we have the fascinating Sam AI-Man. With a deal of note because Oracle is in and Grok is out, OpenAI remains a partner with Microsoft. Second, Microsoft, fresh from bumper revenues, continues to embrace AI and demonstrate that a welcome mat is outside Satya Nadella’s door for AI outfits. Third, who stole what? AI companies have been viewed as information bandits by some outfits. Legal eagles cloud the sunny future of smart software.

What will these chemical elements combine to deliver? Let’s consider a few options.

  1. Like RDX, a go-to compound for some kinetic applications, the elements combust.
  2. The legal eagles effectively grind innovation to a halt due to restrictions on Nvidia, access to US open source software, and getting in the way of the reinvigoration of the USA.
  3. Nothing. That’s right. The status quo chugs along with predictable ups and downs but nothing changes.

Net net: This will be an interesting techno-drama to watch in real time. On the other hand, I may wait until the Slice outfit does a documentary about the dust up, partnerships, and failed bro-love affairs.

Stephen E Arnold, February 10, 2025

What Does One Do When Innovation Falters? Do the Me-Too Bop

February 10, 2025

Hopping Dino_thumbAnother dinobaby commentary. No smart software required.

I found the TechRadar story “In Surprise Move Microsoft Announces Deepseek R1 Is Coming to CoPilot+ PCs – Here’s How to Get It” an excellent example of big tech innovation. The article states:

Microsoft has announced that, following the arrival of Deepseek R1 on Azure AI Foundry, you’ll soon be able to run an NPU-optimized version of Deepseek’s AI on your Copilot+ PC. This feature will roll out first to Qualcomm Snapdragon X machines, followed by Intel Core Ultra 200V laptops, and AMD AI chipsets.

Yep, me too, me too. The write up explains the ways in which one can use Deepseek, and I will leave taking that step to you. (On the other hand, navigate to Hugging Face and download it, or you could zip over to You.com and give it a try.)
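
For the Hugging Face route, here is a minimal sketch in Python using the transformers library. The distilled checkpoint name is our assumption of a representative small R1 variant, and the first run downloads several gigabytes of weights.

    from transformers import pipeline

    # Assumed checkpoint: a small DeepSeek-R1 distillation on Hugging Face.
    generator = pipeline(
        "text-generation",
        model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    )

    result = generator(
        "Explain in one paragraph why export controls on GPUs matter.",
        max_new_tokens=200,
    )
    print(result[0]["generated_text"])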

The larger issue is not the speed with which Microsoft embraced the me too approach to innovation. For me, the decision illustrates the paucity of technical progress in one of the big technology giants. You know, Microsoft, the originator of Bob and the favorite software company of bad actors who advertise their malware on Telegram.

Several observations:

  1. It doesn’t matter how the Chinese start up nurtured by a venture capital firm got Deepseek to work. The Chinese outfit did it. Bang. The export controls and the myth of trillions of dollars to scale up disappeared. Poof.
  2. No US outfit — with or without US government support — was in the hockey rink when the Chinese team showed up and blasted a goal in the first few minutes of a global game. Buzz. 1 to zip. The question is, “Why not?” and “What’s happened since Microsoft triggered the crazy Code Red or whatever at the Google?” Answer: Burning money quickly.
  3. More pointedly, are the “innovations” in AI touted by Product Hunt and podcasters innovations? What if these are little more than wrappers with some snappy names? Answer: A reminder that technical training and some tactical kung fu can deliver a heck of a punch.

Net net: Deepseek was a tactical foray or probe. The data are in. Microsoft will install Chinese software in its global software empire. That’s interesting, and it underscores the problem of me too. Innovation takes more than raising prices and hiring a PR firm.

Stephen E Arnold, February 10, 2025
