"Real" Entities or Sock Puppets? A New Solution Can Help Analysts and Investigators

January 28, 2025

Bitext’s NAMER (its name a nod to "named entity recognition," or NER) can deliver precise entity tagging across dozens of languages.

Graphs — knowledge graphs and social graphs — have moved into the mainstream since Leonhard Euler laid the foundation for graph theory in 1736 with his analysis of the Seven Bridges of Königsberg.

With graphs, analysts can take advantage of smart software’s ability to perform Named Entity Recognition (NER), event extraction, and relationship mapping.

The problem is that humans change their names (handles, monikers, or aliases) for many reasons: public embarrassment, a criminal record, a change in marital status, etc.

Bitext’s NER solution, NAMER, is specifically designed to meet the evolving needs of knowledge graph companies, offering exceptional features that tackle industry challenges.

Consider a person disgraced by involvement in a scheme to defraud investors in an artificial intelligence start-up. The US Department of Justice published the name of a key actor in this scheme. (Source: https://www.justice.gov/usao-ndca/pr/founder-and-former-ceo-san-francisco-technology-company-and-attorney-indicted-years). The individual was identified by the court as Valerie Lau Beckman. The official court documents used the name "Lau" to reference her involvement in a multi-million dollar scam.

However, generic smart software is not enough to correctly identify her across social media, subsequent news stories, and possible public summaries of her career on a LinkedIn-type service.

That’s the role of a specialized software solution. Here’s what NAMER delivers.

The system identifies and classifies entities (e.g., people, organizations, locations) in unstructured data. It accurately links entity data across different sources of content. The NAMER technology can tag and link significant events (transactions, announcements) to maintain temporal relevance; for example, when Ms. Lau Beckman is discharged from the criminal process. NAMER can connect entities like Ms. Lau or Ms. Beckman to other individuals with whom she works or interacts and track her various names’ appearances in content streams.
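The kind of alias linking described above can be sketched in a few lines of generic Python. This is purely illustrative: the `ALIASES` table and the `link_mentions` function are invented for this example and have nothing to do with Bitext’s actual NAMER API.

```python
# A minimal sketch of alias resolution: mapping the surface forms of one
# person's name, as they appear in different documents, to a single
# canonical entity record. Illustrative only, not a vendor API.
from collections import defaultdict

# Hypothetical alias table: name variants observed in content streams
# mapped to one canonical entity identifier.
ALIASES = {
    "Valerie Lau Beckman": "E1",
    "Lau": "E1",
    "Ms. Beckman": "E1",
    "Ms. Lau": "E1",
}

def link_mentions(texts):
    """Group documents by the canonical entity each name variant refers to."""
    linked = defaultdict(list)
    for doc_id, text in texts.items():
        for surface, entity_id in ALIASES.items():
            if surface in text:
                linked[entity_id].append(doc_id)
                break  # one match per document is enough for this sketch
    return dict(linked)

docs = {
    "court_filing": "The indictment names Valerie Lau Beckman.",
    "news_story": "Lau appeared in federal court on Monday.",
    "social_post": "Ms. Beckman's profile was updated last week.",
}
print(link_mentions(docs))  # all three documents resolve to entity E1
```

A production system would of course build the alias table automatically and handle ambiguity (two different people named "Lau"), which is precisely the hard part the vendor claims to solve.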

The licensee specifies the languages NAMER is to process, either in a knowledge base or prior to content processing via a large language model.

Access to the proprietary NAMER technology is via a local SDK, which is essential for certain types of entity analysis. NAMER can also be integrated into another system or provided as a "white label" service to enhance an intelligence system with NAMER’s unique functions. For certain use cases, the developer provides direct access to the system’s source code.

For an organization or investigative team interested in keeping data about Lau Beckman at the highest level of precision, Bitext’s NAMER is an essential service.

Stephen E Arnold, January 28, 2025

What Do DeepSeek, a Genius Girl, and Temu Have in Common? Quite a Lot

January 28, 2025

A write up from a still-living dinobaby.

The Techmeme for January 28, 2025, was mostly Deepseek territory. The China-linked AI model has roiled the murky waters of the US smart software fishing hole. A big, juicy AI creature has been pulled from the lake, and it is drawing a crowd. Here’s a small portion of the datasphere thrashing on January 28, 2025, at 7:00 am US Eastern time:


I have worked through a number of articles about this open source software. I noted its back story about a venture firm’s skunk works tackling AI. Armed with relatively primitive tools due to the US restriction of certain computer components, the small team figured out how to deliver results comparable to the benchmarks published about US smart software systems.


Genius girl uses basic and cheap tools to repair an old generator. Americans buy a new generator from Harbor Freight. Genius girl repairs an old generator, proving the benefits of a better way or a shining path. Image from the YouTube outfit which does work the American way.

The story is torn from the same playbook which produces YouTube “real life” stories like “The genius girl helps the boss to repair the diesel generator, full of power!” You can view the one-hour propaganda film at this link. Here’s a short synopsis, and I want you to note the theme of the presentation:

  1. Young-appearing female works outside
  2. She uses primitive tools
  3. She takes apart a complex machine
  4. She repairs it
  5. The machine is better than a new machine.

The videos are interesting. The message has not been deconstructed. My interpretation is:

  1. Hard working female tackles tough problem
  2. Using ingenuity and hard work she cracks the code
  3. The machine works
  4. Why buy a new one? Use what you have and overcome obstacles.

This is not the “Go west, young man” or private equity approach to cracking an important problem. It is political and cultural with a dash of Hoisin technical sauce. The video presents a message like that of “plum blossom boxing.” It looks interesting but packs a wallop.

Here’s a point that has not been getting much attention; specifically, the AI probe is designed to direct a flow of energy at the most delicate and vulnerable part of the US artificial intelligence “next big thing” pumped up technology “bro.”

What is that? The answer is cost. The method has been refined by Shein and Temu by poking at Amazon. Here’s how the “genius girl” uses ingenuity.

  1. Technical papers are published
  2. Open source software released
  3. Basic information about using what’s available released
  4. Cost information is released.

The result is that a Chinese AI app surges to the top of downloads on US mobile stores. This is a first. Not even the TikTok service achieved this standing so quickly. The US speculators dump AI stocks. Techmeme becomes the news service for Chinese innovation.

I see this as an effective tactic for demonstrating the value of the "genius girl" approach to solving problems. And where did Chinese government leadership watch the AI balloon lose some internal pressure? How about Colombia, a three-hour plane flight from the capital of Central and South America? (That’s Miami in the event my reference was too oblique.)

In business, cheaper and good enough are very potent advantages. The Deepseek AI play is indeed about a new twist to today’s best method of having software perform in a way that most call “smart.” But the Deepseek play is another “genius girl” play from the Middle Kingdom.

How can the US replicate the "genius girl" or the small venture firm which came up with a better idea? That’s going to be tough. While the genius girl was repairing the generator, the US AI sector was seeking more money to build giant data centers to hold thousands of exotic computing tools. Instead of repairing, the US smart software aficionados were planning on modular nuclear reactors to make the next-generation of smart software like the tail fins on a 1959 pink Cadillac.

Deepseek and the “genius girl” are not about technology. Deepseek is a manifestation of the Shein and Temu method: Fast cycle, cheap and good enough. The result is an arm flapping response from the American way of AI. Oh, does the genius girl phone home? Does she censor what she says and does?

Stephen E Arnold, January 28, 2025

China Smart, US Dumb: Some AI Readings in English

January 28, 2025

A blog post from an authentic dinobaby. He’s old; he’s in the sticks; and he is deeply skeptical.

I read a short post in YCombinator’s Hacker News this morning (January 23, 2025). The original article is titled “Deepseek and the Effects of GPU Export Controls.” If you are interested in the poli sci approach to smart software, dive in. However, in the couple of dozen comments on Hacker News to the post, a contributor allegedly named LHL posted some useful links. I have pulled these from the comments and displayed them for your competitive intelligence large language model. On the other hand, you can read them because you are interested in what’s shaking in the Lin-gang Free Trade Zone in the Middle Kingdom:

Deepseek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

Deepseek-V3 Technical Report

Deepseek Coder V2: Breaking the Barrier of Closed Source Models in Code Intelligence

Deepseek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model

Deepseek LLM: Scaling Open-Source Language Models with Longtermism

GitHub Deepseek AI

Hugging Face Deepseek AI.

First, a thanks to the poster LHL. The search string links timed out, so you may already be part of the HN herd who is looking at the generated bibliography.

Second, several observations:

  1. China has lots of people. There are numerous highly skilled mathematicians, Monte Carlo and gradient descent wonks, and darned good engineers. One should not assume that wizardry ends with big valuations and tie ups among Oracle, Open AI and the savvy funder of Banjo, an intelware outfit of some repute.
  2. Computing resource constraints translate into one outcome: ingenuity. Example: Howard Flank, one of my team members, received the Information Industry Association Award decades ago for cramming a searchable index of the Library of Congress’ holdings onto the limited hardware of the day. Remember those wonderful machines of the early 1980s? Yeah, Howard did wonders with limited resources. The Chinese professionals can too and have. (Note to US government committee members: Keep Howard and similar engineering whiz kids in mind when thinking about how curtailing computer resources will stop innovation.)
  3. Deepseek’s methods are likely to find their way into some US wrapper products presented as groundbreaking AI. Nope. These innovations are enabled by an open source technology. Now what happens if an outfit like Telegram or one of the many cyber gangs which Microsoft’s Brad Smith references adopts these methods? Yeah. Innovation of a type that is not salubrious.
  4. The authors of the papers are important. Should these folks be cross correlated with other information about grants, academic affiliations with US institutions, and conference attendance?

In case anyone is curious, from my dinobaby point of view, the most important paper in the bunch is the one about a “mixture of experts.”

Stephen E Arnold, January 28, 2025

How to Make Software Smart Like Humans

January 27, 2025

Artificial intelligence algorithms are still only as smart as they’re programmed to be. In other words, they’re still software, sometimes stupid pieces of software. Most AI algorithms are trained as large language models (LLMs) on datasets that lack the human magic to make them "think" like a smart 14-year-old. That could change, says Science Daily, based on research from Linköping University: "Machine Psychology: A Bridge To General AI?"

Robert Johansson of Linköping University asserted in his dissertation that psychological learning models combined with AI could be the key to making machines smart like humans. Johansson developed the concept of Machine Psychology and explains that, unlike many people, he’s not afraid of an AI future. Artificial General Intelligence (AGI) has many positives and negatives. The technology must be carefully created, but AGI could counter many destructive societal developments.

Johansson suggests that AI developers should follow a principle-led path. He means that through his research he’s identified important psychological learning principles that could explain intelligence, and they could be implemented in machines. He’s used a logic system called the Non-Axiomatic Reasoning System (NARS) that is purposely designed to operate without complete data or unlimited computational power, and in real time. This provides the flexibility to handle problems that arise in reality.

NARS works on limited information like a human:

“The combination of NARS and learning psychology principles constitutes an interdisciplinary approach that Robert Johansson calls Machine Psychology, a concept he was the first to coin but more actors have now started to use, including Google DeepMind. The idea is that artificial intelligence should learn from different experiences during its lifetime and then apply what it has learned to many different situations, just as humans begin to do as early as the age of 18 months — something no other animal can do.”

Johansson said that it is possible machines could be as smart as humans within five years. It is a plan, but do computers have the correct infrastructure to handle that type of intelligence? Do humans have the smarts to handle smarter software?

Whitney Grace, January 27, 2025

AI Will Doom You to Poverty Unless You Do AI to Make Money

January 23, 2025

Prepared by a still-alive dinobaby.

I enjoy reading snippets of the AI doomsayers. Some spent too much time worrying about the power of Joe Stalin’s approach to governing. Others just watched the Terminator series instead of playing touch football. A few “invented” AI by cobbling together incremental improvements in statistical procedures lashed to ever-more-capable computing infrastructures. A couple of these folks know that Nostradamus became a brand and want to emulate that predictive master.

I read “Godfather of AI Explains How Scary AI Will Increase the Wealth Gap and Make Society Worse.” That is a snappy title. Whoever wrote it crafted the idea of an explainer to fear. Plus, the click bait explains that homelessness is for you too. Finally, it presents a trope popular among the elder care set. (Remember, please, that I am a dinobaby myself.) Get a group of senior citizens together for dinner and you will hear, “Everything is broken.” Also, “I am glad I am old.” Then there is the ever popular, “Those tattoos! The check out clerks cannot make change! I don’t understand commercials!” I like to ask, “How many wars are going on now? Quick.”


Two robots plan a day trip to see the street people in Key West. Thanks, You.com. I asked for a cartoon; I got a photorealistic image. I asked for a coffee shop; I got a weird carnival setting. Good enough. (That’s why I am not too worried.)

Is society worse than it ever was? Probably not. I have had an opportunity to visit a number of countries, go to college, work with intelligent (for the most part) people, and read books whilst sitting on the executive mailing tube. Human behavior has been consistent for a long time. Indigenous people did not go to Wegman’s or Whole Paycheck. Some herded animals toward a cliff. Others harvested the food and raw materials from the dead bison at the bottom of the cliff. There were no unskilled change makers at this food delivery location.

The write up says:

One of the major voices expressing these concerns is the ‘Godfather of AI’ himself Geoffrey Hinton, who is viewed as a leading figure in the deep learning community and has played a major role in the development of artificial neural networks. Hinton previously worked for Google on their deep learning AI research team ‘Google Brain’ before resigning in 2023 over what he expresses as the ‘risks’ of artificial intelligence technology.

My hunch is that, like me, Geoffrey Hinton "worked at" Google for a good reason: money. Having departed from the land of volleyball and weird empty office buildings, Geoffrey Hinton is in the doom business. His vision is that there will be more poverty. There’s some poverty in Soweto and the other townships in South Africa. The slums of Rio are no Palm Springs. Rural China is interesting as well. Doesn’t everyone want to run a business from the area in front of a wooden structure adjacent to an empty highway to nowhere? Sounds like there is some poverty around, doesn’t it?

The write up reports:

“We’re talking about having a huge increase in productivity. So there’s going to be more goods and services for everybody, so everybody ought to be better off, but actually it’s going to be the other way around. “It’s because we live in a capitalist society, and so what’s going to happen is this huge increase in productivity is going to make much more money for the big companies and the rich, and it’s going to increase the gap between the rich and the people who lose their jobs.”

The fix is to get rid of capitalism. The alternative? Kumbaya or a better version of those fun dudes Marx, Lenin, and Mao. I stayed in the "last" fancy hotel the USSR built in Tallinn, Estonia. News flash: The hotels near LaGuardia are quite a bit more luxurious.

The godfather then evokes the robot that wanted to kill a rebel. You remember this character. He said, “I’ll be back.” Of course, you will. Hollywood does not do originals.

The write up says:

Hinton’s worries don’t just stop at the wealth imbalance caused by AI too, as he details his worries about where AI will stop following investment from big companies in an interview with CBC News: “There’s all the normal things that everybody knows about, but there’s another threat that’s rather different from those, which is if we produce things that are more intelligent than us, how do we know we can keep control?” This is a conundrum that has circulated the development of robots and AI for years and years, but it’s seeming to be an increasingly relevant proposition that we might have to tackle sooner rather than later.

Yep, doom. The fix is to become an AI wizard, work at a Google-type outfit, cash out, and predict doom. It is a solid career plan. Trust me.

Stephen E Arnold, January 23, 2025

Teenie Boppers and Smart Software: Yep, Just Have Money

January 23, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

I scanned the research summary “About a Quarter of U.S. Teens Have Used ChatGPT for Schoolwork – Double the Share in 2023.” Like other Pew data, the summary contained numerous numbers. I was not sufficiently motivated to dig into the methodology to find out how the sample was assembled nor how Pew prompted the mobile-addicted youth to provide presumably truthful answers to direct questions. But why nit pick? We are at the onset of an interesting year which will include forthcoming announcements about how algorithms are agentic and able to fuel massive revenue streams for those in the know.


Students doing their homework while their parents play polo. Thanks, MSFT Copilot. Good enough. I do like the croquet mallets and volleyball too. But children from well-to-do families have such items in abundance.

Let’s go to the video tape, as the late and colorful Warner Wolf once said to his legion of Washington, DC, fans.

One of the highlights of the summary was this finding:

Teens who are most familiar with ChatGPT are more likely to use it for their schoolwork. Some 56% of teens who say they’ve heard a lot about it report using it for schoolwork. This share drops to 18% among those who’ve only heard a little about it.

Not surprisingly, the future leaders of America embrace shortcuts. The question is, “How quickly will awareness reach 99 percent and usage nose above 75 percent?” My guesstimate is pretty quickly. Convenience and more time to play with mobile phones will drive the adoption. Who in America does not like convenience?

Another finding catching my eye was:

Teens from households with higher annual incomes are most likely to say they’ve heard about ChatGPT. For example, 84% of teens in households with incomes of $75,000 or more say they’ve heard at least a little about ChatGPT.

I found this interesting because it appears to suggest that if a student comes from a home where money does not seem to be a huge problem, the industrious teens are definitely aware of smart software. And when it comes to using the digital handmaiden, Pew finds apparently nothing. There is no data point relating richer progeny with greater use. Instead we learned:

Teens who are most familiar with the chatbot are also more likely to say using it for schoolwork is OK. For instance, 79% of those who have heard a lot about ChatGPT say it’s acceptable to use for researching new topics. This compares with 61% of those who have heard only a little about it.

My thought is that more wealthy families are more likely to have teens who know about smart software. I would hypothesize that wealthy parents will pay for the more sophisticated smart software and smile benignly as the future intelligentsia stride confidently to ever brighter futures. Those without the money will get the opportunity to watch their classmates have more time for mobile phone scrolling, unboxing Amazon deliveries, and grabbing burgers at Five Guys.

I am not sure that the link between wealth and access to learning experiences is a random, one-off occurrence. If I am correct, the Pew data suggest that smart software is not reinforcing democracy. It seems to be making a digital Middle Ages more and more probable. But why think about what a dinobaby hypothesizes? It is tough to scroll zippy mobile phones with old paws and yellowing claws.

Stephen E Arnold, January 23, 2025

AI: Yes, Intellectual Work Will Succumb, Just Sooner Rather Than Later

January 22, 2025

This blog post is the work of an authentic dinobaby. Sorry. No smart software can help this reptilian thinker.

Has AI innovation stalled? Nope. “It’s Getting Harder to Measure Just How Good AI Is Getting” explains:

OpenAI’s end-of-year series of releases included their latest large language model (LLM), o3. o3 does not exactly put the lie to claims that the scaling laws that used to define AI progress don’t work quite that well anymore going forward, but it definitively puts the lie to the claim that AI progress is hitting a wall.

Okay, that proves that AI is hitting the gym and getting pumped.

However, the write up veers into an unexpected calcified space:

The problem is that AIs have been improving so fast that they keep making benchmarks worthless. Once an AI performs well enough on a benchmark we say the benchmark is “saturated,” meaning it’s no longer usefully distinguishing how capable the AIs are, because all of them get near-perfect scores.

What is wrong with the lack of benchmarks? Nothing. Smart software is probabilistic. How accurate is the weather forecast? Ask a wonk at the National Weather Service and you get quite optimistic answers. Ask a child whose birthday party at the park was rained out on a day Willie the Weatherman said that it would be sunny, and you get a different answer.
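The saturation effect the cited article describes can be made concrete with a toy calculation. The scores below are invented for illustration, not real benchmark results:

```python
# Toy illustration with made-up scores: once every model nears the ceiling,
# the spread between the best and worst model shrinks, and the benchmark
# stops telling models apart -- it is "saturated."
def discrimination(scores):
    """Spread between the best and worst score on a benchmark."""
    vals = list(scores)
    return max(vals) - min(vals)

fresh = {"model_a": 0.62, "model_b": 0.78, "model_c": 0.91}
saturated = {"model_a": 0.981, "model_b": 0.987, "model_c": 0.992}

print(f"fresh benchmark spread:     {discrimination(fresh.values()):.3f}")
print(f"saturated benchmark spread: {discrimination(saturated.values()):.3f}")
```

When the spread collapses to a fraction of a point, ranking differences are smaller than ordinary measurement noise, which is why evaluators keep having to invent harder benchmarks.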

Okay, forget measurements. Here’s what the write up says will happen, and the prediction sounds really rock solid just like Willie the Weatherman:

The way AI is going to truly change our world is by automating an enormous amount of intellectual work that was once done by humans…. Like it or not (and I don’t really like it, myself; I don’t think that this world-changing transition is being handled responsibly at all) none of the three are hitting a wall, and any one of the three would be sufficient to lastingly change the world we live in.

Follow the argument? I must admit jumping from getting good, to an inability to measure “good” to humans will be replaced because AI can do intellectual work is quite a journey. Perhaps I am missing something, but:

  1. Just because people outside of research labs have smart software that seems to be working like a smart person, what about those hallucinatory outputs? Yep, today’s models make stuff up because probability dictates the output
  2. Use cases for smart software doing "intellectual work" are where in the write up? They aren’t, because Vox doesn’t have any which are comfortable to journalists and writers who can be replaced by the SEO AIs advertised on Telegram search engine optimization channels or by marketers writing for Forbes Magazine. That’s right. Excellent use cases are smart software killing jobs once held by fresh MBAs or newly minted CFAs. Why? Cheaper, and as long as the models are "good enough" to turn a profit, let ‘em rip. Yahoooo.
  3. Smart software is created by humans, and humans shape what it does, how it is deployed, and care not a whit about the knock on effects. Technology operates in the hands of humans. Humans are deeply flawed entities. Mother Theresas are outnumbered by street gangs in Reno, Nevada, based on my personal observations of that fine city.

Net net: Vox which can and will be replaced by a cheaper and good enough alternative doesn’t want to raise that issue. Instead, Vox wanders around the real subject. That subject is that as those who drive AI figure out how to use what’s available and good enough, certain types of work will be pushed into the black boxes of smart software. Could smart software have written this essay? Yes. Could it have done a better job? Publications like the supremely weird Buzzfeed and some consultants I know sure like “good enough.” As long as it is cheap, AI is a winner.

Stephen E Arnold, January 22, 2025

AWS and AI: Aw, Of Course

January 21, 2025

Matt Garman Interview Reveals AWS Perspective on AI

It should be no surprise that AWS is going all in on Artificial Intelligence. Will Amazon become an AI winner? Sure, if it keeps those managing the company’s third-party reseller program away from AWS. Nilay Patel, The Verge‘s Editor-in-Chief, interviewed AWS head Matt Garman. He explains “Why CEO Matt Garman Is Willing to Bet AWS on AI.” Patel writes:

“Matt has a really interesting perspective for that kind of conversation since he’s been at AWS for 20 years — he started at Amazon as an intern and was AWS’s original product manager. He’s now the third CEO in just five years, and I really wanted to understand his broad view of both AWS and where it sits inside an industry that he had a pivotal role in creating. … Matt’s perspective on AI as a technology and a business is refreshingly distinct from his peers, including those more incentivized to hype up the capabilities of AI models and chatbots. I really pushed Matt about Sam Altman’s claim that we’re close to AGI and on the precipice of machines that can do tasks any human could do. I also wanted to know when any of this is going to start returning — or even justifying — the tens of billions of dollars of investments going into it. His answers on both subjects were pretty candid, and it’s clear Matt and Amazon are far more focused on how AI technology turns into real products and services that customers want to use and less about what Matt calls ‘puffery in the press.'”

What a noble stance within a sea of AI hype. The interview touches on topics like AWS’ domination of streaming delivery, its partnerships with telco companies, and problems of scale as it continues to balloon. Garman also compares the shift to AI to the shift from typewriters to computers. See the write-up for more of their conversation.

Cynthia Murrell, January 21, 2025

AI Doom: Really Smart Software Is Coming So Start Being Afraid, People

January 20, 2025

Prepared by a still-alive dinobaby.

The essay “Prophecies of the Flood” gathers several comments about software that thinks and decides without any humans fiddling around. The “flood” metaphor evokes the streams of money about which money people fantasize. The word “flood” also evokes the Hebrew Bible’s presentation of a divinely initiated cataclysm intended to cleanse the Earth of widespread wickedness. Plus, one cannot overlook the image of small towns in North Carolina inundated in mud and debris from a very bad storm.


When the AI flood strikes as a form of divine retribution, will the modern ark be filled with humans? Nope. The survivors will be those smart agents infused with even smarter software. Tough luck, humanoids. Thanks, OpenAI, I knew you could deliver art that is good enough.

To sum up: A flood is bad news, people.

The essay states:

the researchers and engineers inside AI labs appear genuinely convinced they’re witnessing the emergence of something unprecedented. Their certainty alone wouldn’t matter – except that increasingly public benchmarks and demonstrations are beginning to hint at why they might believe we’re approaching a fundamental shift in AI capabilities. The water, as it were, seems to be rising faster than expected.

The signs of darkness, according to the essay, include:

  • Rising water in the generally predictable technology stream in the park populated with ducks
  • Agents that “do” something for the human user or another smart software system. To humans with MBAs, art history degrees, and programming skills honed at a boot camp, the smart software is magical. Merlin wears a gray T shirt, sneakers, and faded denims
  • Nifty art output in the form of images and — gasp! — videos.

The essay concludes:

The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.

Let’s assume that I buy this analysis and agree with the notion “prepare now.” How realistic is it that the United Nations, a couple of super powers, or a motivated individual can have an impact? Gentle reader, doom sells. Examples include The Big Short: Inside the Doomsday Machine, The Shifts and Shocks: What We’ve Learned – and Have Still to Learn – from the Financial Crisis, and Too Big to Fail: How Wall Street and Washington Fought to Save the Financial System from Crisis – and Themselves, and others, many others.

Have these dissections of problems had a material effect on regulators, elected officials, or the people in the bank down the street from your residence? Answer: Nope.

Several observations:

  1. Technology doom works because innovations have positive and negative impacts. To make matters more exciting, no one is exactly sure what the knock-on effects will be. Therefore, doom is coming along with the good parts
  2. Taking a contrary point of view creates opportunities to engage with those who want to hear something different. Insecurity is a powerful sales tool.
  3. Sending messages about future impacts pulls clicks. Clicks are important.

Net net: The AI revolution is a trope. Never mind that after decades of researchers’ work, a revolution has arrived. Lionel Messi allegedly said, “It took me 17 years to become an overnight success.” (Mr. Messi is a highly regarded professional soccer player.)

Will the ill-defined technology kill humans? Answer: Who knows. Will humans using ill-defined technology like smart software kill humans? Answer: Absolutely. Can “anyone” or “anything” take an action to prevent AI technology from rippling through society? Answer: Nope.

Stephen E Arnold, January 20, 2025

How To: Create Junk Online Content with AI

January 16, 2025

A dinobaby produced this post. Sorry. No smart software was able to help the 80 year old this time around.

Why sign up for a Telegram SEO expert in Myanmar or the Philippines? You can do it yourself. An explainer called “AI Marketing Strategy: How to Use AI for Marketing (Examples & Tools)” provides the recipe. The upside? Well, I am not sure. The downside? More baloney with which to dupe smart software and dumb humans.

What does the free write up cover? Here’s the list of topics. Which whet your appetite for AI-generated baloney?

  • A definition of AI marketing
  • How to use AI in your strategy for cutting corners and whacking out “real” information
  • The steps you have to follow to find the pot of gold
  • The benefits of being really smart with smart software
  • The three — count them — types of smart software marketing
  • The three — count them — “best” AI marketing software (I love “best”. So credible)
  • A smart software FAQ
  • How to “future proof” your business with an AI marketing strategy.

Let me give you an example of the riches tucked inside this EWeek “real” news article. The write up says:

Maintain data quality

Okay, marketers are among the world’s leaders in data accuracy, thoroughness, and detail fact checking. That’s why the handful of giant outfits providing smart software explain how to keep cheese on pizza with glue and hallucinate.

Why should one use smart software to market? That’s easy. The answer is that smart software makes it easy to produce output which may be incorrect. If you want more benefits, here’s the graphic from the write up which explains it to short-cutters who don’t want to spend time doing work the old-fashioned way:


A graphic which may or may not have been produced with smart software designed to create “McKinsey” type illustrations suitable for executives with imposter syndrome.

This graphic is followed by an explanation of the three — count them — three types of AI marketing. I am not sure the items listed are marketing, but, hey, when one is doing a deep dive one doesn’t linger too long in the content ocean with concepts like machine learning, natural language processing, and computer vision. (I am not joking. These are the three types of AI marketing. Who knew? Certainly not this dinobaby.)

The author, according to the definitive write up, possesses “more than 10 years of experience covering technology, software, and news.” The home base for this professional is the Philippines, which, along with Thailand and Cambodia, is one of the hot beds for a wide range of activities, including the use of smart software to generate those SEO services publicized on Telegram.

Was the eWeek article written with the help of AI? Boy, this dinobaby doesn’t know.

Stephen E Arnold, January 16, 2025
