Mr. Musk Knows Best When It Comes to Online Ads

November 9, 2023

This essay is the work of a dumb humanoid. No smart software required.

Other than the eye-catching yet underwhelming name change, X (formerly Twitter) has remained quiet. Users still aren’t paying for the check mark that verifies their identity, and Elon Musk hasn’t garnered any fresh ire. Mashable has the most exciting news about X, and it relates to ads: “X Rolls Out New Ad Format That Can’t Be Reported, Blocked.”

X might be a social media platform, but it is also a business that needs to make a profit. X has failed to attract new advertisers, so the platform is experimenting with a new type of ad. X users report that the new ads don’t allow them to like or retweet them. Stranger still, the ads do not disclose that they are advertisements at all.

The ads consist of a photo, a fake avatar, and vague yet interesting text. They are disguised as regular tweets. The new ads are of “chumbox” quality; that is, they are the low-quality, spammy clickbait ads found at the bottom of articles on content farm Web sites. They’re similar to the ads in the back of magazines or comic books that pitched drawing schools, mail order gadget scams, and sea monkeys.

Chumbox ads point to X’s failing profitability. Advertisers lost interest in X after Musk acquired the platform. X is partnering with third-party advertisers in the ad tech industry to sell available ad inventory. Google also announced a partnership with X to sell programmatic advertising.

Musk made another change that isn’t sitting well with users:

“The new ad format arrives to X around the same time the company made another decision that makes the platform less transparent. Earlier this week, under a directive from Musk himself, X removed headlines and other context from links shared to the platform. Instead of seeing the title of an article or other link posted to X, users now simply see an embed of the header image with the corresponding domain name displayed like a watermark-like overlay in the corner of the photo. Musk said he made the change to how links were displayed because he didn’t like the way it previously looked.”

X as an advertising platform is doing a bang-up job. Lots of advertisers. Lots of money. Lots of opportunity. I, however, am not sure I see X as does Mr. M.

Whitney Grace, November 9, 2023

Mommy, Mommy, He Will Not Share the Toys (The Rat!)

November 8, 2023

This essay is the work of a dumb humanoid. No smart software required.

In your past, did someone take your toy dump truck or walk up to you in the lunch room in full view of the other nine year olds and take your chocolate chip cookie? What an inappropriate action. What does the aggrieved nine year old do if he or she comes from an upper economic class? Call the family lawyer? Of course. That is a logical action. The cookie is not a cookie; it is a principle.


“That’s right, mommy. The big kid at school took my lunch and won’t let me play on the teeter totter. Please, help me, mommy. That big kid is not behaving right,” says the petulant child. The mommy is sympathetic. An injustice has been wrought upon her flesh and blood. Thanks, MidJourney. I learned that “nasty” is a forbidden word. It is a “nasty blow” that you dealt me.

“Google and Prominent Telecom Groups Call on Brussels to Act Over Apple’s iMessage” strikes me as a similar situation. A bigger child has taken the cookies. The aggrieved children want those cookies back. They also want retribution. Taking the cookies? That’s unjust from the petulant kids’ point of view.

The Financial Times’s article takes a different approach, using more mature language. Here’s a snippet of what’s shakin’ in the kindergarten mind:

Currently, only Apple users are able to communicate via iMessage, making its signature “blue bubble” texts a key factor in retaining iPhone owners’ loyalty, especially among younger consumers. When customers using smartphones running Google’s Android software join an iMessage chat group all the messages change color, indicating it has defaulted to standard SMS.

So what’s up? The FT reports:

Rivals have long sought to break iMessage’s exclusivity to Apple’s hardware, in the hope that it might encourage customers to switch to its devices. In a letter sent to the commission and seen by the Financial Times, the signatories, which include a Google senior vice-president and the chief executives of Vodafone, Deutsche Telekom, Telefónica and Orange, claimed Apple’s service meets the qualitative thresholds of the act. It therefore should be captured by the rules to “benefit European consumers and businesses”, they wrote.

I wonder if these giant corporations realize that some perceive many of their business actions as somewhat similar; specifically, the fences constructed so that competitors cannot encroach on their products and services.

I read the FT’s article as the equivalent of the child who had his cookie taken away. The parent — in this case — is the legal system of the European Union.

Those blue and green bubbles are to be shared. What will mommy decide? In the US, some mommies call their attorneys and threaten or take legal action. That’s right and just. I want those darned cookies and my mommy is going to get them, get the wrongdoers put in jail, and do significant reputational damage.

“Take my cookies; you pay,” some say in a most Googley way.

Stephen E Arnold, November 8, 2023

The AI Bandwagon: A Hoped for Lawyer Billing Bonanza

November 8, 2023

This essay is the work of a dumb humanoid. No smart software required.

The AI bandwagon is picking up speed. A dark smudge appears in the sky. What is it? An unidentified aerial phenomenon? No, it is a dense cloud of legal eagles. I read “U.S. Regulation of Artificial Intelligence: Presidential Executive Order Paves the Way for Future Action in the Private Sector.”


A legal eagle — aka a lawyer, or that segment of humanity one of Shakespeare’s characters wanted to drown — is thrilled to read an official version of the US government’s AI statement. Look at what is coming from above. It is money from fees. Thanks, Microsoft Bing, you do understand how the legal profession finds pots of gold.

In this essay, which is free advice and possibly marketing hoo hah, I noted this paragraph:

While the true measure of the Order’s impact has yet to be felt, clearly federal agencies and executive offices are now required to devote rigorous analysis and attention to AI within their own operations, and to embark on focused rulemaking and regulation for businesses in the private sector. For the present, businesses that have or are considering implementation of AI programs should seek the advice of qualified counsel to ensure that AI usage is tailored to business objectives, closely monitored, and sufficiently flexible to change as laws evolve.

Absolutely. I would wager a 25-cent coin that the advice, unlike the free essay, will incur a fee. Some of those legal fees make the pittance I charge look like the cost of a chopped liver sandwich in a Manhattan deli.

Stephen E Arnold, November 8, 2023

Tech Leaders May Be Over Dramatizing AI Risks For Profit and Lock In

November 8, 2023

This essay is the work of a dumb humanoid. No smart software required.

Advancing technology is good, because new innovations can help humanity. As much as technology can help humanity, it can also hinder the species. That’s why it’s important for rules to be established to regulate new technology, such as AI algorithms. Rules shouldn’t be so stringent as to prevent further innovation, however. You’d think that Big Tech companies would downplay the risks of AI so they could experiment without constraints. It’s actually the opposite, says Google Brain cofounder Andrew Ng.

He spoke out against the corporate overlords via Yahoo Finance: “Google Brain Cofounder Says Big Tech Companies Are Inflating Fears About The Risks Of AI Wiping Out Humanity Because They Want To Dominate The Market.” Ng claims that Big Tech companies don’t want competition from open source AI. He said that Big Tech companies are inflating the dangers of AI driving humans to extinction so governments will enforce hefty regulations. These regulations would force open source AI projects and smaller tech businesses to tread water until they went under.

Big Tech companies want to make and sell their products in a free-for-all environment so they can earn as much money as possible. If they have less competition, then they don’t need to worry about their margins or losing control of their markets. Open source AI offers the biggest competition to Big Tech, so they want it gone.

In May 2023, AI experts and CEOs signed a statement from the Center for AI Safety that compared the risks of AI to nuclear war and a pandemic.

“Governments around the world are looking to regulate AI, citing concerns over safety, potential job losses, and even the risk of human extinction. The European Union will likely be the first region to enforce oversight or regulation around generative AI. Ng said the idea that AI could wipe out humanity could lead to policy proposals that require licensing of AI, which risked crushing innovation. Any necessary AI regulation should be created thoughtfully, he added.”

Are Big Tech heads adding to the already saturated culture of fear that runs rampant in the United States? It’s already fueled by the Internet and social media, which are like a computer science major buzzing from seven Red Bulls. Maybe AI fears will be the next big thing we’ll need to worry about. Should we start taking bets?

Whitney Grace, November 8, 2023

Amazon: Numerical Recipes Poison Good Deals

November 8, 2023

Dinobaby here. I read “FTC Alleges Amazon Used a Price-Gouging Algorithm.” The allegations in the article are likely to ruffle some legal eagles wearing Amazon merchandise. The main idea is that a numerical recipe named after the dinobaby’s avatar manipulated prices to generate more revenue for the Bezos bulldozer. This is a bulldozer relocating to Miami too. Miami says, “Buenos días.” Engadget says:

Amazon faces allegations from the U.S. Federal Trade Commission (FTC) of wielding price-gouging algorithms through an operation called “Project Nessie” according to court documents filed Thursday. The FTC says the algorithm has generated more than $1 billion in excess profit for Jeff Bezos’s e-commerce giant.

Let’s assume the allegations contain a dinosaur scale or two of truth. What could one living in rural Kentucky conclude? How about these notions:

  • Amazon knows how to use fancy math in a way that advantages itself. Imagine the earning power of manipulated algorithms powered by smart software in the hands of engineers eager to earn a bonus, a promotion, and maybe a ride in a rocket ship from the fountain head of the online bookstore. Yep, just imagine.
  • Amazon got caught. If the justice system prevails, will shoppers avoid Amazon? Nope, in my opinion. There are more Amazon delivery vehicles in the area where I live in nowhere Kentucky than on the main highway. Convenience wins. So what if the pricing is wonky? Couch potatoes like couches, not driving 30 minutes to a so-called store. Laws just may not matter when it comes to big tech outfits.
  • Other companies may learn from Amazon. The estimable Coca-Cola machine in some whiz kids’ dreams learns what a person likes and prices accordingly. That innovation may become a reality as some bright sparks invent the future of billing as much as possible and hamstringing competitors. Nice work, if Amazon does have the alleged money machine algorithms.

What is the future of retail? I would offer the opinion that trickery, mendacity, and cleverness will become the keys to success. I am glad I am an old dinobaby, but I like the name “Nessie.” My mama Dino had a friend named Nessie. Nice fangs and big quiet pads on her claws. Perfect for catching and killing prey.

Stephen E Arnold, November 7, 2023

The Risks of Smart Software in the Hands of Fullz Actors and Worse

November 7, 2023

This essay is the work of a dumb humanoid. No smart software required.

The ChatGPT and Sam AI-Man parade is getting more acts. I spotted some thumbs up from Satya Nadella about Sam AI-Man and his technology. The news service Techmeme provided me with dozens of links and enticing headlines about enterprise this and turbo that GPT. Those trumpets and tubas were pumping out the digital version of Funiculì, Funiculà.

I want to highlight one write up and point out an issue with smart software that appears to have been ignored or overlooked. Like the iceberg that sank the RMS Titanic, it may be a heck of a lot more dangerous than Captain Edward Smith appreciated.


The crowd is thrilled with the new capabilities of smart software. Imagine automating mundane, mindless work. Over the oom-pah of the band, one can sense the excitement of the Next Big Thing getting Bigger and more Thingier. In the crowd, however, are real or nascent bad actors. They are really happy too. Imagine how easy it will be to automate processes designed to steal personal financial data or other chinks in humans’ armor!

The article is “How OpenAI Is Building a Path Toward AI Agents.” The main idea is that one can type instructions into Sam AI-Man’s GPT “system” and have smart software hook together discrete functions. These functions can then deliver an output requiring the actions of different services.

The write up approaches this announcement or marketing assertion with some prudence. The essay points out that “customer chatbots aren’t a new idea.” I agree. Connecting services has been one of the basic ideas of the use of software. Anyone who has used notched cards to retrieve items related to one another will understand the value of automation. And now, if the Sam AI-Man announcements are accurate, that capability no longer requires old-fashioned learning of the ropes.

The cited write up about building a path asserts:

Once you start enabling agents like the ones OpenAI pointed toward today, you start building the path toward sophisticated algorithms manipulating the stock market; highly personalized and effective phishing attacks; discrimination and privacy violations based on automations connected to facial recognition; and all the unintended (and currently unimaginable) consequences of infinite AIs colliding on the internet.

Fear, uncertainty, and doubt are staples of advanced technology. And the essay makes clear that the rule maker in chief is Sam AI-Man; to wit the essay says:

After the event, I asked Altman how he was thinking about agents in general. Which actions is OpenAI comfortable letting GPT-4 take on the internet today, and which does the company not want to touch? Altman’s answer is that, at least for now, the company wants to keep it simple. Clear, direct actions are OK; anything that involves high-level planning isn’t.

Let me introduce my observations about the Sam AI-Man innovations and the type of explanations about the PR and marketing event which has whipped up pundits, poohbahs, and Twitter experts (perhaps I should say X-spurts?).

First, the Sam AI-Man announcements strike me as making orchestration a service easy to use and widely available. Bad things won’t be allowed. But the core idea of what I call “orchestration” is where the parade is marching. I hear the refrain “Some think the world is made for fun and frolic.” But I don’t agree, I don’t agree. Because as advanced tools become widely available, the early adopters are not exclusively those who want to link a calendar to an email to a document about a meeting to talk about a new marketing initiative.

Second, the ability of Sam AI-Man to determine what’s in bounds and out of bounds is different from refereeing a pickleball game. Some of the players will be nation states with an adversarial view of the US of A. Furthermore, there are bad actors who have a knack for linking automated information to online extortion. These folks will be interested in cost cutting and efficiency. More problematic, some of these individuals will be more active in testing how orchestration can facilitate their human trafficking activities or drug sales.

Third, government entities and people like Sam AI-Man are, by definition, now in reactive mode. What I mean is that the announcement and the chatter about automating the work required to create a snappy online article do not reflect what a bad actor will do. Individuals will see opportunities to create new ways to exploit the cluelessness of employees, senior citizens, and young people. The cheerful announcements and the parade tunes cannot drown out the low frequency rumbles of excitement now rippling through the bad actor grapevines.

Net net: Crime propelled by orchestration is now officially a thing. The “regulations” of smart software, like the professionals who will have to deal with the downstream consequences of automation, are out of date. Am I worried? For me personally, no, I am not worried. For those who have to enforce the laws which govern a social construct? Yep, I have a bit of concern. Certainly more than those who are laughing and enjoying the parade.

Stephen E Arnold, November 7, 2023

Missing Signals: Are the Tools or Analysts at Fault?

November 7, 2023

This essay is the work of a dumb humanoid. No smart software required.

Returning from a trip to DC yesterday, I thought about “signals.” The pilot — a specialist in hit-the-runway-hard landings  — used the word “signals” in his welcome-aboard speech. The word sparked two examples of missing signals. The first is the troubling kinetic activities in the Middle East. The second is the US Army reservist who went on a shooting rampage.


The intelligence analyst says, “I have tools. I have data. I have real time information. I have so many signals. Now which ones are important, accurate, and actionable?” Our intrepid professional displays the reality of separating the signal from the noise. Scary, right? Time for a Starbucks visit.

I know zero about what software and tools, systems and informers, and analytics and smart software the intelligence operators in Israel relied upon. I know even less about what mechanisms were in place when Robert Card killed more than a dozen people.

The Center for Strategic and International Studies published “Experts React: Assessing the Israeli Intelligence and Potential Policy Failure.” The write up stated:

It is incredible that Hamas planned, procured, and financed the attacks of October 7, likely over the course of at least two years, without being detected by Israeli intelligence. The fact that it appears to have done so without U.S. detection is nothing short of astonishing. The attack was complex and expensive.

And one more passage:

The fact that Israeli intelligence, as well as the international intelligence community (specifically the Five Eyes intelligence-sharing network), missed millions of dollars’ worth of procurement, planning, and preparation activities by a known terrorist entity is extremely troubling.

Now let’s shift to the Lewiston Maine shooting. I had saved on my laptop “Six Missed Warning Signs Before the Maine Mass Shooting Explained.” The UK newspaper The Guardian reported:

The information about why, despite the glaring sequence of warning signs that should have prevented him from being able to possess a gun, he was still able to own over a dozen firearms, remains cloudy.

Those “signs” included punching a fellow officer in the US Army Reserve force, spending some time in a mental health facility, family members emitting “watch this fellow” statements, vibes about issues from his workplace, and the weapon activity.

On one hand, Israel had intelligence inputs from just about every imaginable high-value source from people and software. On the other hand, in a small town the only signal that was not emitted by Mr. Card was buying a billboard and posting a message saying, “Do not invite Mr. Card to a church social.”

As the plane droned at 1973 speeds toward the flyover state of Kentucky, I jotted down several thoughts. Like or not, here these ruminations are:

  1. Despite the baloney about identifying signals and determining which are important and which are not, existing systems and methods failed bigly. The proof? Dead people. Subsequent floundering.
  2. The mechanisms in place to deliver on point, significant information do not work. Perhaps it is the hustle bustle of everyday life? Perhaps it is that humans are not very good at figuring out what’s important and what’s unimportant. The proof? Dead people. Constant news releases about the next big thing in open source intelligence analysis. Get real. This stuff failed at the scale of SBF’s machinations.
  3. The uninformed pontifications of cyber security marketers, the bureaucratic chatter flowing from assorted government agencies, and the cloud of unknowing obscure signals as subtle as the foghorn on a cruise ship with a passenger overboard. Hello, hello: the basic analysis processes don’t work. A WeWork investor’s thought processes were more on point than the output of the reporting systems in use in Maine and Israel.

After the aircraft did the thump-and-bump landing, I was able to walk away. That’s more than I can say for the victims of analysis, investigation, and information processing methods in use where moose roam free and where intelware is crafted and sold like canned beans at Trader Joe’s.

Less baloney and more awareness that talking about advanced information methods is a heck of a lot easier than delivering actual signal analysis.

Stephen E Arnold, November 7, 2023

AI Makes Cyberattacks Worse. No Fooling?

November 7, 2023

This essay is the work of a dumb humanoid. No smart software required.

Why does everyone appear to be surprised by the potential dangers of cyber attacks? Science fiction writers and even the crazy conspiracy theorists with their tin foil hats predicted that technology would outpace humanity one day. Tech Radar wrote an article about how AI like ChatGPT makes cyber attacks more dangerous than ever: “AI Is Making Cyberattacks Even Smarter And More Dangerous.”

Tech experts want to know how humans and AI algorithms compare when it comes to creating scams. IBM’s Security Intelligence X-Force team accepted the challenge with an experiment about phishing emails. They compared human-written phishing emails against those ChatGPT wrote. IBM’s X-Force team discovered that the human-written emails had higher click rates, giving them a slight edge over ChatGPT. It was a very slight edge, which suggests AI algorithms aren’t far from matching and then outpacing human scammers.

Human-written phishing scams have higher click rates because of emotional intelligence, personalization, and the ability to connect with their victims.

“All of these factors can be easily tweaked with minimal human input, making AI’s work extremely valuable. It is also worth noting that the X-Force team could get a generative AI model to write a convincing phishing email in just five minutes from five prompts – manually writing such an email would take the team about 16 hours. ‘While X-Force has not witnessed the wide-scale use of generative AI in current campaigns, tools such as WormGPT, which were built to be unrestricted or semi-restricted LLMs were observed for sale on various forums advertising phishing capabilities – showing that attackers are testing AI’s use in phishing campaigns,’ the researchers concluded.”

It’s only a matter of time before the bad actors learn how to train the algorithms to be as convincing as their human creators. White hat hackers have a lot of potential to earn big bucks at venture startups.

Whitney Grace, November 7, 2023

Tech Writer Overly Frustrated With Companies

November 7, 2023

This essay is the work of a dumb humanoid. No smart software required.

We all begin our adulthoods as wide-eyed, naïve go-getters who are out to change the world. It only takes a few years for our hopes and dreams to be dashed by the menial, insufferable behaviors that plague businesses. We all have stories about incompetence, wasted resources, passing the buck, and butt kissers. Ludicity is a blog written by a tech engineer who vents his frustrations and shares his observations about his chosen field. His first post in November 2023 highlights the stupidity of humanity and upper management: “What The Goddamn Hell Is Going On In The Tech Industry?”

For this specific post, the author reflects on a comment he received regarding how companies can save money by eliminating useless bodies and giving the competent staff the freedom to do their jobs. The comment in question blamed the author for creating unnecessary stress and not being a team player. In turn, the author pointed out the comment’s illogical reasoning and subsequently dunked his head in water to dampen his screams. The author writes Ludicity for cathartic reasons, especially to commiserate with his fellow engineers.

The author turned 29 in 2023, so he’s ending his twenties with the same depression and dismal outlook we all share:

“There’s just some massive unwashed mass of utterly stupid companies where nothing makes any sense, and the only efficiencies exist in the department that generates the money to fund the other stupid stuff, and then a few places doing things halfway right. The places doing things right tend to be characterized by being small, not being obsessed with growth, and having calm, compassionate founders who still keep a hand on the wheel. And the people that work there tend not to know the people that work elsewhere. They’re just in some blessed bubble where the dysfunction still exists in serious quantities, but that quantity is like 1/10th the intensity of what it is elsewhere.”

The author, however, still possesses hope. He wants to connect with like-minded individuals who are tired of the same corporate schtick and want to work together at a company that actually gets work done.

We all want to do that. Unfortunately, the author might be better off starting his own company to attract his brethren and see what happens. It’ll be hard, but not as hard as going back to school or dealing with corporate echo chambers.

Whitney Grace, November 7, 2023

ACM Kills Print Publications But Dodges the Money Issue

November 6, 2023

This essay is the work of a dumb humanoid. No smart software required.

In January 2024, the Association for Computing Machinery will kill off its print publications. “Ceasing Print Publication of ACM Journals and Transactions” says good bye to the hard copy instances of Communications of the ACM, ACM Inroads, and a couple of other publications. It is possible that ACM will continue to produce print versions of material for students. (I thought students were accustomed to digital content. Guess the ACM knows something I don’t. That’s not too difficult. I am a dinobaby, who read ACM publications for the stories, not the pictures.)


The perspiring clerk asks, “But what about saving the whales?” The CFO, carrying the burden of talking to auditors, replies, “It’s money, stupid, not that PR baloney.” Thanks, Microsoft Bing. You understand perspiring accountants. Do you have experience answering IRS questions about some calculations related to Puerto Rico?

Why would a professional trade outfit dismiss paper? My immediate and uninformed answer to this question is, “Cost. Stuff like printing, storage, fulfillment, and design cost money.” I would be wrong, of course. The ACM gives these reasons:

  • Be environmentally friendly. (Don’t ACM supporters use power sucking data centers often powered by coal?)
  • Electronic publications have more features. (One example is a way to charge a person who wants to read an article and to nip in the bud the daring soul pumping money into a photocopy machine to have an article to read whilst taking a break from the coffee and mobile phone habit.)
  • Subscriptions are tanking.

I think the “subscriptions” bit is a way to say, “Print stuff is very expensive to produce and more expensive to sell.”

With the New York Times allegedly poised to use smart software to write its articles, when will the ACM dispense with member contributions?

Stephen E Arnold, November 6, 2023
