AI RIFing Financial Analysts (Juniors Only for Now). And Tomorrow?

April 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Bill Gates Worries AI Will Take His Job, Says, ‘Bill, Go Play Pickleball, I’ve Got Malaria Eradication’.” Mr. Gates is apparently about to become a farmer. He is busy buying land. He took time out from his billionaire work today to point out that AI will nuke lots of jobs. What types of jobs will be most at risk? Amazon seems to be focused on using robots and smart software to clear out expensive, unreliable humans.

But the profession facing what might be called an interesting future is the financial analyst. “AI Is Coming for Wall Street: Banks Are Reportedly Weighing Cutting Analyst Hiring by Two-Thirds” asserts:

Incoming classes of junior investment-banking analysts could end up being cut by as much as two-thirds, some of the people suggested, while those brought on board could fetch lower salaries, on account of their work being assisted by artificial intelligence.

Okay, it is other people’s money, so no big deal if the smart software hallucinates as long as there is churn and percentage scrapes. But what happens when the “senior” analysts leave or get fired? Will smart software replace them, or is the idea that junior analysts who are “smart” will move up and add value “smart” software cannot?


Thanks, OpenAI. This is a good depiction of the “best of the best” at a major Wall Street financial institution after learning their future was elsewhere.

The article points out:

The consulting firm Accenture has an even more extreme outlook for industry disruption, forecasting that AI could end up replacing or supplementing nearly 75% of all working hours in the banking sector.

Let’s look at the financial sector’s focus on analysts. What other industrial sectors use analysts? Here are several my team and I track:

  1. Intelligence (business and military)
  2. Law enforcement
  3. Law
  4. Medical subrogation
  5. Consulting firms (niche, general, and technical)
  6. Publishing.

If the great trimming at McKinsey and the big New York banks delivers profits, how quickly will AI-anchored software and systems diffuse across organizations?

The answer to the question is, “Fast.”

Stephen E Arnold, April 19, 2024

Google Gem: Arresting People Management

April 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have worked for some well-managed outfits: Halliburton, Booz Allen, Ziff Communications, and others over my 55-year career. The idea that employees at Halliburton Nuclear (my assignment) would occupy the offices of a senior officer like Eugene Saltarelli was inconceivable. (Mr. Saltarelli sported a facial scar. When asked about the disfigurement, he would stare at the interlocutor and ask, “What scar?” Do you want to “take over” his office?) Another of my superiors at a firm in New York had a special method of shaping employee behavior. This professional did nothing to suppress rumors that two of his wives drowned during “storms” after falling off his sailboat. Did I entertain taking over his many-windowed office in Manhattan? Answer: Are you sure you internalized the anecdote?


Another Google management gem glitters in the public spotlight.

But at the Google, life seems to be different, maybe a little more frisky absent psychological behavior controls. I read “Nine Google Workers Get Arrested After Sit-In Protest over $1.2B Cloud Deal with Israel.” The main idea seems to be that someone at Google sold cloud services to the Israeli government. Employees apparently viewed the contract as bad, wrong, stupid, or some combination of attributes. The fix involved a 1960s-style sit-in. After a period of time elapsed, someone at Google called the police. The employee-protesters were arrested.

I recall hearing years ago that Google faced similar pushback about a contract with the US government. To be honest, Google has generated so many human resource moments, I have a tough time recalling each. A few are Mt. Everests of excellence; for example, the termination of Dr. Timnit Gebru. This Googler had the nerve to question the bias of Google’s smart software. She departed. I assume she enjoyed the images of biased signers of documents related to America’s independence and multi-ethnic soldiers in the World War II German army. Bias? Google thinks not, I guess.

The protest occurs as the Google tries to cope with increased market pressure and the tough-to-control costs of smart software. The quick fix is to nuke or RIF employees. “Google Lays Off Workers As Part of Pretty Large-Scale Restructuring” reports, citing Business Insider:

Ruth Porat, Google’s chief financial officer, sent an email to employees announcing that the company would create “growth hubs” in India, Mexico and Ireland. The unspecified number of layoffs will affect teams in the company’s finance department, including its treasury, business services and revenue cash operations units

That looks like off-shoring to me. The idea was a cookie cutter solution spun up by blue chip consulting companies 20, maybe 30 years ago. On paper, the math is more enticing than a new Land Rover and about as reliable. A state-side worker costs X fully loaded with G&A, benefits, etc. An off-shore worker costs X minus Y. If the delta means cost savings, go for it. What’s not to like?
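The cookie cutter math is simple enough to sketch in a few lines. The figures below are invented for illustration only; they are not from the article or any consultant’s deck:

```python
# Hypothetical back-of-the-envelope off-shoring math. All numbers are
# made up for illustration; the "overhead rate" stands in for G&A,
# benefits, and other fully loaded costs.

def fully_loaded_cost(base_salary: float, overhead_rate: float) -> float:
    """Base pay plus G&A, benefits, etc., expressed as an overhead rate."""
    return base_salary * (1 + overhead_rate)

def offshoring_delta(stateside: float, offshore: float) -> float:
    """The 'X minus Y' savings per worker that lands on the slide deck."""
    return stateside - offshore

stateside = fully_loaded_cost(base_salary=120_000, overhead_rate=0.40)  # X
offshore = fully_loaded_cost(base_salary=40_000, overhead_rate=0.25)    # X minus Y
savings = offshoring_delta(stateside, offshore)
print(f"Paper savings per worker: ${savings:,.0f}")
```

On paper the delta looks irresistible; the sketch, of course, omits the costs the slide decks also tend to omit: coordination overhead, turnover, and rework.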

According to a source cited in the New York Post:

“As we’ve said, we’re responsibly investing in our company’s biggest priorities and the significant opportunities ahead… To best position us for these opportunities, throughout the second half of 2023 and into 2024, a number of our teams made changes to become more efficient and work better, remove layers and align their resources to their biggest product priorities.”

Yep, align. That senior management team has a way with words.

Will those who are in fear of their jobs join in the increasingly routine Google employee protests? Will disgruntled staff sandbag products and code? Will those who are terminated write tell-alls about their experiences at an outfit operating under Code Red for more than a year?

Several observations:

  1. Microsoft’s quite effective push of its AI products and services continues. In certain key markets like New York City and the US government, Google is on the defensive. Hint: Microsoft has the advantage, and the Google is struggling to catch up.
  2. Google’s management of its personnel seems to create the wrong type of news. Example: Staff arrests. Is that part of Peter Drucker’s management advice?
  3. The Google leadership team appears to lack the ability to do its job in a quiet, effective, positive, and measured way.

Net net: The online ad money machine keeps running. But if the investigations into Google’s business practices get traction, Google will have additional challenges to face. The Sundar & Prabhakar Comedy team should make a TikTok-type, how-to video about human resource management. I would prefer a short video about the origin story for the online advertising method which allowed Google to become a fascinating outfit.

Stephen E Arnold, April 18, 2024

RIFed by AI? Do Not Give Hope Who Enter There

April 18, 2024

Rest assured, job seekers, it is not your imagination. Even those with impressive resumes are having trouble landing an interview, never mind a position. Case in point, Your Tango shares, “Former Google Employee Applies to 50 Jobs that He’s Overqualified For and Tracks the Alarming Number of Rejections.” Writer Nia Tipton summarizes a pair of experiments documented on TikTok by ex-Googler Jonathan Javier. He found prospective employers were not impressed with his roles at some of the biggest tech firms in the world. In fact, his years of experience may have harmed his chances: his first 50 applications were designed to see how he would fare as an overqualified candidate. Most companies either did not respond or rejected him outright. He was not surprised. Tipton writes:

“Javier explained that recruiters are seeing hundreds of applications daily. ‘For me, whenever I put a job break out, I get about 30 to 50 every single day,’ he said. ‘So again, everybody, it’s sometimes not your resume. It’s sometimes that there’s so many qualified candidates that you might just be candidate number two and number three.’”

So take heart, applicants, rejections do not necessarily mean you are not worthy. There are just not enough positions to go around. The write-up points to February numbers from the Bureau of Labor Statistics that show that, while the number of available jobs has been growing, so is the unemployment rate. Javier’s experimentation continued:

“In another TikTok video, Jonathan continued his experiment and explained that he applied to 50 jobs with two similar resumes. The first resume showed that he was overqualified, while the other showed that he was qualified. Jonathan quickly received 24 rejections for the overqualified resume, while he received 15 rejections for the qualified resume. Neither got him any interviews. Something interesting that Javier noted was how fast he was rejected with his overqualified resume. From this, he observed that overqualified candidates are often overlooked in favor of candidates that fit 100% of the qualities they are looking for. ‘That’s unfortunate because it creates a bias for people who might be older or who might have a lot more experience, but they’re trying to transition into a specific industry or a new position,’ he said.”

Ouch. It is unclear what, if anything, can be done about this specificity bias in hiring. It seems all one can do is keep trying. But not that way.

Cynthia Murrell, April 18, 2024

McKinsey & Co. Emits the Message “You Are No Longer the Best of the Best”

April 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I love blue chip consulting firms’ management tactics. I will not mention the private outfits which go public and then go private. Then the firms’ “best of the best” partners decide to split the firm. Wow. Financial fancy dancing or just evidence that “best of the best” is like those plastic bottles killing off marine life?

I read “McKinsey Is so Eager to Trim Staff That It’s Offering Some Employees 9 Months’ Pay to Go and Do Something Else.” I immediately asked myself, “What does ‘some’ mean?” I am guessing based on my experience that “all” of the RIF’ed staff are not getting the same deal. Well, that’s life in the exciting world of the best and the brightest. Some have to accept that there are better blue chippers who are, therefore, able to labor enthusiastically at a company known as the Big Dog in the consulting world.


Thanks, MSFT Copilot. (How’s your security today?)

The write up reports as “real” NY news:

McKinsey is attempting to slim the company down in a caring and supporting way by paying its workers to quit.

Hmmm. “Attempting” seems an odd word for a consulting firm focused on results. One slims down, or one remains fat and prone to assorted diseases, if I understood my medical professional. Is McKinsey signaling that its profit margin is slipping like the trust level for certain social media companies? Or is artificial intelligence the next big profit-making thing; therefore, let’s clear out the deadwood and harvest the benefits of smart software unencumbered by less smart humans?

Plus, the formerly “best and brightest” will get help writing their résumés. My goodness, imagine a less good Type A super achiever unable to write a résumé. But just yesterday those professionals were able to advise executives often with decades more experience, craft reports with asterisk dot points, and work seven days a week. These outstanding professionals need help writing their résumés. This strikes me as paternalistic and a way to sidestep legal action for questionable termination.

Plus, the folks given the chance to find their future elsewhere (as long as the formerly employed wizard conforms to McKinsey’s policies about client poaching) can allegedly use their McKinsey email accounts. What might a person who learns he or she is no longer the best of the best do with a live McKinsey email account? I have a couple of people on my research team who have studied mischief with emails. I assume McKinsey’s leadership knows a lot more than my staff. We don’t pontificate about pharmaceutical surfing; we do give lectures to law enforcement and intelligence professionals. Therefore, my team knows much, much less about email usage than McKinsey management does.

Deloitte, another blue chip outfit, is moving quickly into the AI space. I have heard that it wants to use AI and simultaneously advise its clients about AI. I wonder if Deloitte has considered that smart software might be marginally less expensive than paying some of the “best of the best” to do manual work for clients? I don’t know.

The blue chip outfit at which I worked long ago was a really humane place. Those rumors that an executive drowned a loved one were just rumors. The person was a kind and loving individual with a raised dais in his office. I recall I had to look up at him when seated in front of his desk. Maybe that’s just an AI-type hallucination from a dinobaby. I do remember the nurturing approach he took when pointing at a number and demanding of the VP presenting the document, “I want to know where that came from now.” Yes, that blue chip professional was patient and easy going as well.

I noted this passage in the Fortune “real” NY news:

A McKinsey spokesperson told Fortune that its unusual approach to layoffs is all part of the company’s core mission to help people ‘learn and grow into leaders, whether they stay at McKinsey or continue their careers elsewhere.’

I loved the sentence including the “learn and grow into leaders” verbiage. I am imagining a McKinsey HR professional saying, “Remember when we recruited you? We told you that you were among the top one percent of the top one percent. Come on. I know you remember? Oh, you don’t remember my assurances of great pay, travel, wonderful colleagues, tremendous opportunities to learn, and build your interpersonal skills. Well, that’s why you have been fired. But you can use your McKinsey email. Please, leave now. I have billable work to do that you obviously were not able to undertake and complete in a satisfactory manner. Oh, here’s your going away gift. It is a T shirt which says, ‘Loser@mckinsey.com.’”

Stephen E Arnold, April 4, 2024

Yeah, Stability at Stability AI: Will Flame Outs Light Up the Bubble?

April 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Inside the $1 Billion Love Affair between Stability AI’s Complicated Founder and Tech Investors Coatue and Lightspeed—And How It Turned Bitter within Months.” Interesting but, from my point of view, not surprising. High school science club members, particularly when preserving some of their teeny-bopper ethos into alleged adulthood, can be interesting people. And at work, exciting may be a suitable word. The write up’s main idea is that the wizard “left home in his pajamas.” Well, that’s a good summary of where Stability AI is.


The high school science club finds itself at odds with a mere school principal. The science club student knows that if the principal were capable, he would not be a mere principal. Thanks, MSFT Copilot. Were your senior managers in a high school science club?

The write up points out that Stability was the progenitor of Stable Diffusion, the art generator. I noticed the psycho-babbly terms stability and stable. Did you? Did the investors? Did the employees? Answer: Hey, there’s money to be made.

I noted this statement in the article:

The collaborative relationship between the investors and the promising startup gradually morphed into something more akin to that of a parent and an unruly child as the extent of internal turmoil and lack of clear direction at Stability became apparent, and even increased as Stability used its funding to expand its ranks.

Yep, high school management methods: “Don’t tell me what to do. I am smarter than you, Mr. Assistant Principal. You need me on the Quick Recall team, so go away,” echo in my mind in an Ezoic AI voice.

The write up continued the tale of mismanagement and adolescent angst, quoting the founder of Stability AI:

“Nobody tells you how hard it is to be a CEO and there are better CEOs than me to scale a business,” Mostaque said. “I am not sure anyone else would have been able to build and grow the research team to build the best and most widely used models out there and I’m very proud of the team there. I look forward to moving onto the next problem to handle and hopefully move the needle.”

I interpreted this as, “I did not know that calcium carbide in the lab sink drain could explode when in contact with water and then ignited, Mr. Principal.”

And, finally, let me point out this statement:

Though Stability AI’s models can still generate images of space unicorns and Lego burgers, music, and videos, the company’s chances of long-term success are nothing like they once appeared. “It’s definitely not gonna make me rich,” the investor says.

Several observations:

  1. Stability may presage the future for other high-flying and low-performing AI outfits. Why? Because teen management skills are problematic in a so-so economic environment.
  2. AI is everywhere, and its value is now derived by having something that solves a problem people will pay to have ameliorated. Shiny stuff fresh from the lab won’t make stakeholders happy.
  3. Discipline, particularly in high school science club members, may not be what a dinobaby like me would call rigorous. Sloppiness produces a mess and lost opportunities.

Net net: Ask about a potential employer’s high school science club memories.

Stephen E Arnold, April 4, 2024

Angling to Land the Big Google Fish: A Humblebrag Quest to Be CEO?

April 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My goodness, the staff and alums of DeepMind have been in the news. Wherever there are big bucks or big buzz opportunities, one will find the DeepMind marketing machinery. Consider “Can Demis Hassabis Save Google?” The headline has two messages for me. The first is that a “real” journalist thinks that Google is in big trouble. Big trouble translates to stakeholder discontent. That discontent means it is time to roll in a new Top Dog. I love poohbahing. But opining that the Google is in trouble is a stretch. Sure, it was aced by the Microsoft-OpenAI play not too long ago. But the Softies have moved forward with the Mistral deal and the mysterious Inflection deal. But the Google has money, market share, and might. Jake Paul can say he wants the Mike Tyson death stare. But that’s an opinion until Mr. Tyson hits Mr. Paul in the face.

The second message in the headline that one of the DeepMind tribe can take over Google, defeat Microsoft, generate new revenues, avoid regulatory purgatory, and dodge the pain of its swinging door approach to online advertising revenue generation; that is, people pay to get in, people pay to get out, and soon will have to subscribe to watch those entering and exiting the company’s advertising machine.


Thanks, MSFT Copilot. Nice fish.

What are the points of the essay which caught my attention other than the headline for those clued in to the Silicon Valley approach to “real” news? Let me highlight a few points.

First, here’s a quote from the write up:

Late on chatbots, rife with naming confusion, and with an embarrassing image generation fiasco just in the rearview mirror, the path forward won’t be simple. But Hassabis has a chance to fix it. To those who know him, have worked alongside him, and still do — all of whom I’ve spoken with for this story — Hassabis just might be the perfect person for the job. “We’re very good at inventing new breakthroughs,” Hassabis tells me. “I think we’ll be the ones at the forefront of doing that again in the future.”

Is the past a predictor of future success? More than lab-to-Android is going to be required. But the evaluation of the “good at inventing new breakthroughs” is an assertion. Google has been in the me-too business for a long time. The company sees itself as a modern Bell Labs and PARC. I think that the company’s perception of itself, its culture, and the comments of its senior executives suggest that the derivative nature of Google is neither remembered nor considered. It’s just “we’re very good.” Sure “we” are.

Second, I noted this statement:

Ironically, a breakthrough within Google — called the transformer model — led to the real leap. OpenAI used transformers to build its GPT models, which eventually powered ChatGPT. Its generative ‘large language’ models employed a form of training called “self-supervised learning,” focused on predicting patterns, and not understanding their environments, as AlphaGo did. OpenAI’s generative models were clueless about the physical world they inhabited, making them a dubious path toward human level intelligence, but would still become extremely powerful. Within DeepMind, generative models weren’t taken seriously enough, according to those inside, perhaps because they didn’t align with Hassabis’s AGI priority, and weren’t close to reinforcement learning. Whatever the rationale, DeepMind fell behind in a key area.

Google figured something out and then did nothing with the “insight.” There were research papers and chatter. But OpenAI (powered in part by Sam AI-Man) took the Google invention and used it to carpet bomb, mine, and set on fire Google’s presumed lead in anything related to search, retrieval, and smart software. The aftermath of the Microsoft OpenAI PR coup is a continuing story of rehabilitation. From what I have seen, Google needs more time getting its ageing body parts working again. The ad machine produces money, but the company reels from management issue to management issue with alarming frequency. Biased models complement spats with employees. Silicon Valley chutzpah causes neurological spasms among US and EU regulators. Something is broken, and I am not sure a person from inside the company has the perspective, knowledge, and management skills to fix an increasingly peculiar outfit. (Yes, I am thinking of ethnically-incorrect German soldiers loyal to a certain entity on Google’s list of questionable words and phrases.)

And, lastly, let’s look at this statement in the essay:

Many of those who know Hassabis pine for him to become the next CEO, saying so in their conversations with me. But they may have to hold their breath. “I haven’t heard that myself,” Hassabis says after I bring up the CEO talk. He instantly points to how busy he is with research, how much invention is just ahead, and how much he wants to be part of it. Perhaps, given the stakes, that’s right where Google needs him. “I can do management,” he says, ”but it’s not my passion. Put it that way. I always try to optimize for the research and the science.”

I wonder why the author of the essay does not query Jeff Dean, the former head of a big AI unit in Mother Google’s inner sanctum, about Mr. Hassabis. How about querying Mr. Hassabis’ co-founder of DeepMind about Mr. Hassabis’ temperament and decision-making method? What about chasing down former employees of DeepMind and getting those wizards’ perspective on what DeepMind can and cannot accomplish?

Net net: Somewhere in the little-understood universe of big technology, there is an invisible hand pointing at DeepMind and making sure the company appears in scientific publications, the trade press, peer reviewed journals, and LinkedIn funded content. Determining what’s self-delusion, fact, and PR wordsmithing is quite difficult.

Google may need some help. To be frank, I am not sure anyone in the Google starting line up can do the job. I am also not certain that a blue chip consulting firm can do much either. Google, after a quarter century of zero effective regulation, has become larger than most government agencies. Its institutional mythos creates dozens of delusional Ulysses who cannot separate fantasies of the lotus eaters from the gritty reality of the company as one of the contributors to the problems facing youth, smaller businesses, governments, and cultural norms.

Google is Googley. It will resist change.

Stephen E Arnold, April 3, 2024

AI and Stupid Users: A Glimpse of What Is to Come

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

When smart software does not deliver, who is responsible? I don’t have a dog in the AI fight. I am thinking about deployment of smart software in professional environments. When the outputs are wonky or do not deliver the bang of a competing system, what is the customer supposed to do? Is the vendor responsible? Is the customer responsible? Is the person who tried to validate the outputs guilty of putting a finger on the scale of a system whose developers cannot explain exactly how an output was determined? Viewed from one angle, this is the Achilles’ heel of artificial intelligence. Viewed from another angle, determining responsibility is an issue which, in my opinion, will be decided by legal processes. In the meantime, the issue of a system’s not working can have significant consequences. How about those automated systems on aircraft which dive suddenly or vessels which can jam a ship channel?

I read a write up which provides a peek at what large outfits pushing smart software will do when challenged about quality, accuracy, or other subjective factors related to AI-imbued systems. Let’s take a quick look at “Customers Complain That Copilot Isn’t As Good as ChatGPT, Microsoft Blames Misunderstanding and Misuse.”

The main idea in the write up strikes me as:

Microsoft is doing absolutely everything it can to force people into using its Copilot AI tools, whether they want to or not. According to a new report, several customers have reported a problem: it doesn’t perform as well as ChatGPT. But Microsoft believes the issue lies with people who aren’t using Copilot correctly or don’t understand the differences between the two products.

Yep, the user is the problem. I can imagine the adjudicator (illustrated as a mother) listening to a large company’s sales professional and a professional certified developer arguing about how the customer went off the rails. Is the original programmer the problem? Is the new manager in charge of AI responsible? Is it the user or users?


Illustration by MSFT Copilot. Good enough, MSFT.

The write up continues:

One complaint that has repeatedly been raised by customers is that Copilot doesn’t compare to ChatGPT. Microsoft says this is because customers don’t understand the differences between the two products: Copilot for Microsoft 365 is built on the Azure OpenAI model, combining OpenAI’s large language models with user data in the Microsoft Graph and the Microsoft 365 apps. Microsoft says this means its tools have more restrictions than ChatGPT, including only temporarily accessing internal data before deleting it after each query.

Here’s another snippet from the cited article:

In addition to blaming customers’ apparent ignorance, Microsoft employees say many users are just bad at writing prompts. “If you don’t ask the right question, it will still do its best to give you the right answer and it can assume things,” one worker said. “It’s a copilot, not an autopilot. You have to work with it,” they added, which sounds like a slogan Microsoft should adopt in its marketing for Copilot. The employee added that Microsoft has hired partner BrainStorm, which offers training for Microsoft 365, to help create instructional videos to help customers create better Copilot prompts.

I will be interested in watching how these “blame games” unfold.

Stephen E Arnold, March 29, 2024

The Many Faces of Zuckbook

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

As evidenced by his business decisions, Mark Zuckerberg seems to be a complicated fellow. For example, a couple of recent articles illustrate this contrast: On one hand is his commitment to support open source software, an apparently benevolent position. On the other, Meta is once again in the crosshairs of EU privacy advocates for what they insist is its disregard for the law.

First, we turn to a section of VentureBeat’s piece, “Inside Meta’s AI Strategy: Zuckerberg Stresses Compute, Open Source, and Training Data.” In it, reporter Sharon Goldman shares highlights from Meta’s Q4 2023 earnings call. She emphasizes Zuckerberg’s continued commitment to open source software, specifically AI software Llama 3 and PyTorch. He touts these products as keys to “innovation across the industry.” Sounds great. But he also states:

“Efficiency improvements and lowering the compute costs also benefit everyone including us. Second, open source software often becomes an industry standard, and when companies standardize on building with our stack, that then becomes easier to integrate new innovations into our products.”

Ah, there it is.

Our next item was apparently meant to be sneaky, but who did Meta think it was fooling? The Register reports, “Meta’s Pay-or-Consent Model Hides ‘Massive Illegal Data Processing Ops’: Lawsuit.” Meta is attempting to “comply” with the EU’s privacy regulations by making users pay to opt in to them. That is not what regulators had in mind. We learn:

“Those of us with aunties on FB or friends on Instagram were asked to say yes to data processing for the purpose of advertising – to ‘choose to continue to use Facebook and Instagram with ads’ – or to pay up for a ‘subscription service with no ads on Facebook and Instagram.’ Meta, of course, made the changes in an attempt to comply with EU law. But privacy rights folks weren’t happy about it from the get-go, with privacy advocacy group noyb (None Of Your Business), for example, sarcastically claiming Meta was proposing you pay it in order to enjoy your fundamental rights under EU law. The group already challenged Meta’s move in November, arguing EU law requires consent for data processing to be given freely, rather than to be offered as an alternative to a fee. Noyb also filed a lawsuit in January this year in which it objected to the inability of users to ‘freely’ withdraw data processing consent they’d already given to Facebook or Instagram.”

And now eight European Consumer Organization (BEUC) members have filed new complaints, insisting Meta’s pay-or-consent tactic violates the European General Data Protection Regulation (GDPR). While that may seem obvious to some, Meta insists it is in compliance with the law. Because of course it does.

Cynthia Murrell, March 29, 2024

My Way or the Highway, Humanoid

March 28, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Curious how “nice” people achieve success? “Playground Bullies Do Prosper – And Go On to Earn More in Middle Age” may have an answer. The write up says:

Children who displayed aggressive behavior at school, such as bullying or temper outbursts, are likely to earn more money in middle age, according to a five-decade study that upends the maxim that bullies do not prosper.

If you want a tip for career success, I would interpret the write up’s information as: start when young. Also, start small. The Jake Paul approach to making news is to fight the ageing Mike Tyson. Is that for you? I know I would not start small by irritating someone who walks with a cane. But, to each his or her own. If there is a small child selling Girl Scout Cookies, one might sharpen his or her leadership skills by knocking the cookie box to the ground and stomping on it. The modest demonstration of power can then be followed with the statement, “Those cookies contain harmful substances. You should be ashamed.” Then as your skills become more fluid and automatic, move up. I suggest testing one’s bullying expertise on a local branch of a street gang involved in possibly illegal activities.

image

Thanks, MSFT Copilot. I wonder if you used sophisticated techniques when explaining to OpenAI that you were hedging your bets.

The write up quotes an expert as saying:

“We found that those children who teachers felt had problems with attention, peer relationships and emotional instability did end up earning less in the future, as we expected, but we were surprised to find a strong link between aggressive behavior at school and higher earnings in later life,” said Prof Emilia Del Bono, one of the study’s authors.

A bully might respond to this professor and say, “What are you going to do about it?” One response is, “You will earn more, young student.” The write up reports:

Many successful people have had problems of various kinds at school, from Winston Churchill, who was taken out of his primary school, to those who were expelled or suspended.

Will nice guys who are not bullies become the leaders of the post-Covid world? The article quotes another expert as saying:

“We’re also seeing a generational shift where younger generations expect to have a culture of belonging and being treated with fairness, respect and kindness.”

Sounds promising. Has anyone told the companies terminating thousands of workers? What about outfits like IBM, which are dumping humans for smart software? Yep, progress just like that made at Google in the last couple of years.

Stephen E Arnold, March 28, 2024

A Single, Glittering Google Gem for 27 March 2024

March 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

So many choices. But one gem outshines the others. Google’s search generative experience is generating publicity. The old chestnut may be true. Any publicity is good publicity. I would add a footnote. Any publicity about Google’s flawed smart software is probably good for Microsoft and other AI competitors. Google definitely looks as though it has some behaviors that are — how shall I phrase it? — questionable. No, maybe, ill-considered. No, let’s go with bungling. That word has a nice ring to it. Bungling.

image

I learned about this gem in “Google’s New AI Search Results Promotes Sites Pushing Malware, Scams.” The write up asserts:

Google’s new AI-powered ‘Search Generative Experience’ algorithms recommend scam sites that redirect visitors to unwanted Chrome extensions, fake iPhone giveaways, browser spam subscriptions, and tech support scams.

The technique which gets the user from the quantumly supreme Google to the bad actor goodies is redirects. Some user notification functions pump even more inducements toward the befuddled user. (See, bungling and befuddled. Alliteration.)

Why do users fall for these bad actor gift traps? It seems that Google SGE conversational recommendations sound so darned wonderful that Google users just believe the GOOG cares about the information it presents to those who “trust” the company.

The write up points out that the DeepMinded Google provided this information about the bumbling SGE:

"We continue to update our advanced spam-fighting systems to keep spam out of Search, and we utilize these anti-spam protections to safeguard SGE," Google told BleepingComputer. "We’ve taken action under our policies to remove the examples shared, which were showing up for uncommon queries."

Isn’t that reassuring? I wonder if the anecdote about this most recent demonstration of the Google’s wizardry will become part of the Sundar & Prabhakar Comedy Act?

This is a gem. It combines Google’s management process, word salad frippery, and smart software into one delightful bouquet. There you have it: Bungling, befuddled, bumbling, and bouquet. I am adding blundering. I do like butterfingered, however.

Stephen E Arnold, March 27, 2024
