FOGINT: Telegram Game Surfs on an Implied Link: Musk, X, Crypto Game

October 29, 2024

dino orange_thumbWritten by a humanoid dinobaby. No AI except the illustration.

The FOGINT team spotted a report from Decrypt.com. The article is “Why ‘X Empire’ Telegram Players Are Complaining to Elon Musk About the Airdrop.” If you don’t recognize the Crypto and Telegram jargon, the information in the Decrypt article will not make much sense.

For crypto folks, the X Empire Telegram game is news. According to the cited article:

Telegram tap-to-earn game X Empire will launch its X token on The Open Network (TON) on Thursday, but its reveal of airdrop allocations has drawn complaints from players who say they were deemed ineligible for a share of the rewards. And some of them are telling Elon Musk about it.

From the point of view of Telegram, X Empire is another entrepreneur leveraging the Telegram platform. With each popular egame, Telegram edges closer to its objective of becoming a very important player in what may be viewed as a Web3 service provider. In fact, when the potential payoff from its crypto interests, the craziness of some of the Group and Channel controversies becomes less important to the company. In fact, the hope for a Telegram initial public offering pay day is more important than refusing to cooperate with law enforcement. Telegram is working to appease France. Pavel Durov wants to get back to the 2024 and beyond opportunity with the Telegram crypto activities.

What is interesting to the FOGINT team are these considerations:

  1. Telegram’s bots and crypto linkages provide an interesting way to move funds and befuddle investigators
  2. Telegram has traction among crypto entities in Southeast Asia, and innovators operating without minimal regulatory oversight can use Telegram to extend their often illegal interests quickly and in a novel way
  3. Telegram’s bots or automated software embody a form of workflow automation which does not require getting involved with high profile, closely monitored organizations.

FOGINT wants to point out that Elon Musk is not involved in the X Empire play. However, Decrypt’s article suggests that some game players are complaining directly to him about the “earned” token policy. This is not a deep fake play. X Empire is an example of identity or entity surfing.

Investigators can make sense of some blockchain centric criminal activities. But the emergence of in game tokens, Telegram’s own STAR token, and their integration within the Telegram platform creates a one-stop shop for online crypto activities. Cyber investigators face another challenge: The non-US, largely unregulated Telegram operating as a virtual company with an address in Dubai. France took a bold step in detaining Pavel Durov. How will he adapt? It is unlikely he will be able to resist the lure of a big payoff from the innovations embodied in the Telegram platform.

Stephen E Arnold, October 29, 2024

Surprise: Those Who Have Money Keep It and Work to Get More

October 29, 2024

dino orangeWritten by a humanoid dinobaby. No AI except the illustration.

The Economist (a newspaper, not a magazine) published “Have McKinsey and Its Consulting Rivals Got Too Big?” Big is where the money is. Small consultants can survive but a tight market, outfits like Gerson Lehrman, and AI outputters of baloney like ChatGPT mean trouble in service land.

image

A next generation blue chip consultant produces confidential and secret reports quickly and at a fraction of the cost of a blue chip firm’s team of highly motivated but mostly inexperienced college graduates. Thanks, OpenAI, close enough.

The write up says:

Clients grappling with inflation and economic uncertainty have cut back on splashy consulting projects. A dearth of mergers and acquisitions has led to a slump in demand for support with due diligence and company integrations.

Yikes. What outfits will employ MBAs expecting $180,000 per year to apply PowerPoint and Excel skills to organizations eager for charts, dot points, and the certainty only 24 year olds have? Apparently fewer than before Covid.

How does the Economist know that consulting outfits face headwinds? Here’s an example:

Bain and Deloitte have paid some graduates to delay their start dates. Newbie consultants at a number of firms complain that there is too little work to go around, stunting their career prospects. Lay-offs, typically rare in consulting, have become widespread.

Consulting firms have chased projects in China but that money machine is sputtering. The MBA crowd has found the Middle East a source of big money jobs. But the Economist points out:

In February the bosses of BCG, McKinsey and Teneo, a smaller consultancy, along with Michael Klein, a dealmaker, were hauled before a congressional committee in Washington after failing to hand over details of their work for Saudi Arabia’s Public Investment Fund.

The firm’s response was, “Staff clould be imprisoned…” (Too bad the opioid crisis folks’ admissions did not result in such harsh consequences.)

Outfits like Deloitte are now into cyber security with acquisitions like Terbium Labs. Others are in the “reskilling” game, teaching their consultants about AI. The idea is that those pollinated type A’s will teach the firms’ clients just what they need to know about smart software. Some MBAs have history majors and an MBA in social media. I wonder how that will work out.

The write up concludes:

The quicker corporate clients become comfortable with chatbots, the faster they may simply go directly to their makers in Silicon Valley. If that happens, the great eight’s short-term gains from AI could lead them towards irrelevance.

Wow, irrelevance. I disagree. I think that school relationships and the networks formed by young people in graduate school will produce service work. A young MBA who mother or father is wired in will be valuable to the blue chip outfits in the future.

My take on the next 24 months is:

  1. Clients will hire employees who use smart software and can output reports with the help of whatever AI tools get hyped on LinkedIn.
  2. The blue chip outfits will get smaller and go back to their carpeted havens and cook up some crises or trends that other companies with money absolutely have to know about.
  3. Consulting firms will do the start up play. The failure rate will be interesting to calculate. Consultants are not entrepreneurs, but with connections the advice givers can tap their contacts for some tailwind.

I have worked at a blue chip outfit. I have done some special projects for outfits trying to become blue chip outfits. My dinobaby point of view boils down to seeing the Great Eight becoming the Surviving Six and then the end game, the Tormenting Two.

What picks up the slack? Smart software. Today’s systems generate the same type of normalized pablum many consulting firms provide. Note to MBAs: There will be jobs available for individuals who know how to perform Search GEO (generated engine optimization).

Stephen E Arnold, October 29, 2024

That AI Technology Is Great for Some Teens

October 29, 2024

The New York Times ran and seemed to sensationalized a story about a young person who formed an emotional relationship with AI from Character.ai. I personally like the Independent’s story “The Disturbing Messages Shared between AI Chatbot and Teen Who Took His Own Life,” which was redisplayed on the estimable MSN.com. If the link is dead, please, don’t write Beyond Search. Contact those ever responsible folks at Microsoft. The British “real” news outfit said:

Sewell [the teen] had started using Character.AI in April 2023, shortly after he turned 14. In the months that followed, the teen became “noticeably withdrawn,” withdrew from school and extracurriculars, and started spending more and more time online. His time on Character.AI grew to a “harmful dependency,” the suit states.

Let’s shift gears. The larger issues is that social media has changed the way humans interact with each other and smart software. The British are concerned. For instance, the BBC delves into how social media has changed human interaction: “How Have Social Media Algorithms Changed The Way We Interact?”

Social media algorithms are fifteen years old. Facebook unleashed the first in 2009 and the world changed. The biggest problem associated with social media algorithms are the addiction and excess. Teenagers and kids are the populations most affected by social media and adults want to curb their screen time. Global governments are steeping up to enforce rules on social media.

The US could ban TikTok if the Chinese parent company doesn’t sell it. The UK implemented a new online safety act for content moderation, while the EU outlined new rules for tech companies. The rules will fine them 6% of turnover and suspend them if they don’t prevent election interference. Meanwhile Brazil banned X for a moment until the company agreed to have a legal representative in the country and blocked accounts that questioned the legitimacy of the country’s last election.

While the regulation laws pose logical arguments, they also limit free speech. Regulating the Internet could tip the scale from anarchy to authoritarianism:

“Adam Candeub is a law professor and a former advisor to President Trump, who describes himself as a free speech absolutist. Social media is ‘polarizing, it’s fractious, it’s rude, it’s not elevating – I think it’s a terrible way to have public discourse”, he tells the BBC. “But the alternative, which I think a lot of governments are pushing for, is to make it an instrument of social and political control and I find that horrible.’ Professor Candeub believes that, unless ‘there is a clear and present danger’ posed by the content, ‘the best approach is for a marketplace of ideas and openness towards different points of view.’”

When Musk purchased X, he compared it to a “digital town square.” Social media, however, isn’t like a town square because the algorithms rank and deliver content based what eyeballs want to see. There isn’t fair and free competition of ideas. The smart algorithms shape free speech based on what users want to see and what will make money.

So where are we? Headed to the grave yard?

Whitney Grace, October 29, 2024

Apple: Challenges Little and Bigly

October 28, 2024

dino orangeAnother post from a dinobaby. No smart software required except for the illustration.

At lunch yesterday (October 23, 2024), one of the people in the group had a text message with a long string of data. That person wanted to move the data from the text message into an email. The idea was copy a bit of ascii, put it in an email, and email the data to his office email account. Simple? He fiddled but could not get the iPhone to do the job. He showed me the sequence and when he went through the highlighting, the curly arrow, and the tap to copy, he was following the procedure. When he switched to email and pressed the text was not available. A couple of people tried to make this sequence of tapping and long pressing work. Someone handed the phone to me. I fooled around with it, asked the person to restart the phone, and went through the process. It took two tries but I got the snip of ASCII to appear in the email message. Yep, that’s the Apple iPhone. Everyone loves the way it works, except when it does not. The frustration the iPhone owner demonstrated illustrates the “good enough” approach to many functions in Apple’s and other firms’ software.

image

Will the normal course of events swamp this big time executive? Thanks, You.com. You were not creative, but you were good enough.

Why mention this?

Apple is a curious company. The firm has been a darling of cored fans, investors, and the MBA crowd. I have noted two actions related to Apple which suggest that the company may have a sleek exterior but the interior is different. Let’s look at these two recent developments.

The first item concerns what appear to be untoward behavior by Apple and those really good folks at Goldman Sachs. The Apple credit card received a statement showing that $89 million was due. The issue appears to be fumbling the ball with customers. For a well managed company, how does this happen? My view is that getting cute was not appreciated by some government authorities. A tiny mistake? Yes. The fine is miniscule compared to the revenue represented by the outstanding enterprises paying the fine. With small fines, have the Apple and Goldman Sachs professionals learned a lesson. Yes, get out of the credit card game. Other than that, I surmise that neither of the companies will veer from their game plans.

The second item is, from my point of view, a bit more interesting than credit cuteness. Apple, if the news report in the Washington Times, is close to the truth, is getting very comfortable with China. The basic idea is that Apple wants to invest in China. Is China the best friend forever of the US? I thought some American outfits were somewhat cautious with regard to their support of that nation state. Well, that does not appear to apply to China.

With the weird software, the credit card judgment, and the China love fest, we have three examples of a company operating in what I would describe as a fog of pragmatism. The copy paste issue makes clear that simplicity and attention to a common task on a widely used device is not important. The message for the iPhone is, “Figure out our way. Don’t even think about a meaningful, user centric change. Just upgrade and get the vapor of smart software.”

The message from the credit card judgment is, “Hey, we will do what we want. If there is a problem, send us a bill. We will continue to do what we want.” That shows me that Apple buys into the behavior pattern which makes Silicon Valley behavior the gold standard in management excellence.

My interpretation of the China-Apple BFF activity is that the policy of the US government is of little interest. Apple, like other large technology outfits, is effectively operating as a nation state. The company will do what it wants and let lawyer and PR people make the activity palatable.

I find it amusing that Apple appears to be reducing orders for its next big iPhone release. The market may be reaching a saturation point or the economic conditions in certain markets make lower cost devices more appealing. My own view is that the AI vapor spewed by Apple and other US companies is dissipating. Another utility function which does not work in a reliable way may not be enough.

Why not make copy paste more usable or is that a challenge beneath your vast aspirations?

Stephen E Arnold, October 28, 2024

Fake Defined? Next Up Trust, Ethics, and Truth

October 28, 2024

dino orange_thumbAnother post from a dinobaby. No smart software required except for the illustration.

This is a snappy headline: “You Can Now Get Fined $51,744 for Writing a Fake Review Online.” The write up states:

This mandate includes AI-generated reviews (which have recently invaded Amazon) and also encompasses dishonest celebrity endorsements as well as testimonials posted by a company’s employees, relatives, or friends, unless they include an explicit disclaimer. The rule also prohibits brands from offering any sort of incentive to prompt such an action. Suppressing negative reviews is no longer allowed, nor is promoting reviews that a company knows or should know are fake.

So, what does “fake” mean? The word appears more than 160 times in the US government document.

My hunch is that the intrepid US Federal government does not want companies to hype their products with “fake” reviews. But I don’t see a definition of “fake.” On page 10 of the government document “Use of Consumer Reviews”, I noted:

“…the deceptive or unfair commercial acts or practices involving reviews or other endorsement.”

That’s a definition of sort. Other words getting at what I would call a definition are:

  • buying reviews (these can be non fake or fake it seems)
  • deceptive
  • false
  • manipulated
  • misleading
  • unfair

On page 23 of the government document, A. 465. – Definitions appears. Alas, the word “fake” is not defined.

The document is 163 pages long and strikes me as a summary of standard public relations, marketing, content marketing, and social media practices. Toss in smart software and Telegram-type BotFather capability and one has described the information environment which buzzes, zaps, and swirls 24×7 around anyone with access to any type of electronic communication / receiving device.

image

Look what You.com generated. A high school instructor teaching a debate class about a foundational principle.

On page 119, the authors of the government document arrive at a key question, apparently raised by some of the individuals sufficiently informed to ask “killer” questions; for example:

Several commenters raised concerns about the meaning of the term “fake” in the context of indicators of social media influence. A trade association asked, “Does ‘fake’ only mean that the likes and followers were created by bots or through fake accounts? If a social media influencer were to recommend that their followers also follow another business’ social media account, would that also be ‘procuring’ of ‘fake’ indicators of social media influence? . . . If the FTC means to capture a specific category of ‘likes,’ ‘follows,’ or other metrics that do not reflect any real opinions, findings, or experiences with the marketer or its products or services, it should make that intention more clear.”

Alas, no definition is provided. “Fake” exists in a cloud of unknowing.

What if the US government prosecutors find themselves in the position of a luminary who allegedly said: “Porn. I know it when I see it.” That posture might be more acceptable than trying to explain that an artificial intelligence content generator produced a generic negative review of an Italian restaurant. A competitor uses the output via a messaging service like Telegram Messenger and creates a script to plug in the name, location, and date for 1,000 Italian restaurants. The individual then lets the script rip. When investigators look into this defamation of Italian restaurants, the trail leads back to a virtual assert service provider crime as a service operation in Lao PDR. The owner of that enterprise resides in Cambodia and has multiple cyber operations supporting the industrialized crime as a service operation. Okay, then what?

In this example, “fake” becomes secondary to a problem as large or larger than bogus reviews on US social media sites.

What’s being done when actual criminal enterprises are involved in “fake” related work. According the the United Nations, in certain nation states, law enforcement is hampered and in some cases prevented from pursuing a bad actor.

Several observations:

  1. As most high school debaters learn on Day One of class: Define your terms. Present these in plain English, not a series of anecdotes and opinions.
  2. Keep the focus sharp. If reviews designed to damage something are the problem, focus on that. Avoid the hand waving.
  3. The issue exists due to a US government policy of looking the other way with regard to the large social media and online services companies. Why not become a bit more proactive? Decades of non-regulation cannot be buried under 160 page plus documents with footnotes.

Net net: “Fake,” like other glittering generalities cannot be defined. That’s why we have some interesting challenges in today’s world. Fuzzy is good enough.

PS. If you have money, the $50,000 fine won’t make any difference. Jail time will.

Stephen E Arnold, October 28, 2024

AI Has An Invisible Language. Bad Actors Will Learn It

October 28, 2024

Do you remember those Magic Eyes back from the 1990s? You needed to cross your eyes a certain way to see the pony or the dolphin. The Magic Eyes were a phenomenon of early computer graphics and it was like an exclusive club with a secret language. There’s a new secret language on the Internet generated by AI and it could potentially sneak in malicious acts says Ars Technica: “Invisible Text That AI Chatbots Understand And Humans Can’t? Yep, It’s A Thing.”

The secret text could potentially include harmful instructions into AI chatbots and other code. The purpose would be to steal confidential information and conduct other scams all without a user’s knowledge:

“The invisible characters, the result of a quirk in the Unicode text encoding standard, create an ideal covert channel that can make it easier for attackers to conceal malicious payloads fed into an LLM. The hidden text can similarly obfuscate the exfiltration of passwords, financial information, or other secrets out of the same AI-powered bots. Because the hidden text can be combined with normal text, users can unwittingly paste it into prompts. The secret content can also be appended to visible text in chatbot output.”

The steganographic framework is built into a text encoding network and LLMs and read it. Researcher Johann Rehberger ran two proof-of-concept attacks with the hidden language to discover potential risks. He ran the tests on Microsoft 365 Copilot to find sensitive information. It worked:

“When found, the attacks induced Copilot to express the secrets in invisible characters and append them to a URL, along with instructions for the user to visit the link. Because the confidential information isn’t visible, the link appeared benign, so many users would see little reason not to click on it as instructed by Copilot. And with that, the invisible string of non-renderable characters covertly conveyed the secret messages inside to Rehberger’s server.”

What is nefarious is that the links and other content generated by the steganographic code is literally invisible. Rehberger and his team used a tool to decode the attack. Regular users are won’t detect the attacks. As we rely more on AI chatbots, it will be easier to infiltrate a person’s system.

Thankfully the Big Tech companies are aware of the problem, but not before it will probably devastate some people and companies.

Whitney Grace, October 28, 2024

Boring Technology Ruins Innovation: Go, Chaos!

October 25, 2024

Jonathan E. Magen is an experienced computer scientist and he writes a blog called Yonkeltron. He recently posted, “Boring Tech Is Stifling Improvement.” After a brief anecdote about highway repair that wasn’t hindered because of bureaucracy and the repair crew a new material to speed up the job, Magen got to thinking about the current state of tech.

He thinks it is boring.

Magen supports tech teams being allocated budgets to adopt old technology. The montage of “don’t fix what’s not broken” comes to mind, but sometimes newer is definitely better. He relates that it is problematic if tech teams have too much technology or solution, but there’s also the problem if the one-size-fits all solution no longer works. It’s like having a document that can only be opened by Microsoft Office and you don’t have the software. It’s called a monoculture with a single point of failure. Tech nerds and philosophers have names for everything!

Magen bemoans that a boring tech environment is a buzzkill. He then shares this “happy thoughts”:

“A second negative effect is the chilling of innovation. Creating a better way of doing things definitionally requires deviation from existing practices. If that is too heavily disincentivized by “engineering standards”, then people don’t feel they have enough freedom to color outside the lines here and there. Therefore, it chills innovation in company environments where good ideas could, conceivably, come from anywhere. Put differently, use caution so as not to silence your pioneers.

Another negative effect is the potential to cause stagnation. In this case, devotion to boring tech leads to overlooking better ways of doing things. Trading actual improvement and progress for “the devil you know” seems a poor deal. One of the main arguments in favor of boring tech is operability in the polycontext composed of predictability and repairability. Despite the emergence of Site Reliability Engineering (SRE), I think that this highlights a troubling industry trope where we continually underemphasize, and underinvest in, production operations.”

Necessity is the mother of invention, but boring is the killer of innovation. Bring on chaos.

Whitney Grace, October 25, 2024

Mobiles in Schools: No and a Partial Ban Is No Ban

October 25, 2024

dino orangeNo smart software but we may use image generators to add some modern spice to the dinobaby’s output.

Common sense appears to be in short supply in about one-third of the US population. I am assuming that the data from Pew Research’s “Most Americans Back Cellphone Bans during Class, but Fewer Support All-Day Restrictions” are reasonably accurate. The write up reports:

Less than half of adults under 30 (45%) say they support banning students from using cellphones during class. This share rises to 67% among those ages 30 to 49 and 80% among those ages 50 and older.

I know going to school, paying attention, and (hopefully) learning how to read, write, and do arithmetic is irrelevant in the Smart Software Era. Why have a person who can select groceries and keep a rough running tally of how much money is represented by the items in the cart? Why have a young person working at a retail outlet able to make change without puzzling over a point-of-sale screen.

image

My dream: A class of students handing over their mobile phones to the dinobaby instructor. He also has an extendible baton. This is the ideal device for rapping a student on the head. Nuns used rulers. Too old technology for today’s easily distracted youthful geniuses. Thanks, Mr. AI-Man, good enough.

The write up adds:

Our survey finds the public is far less supportive of a full-day ban on cellphone use than a classroom ban. About one-third (36%) support banning middle and high school students from using cellphones during the entire school day, including at lunch as well as during and between classes. By comparison, 53% oppose this more restrictive approach.

If I understand this information, out of 100 parents of school age children, only 64 percent of those allegedly responsible adults want their progeny to be able to use their mobile devices during the school day. I suppose if I were a parent terrified that an outside was going to enter a school and cause a disturbance, I would like to get a call or a text that says, “Daddy, I am scared.” Exactly what can that parent do about that message? Drive to the school, possibly breaking speed limits, and demand to talk to the administrative assistant. What if there were a serious issue? Would those swarming parents obstruct the officers and possibly contribute to the confusion and chaos swirling around such an event? On the other hand, maybe the parent is a trained special operations officer, capable of showing credentials and participating in the response to the intruder?

As a dinobaby, here’s my view:

  1. School is where students go to learn.
  2. Like certain government facilities, mobile devices are surrendered prior to admission. The devices are returned when the student exits the premises.
  3. The policy is posted and communicated to parents and students. The message is, “This is the rule. Period.”
  4. In the event of a problem, a school official or law enforcement officer will determine when and how to retrieve the secured devices.

I have a larger concern. School is for the purpose of education. My dinobaby common sense dictates that a student’s attention should be available to the instructors. Other students, general fooling around, and the craziness of controlling young people are difficult enough. Ensuring that a student can lose his or her attention in a mobile device is out of step with my opinion.

Falling test scores, the desire of some parents to get their children into high-demand schools, and use of tutors tells me that some parents have their ducks in a row. The idea that one can sort of have mobile devices in schools is the opposite of a tidy row of ducks. Imagine the problems that will result if a mobile device with software specifically engineered to capture and retain attention were not allowed in a school. The horror! Jim or Jane might actually learn to read and do sums. But, hey, TikTok-type services and selfies are just more fun.

Check out Neil Postman’s Amusing Ourselves to Death: Public Discourse in the Age of Show Business. Is that required reading in some high school classes? Probably not.

Stephen E Arnold, October 25, 2024

The DoJ Wants to Break Up Google and Maybe Destroy the Future of AI

October 25, 2024

Contrary to popular belief, the United States is an economically frisky operation. The country runs on a fluid system that mixes aspects regulation, the Wild West, monopolies, oligopolies, and stuff operating off the reservation. The government steps in when something needs regulation. The ageing Sherman Anti-Trust Act forbids monopolies. Yahoo Finance says that “Google Is About To Learn How DOJ Wants To Remake Its Empire.”

There have been rumblings about breaking up Big Tech companies like Google for a while. District of Columbia Judge Amit Mehta ruled that Google abused its power and that its search and ad businesses violated antitrust law. Nothing is clear about what will happen to Google, but a penalty may emerge in 2025. Judge Mehta could potentially end Google’s business agreements that make it the default search engine of devices and force search data to be available to competition. Google’s products: AdWords, Chrome browser, and the Android OS could be broken up and no longer send users to the search engine.

Judge Mehta must consider how breaking up Google will affect third parties, especially those who rely on Google and associated products to (basically) run society. Mehta has a lot to think about: Judge Mehta, however, may have to consider how remedies to restore competition in the traditional search engine market may impact competition in the emerging market for AI-assisted search.

One concern, legal experts said, is that Google’s search dominance could unfairly entrench its position in the market for next-generation search.

At the same time, these fresh threats may work to Google’s advantage in the remedies trial, allowing it to argue that its overall search dominance is already under threat.”

Nothing is going to happen quickly. The 2024 presidential election results will influence Mehta’s decision. Politicians will definitely have their say and the US government needs to evaluate how they use Google.

What’s Google’s answer to these charges? The company is suggesting that fiddling with Google could end the future of AI. Promise or threat?

Whitney Grace, October 25, 2024

Meta, Politics, and Money

October 24, 2024

Meta and its flagship product, Facebook, makes money from advertising. Targeted advertising using Meta’s personalization algorithm is profitable and political views seem to turn the money spigot. Remember the January 6 Riots or how Russia allegedly influenced the 2016 presidential election? Some of the reasons those happened was due to targeted advertising through social media like Facebook.

Gizmodo reviews how much Meta generates from political advertising in: “How Meta Brings In Millions Off Political Violence.” The Markup and CalMatters tracked how much money Meta made from Trump’s July assassination attempt via merchandise advertising. The total runs between $593,000 -$813,000. The number may understate the actual money:

“If you count all of the political ads mentioning Israel since the attack through the last week of September, organizations and individuals paid Meta between $14.8 and $22.1 million dollars for ads seen between 1.5 billion and 1.7 billion times on Meta’s platforms. Meta made much less for ads mentioning Israel during the same period the year before: between $2.4 and $4 million dollars for ads that were seen between 373 million and 445 million times.  At the high end of Meta’s estimates, this was a 450 percent increase in Israel-related ad dollars for the company. (In our analysis, we converted foreign currency purchases to current U.S. dollars.)”

The organizations that funded those ads were supporters of Palestine or Israel. Meta doesn’t care who pays for ads. Tracy Clayton is a Meta spokesperson and she said that ads go through a review process to determine if they adhere to community standards. She also that advertisers don’t run their ads during times of strife, because they don’t want their goods and services associates with violence.

That’s not what the evidence shows. The Markup and CalMatters researched the ads’ subject matter after the July assassination attempt. While they didn’t violate Meta’s guidelines, they did relate to the event. There were ads for gun holsters and merchandise about the shooting. It was a business opportunity and people ran with it with Meta holding the finish line ribbon.

Meta really has an interesting ethical framework.

Whitney Grace, October 24, 2024

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta