TikTok: True Colors?

October 22, 2019

Since it emerged from China in 2017, the video sharing app TikTok has become very popular. In fact, it became the most downloaded app in October of the following year, after merging with Musical.ly. That deal opened up the U.S. market, in particular, to TikTok. Americans have since been having a blast with the short-form video app, whose stated mission is to “inspire creativity and joy.” The Verge, however, reminds us where this software came from—and how its owners behave—in the article, “It Turns Out There Really Is an American Social Network Censoring Political Speech.”

Reporter Casey Newton grants that US-based social networks have their limits, removing hate speech, violence, and sexual content from their platforms. However, that is a far cry from the types of censorship that are common in China. Newton points to a piece by Alex Hern in The Guardian that details how TikTok has directed its moderators to censor content about Tiananmen Square, Tibetan independence, and the Falun Gong religious group. It is worth mentioning that TikTok’s parent, ByteDance, maintains a separate version of the app (Douyin) for use within China’s borders. The suppression documented in the Guardian story, then, is specifically for the rest of us. Newton writes:

“As Hern notes, suspicions about TikTok’s censorship are on the rise. Earlier this month, as protests raged, the Washington Post reported that a search for #hongkong turned up ‘playful selfies, food photos and singalongs, with barely a hint of unrest in sight.’ In August, an Australian think tank called for regulators to look into the app amid evidence it was quashing videos about Hong Kong protests. On the one hand, it’s no surprise that TikTok is censoring political speech. Censorship is a mandate for any Chinese internet company, and ByteDance has had multiple run-ins with the Communist party already. In one case, Chinese regulators ordered its news app Toutiao to shut down for 24 hours after discovering unspecified ‘inappropriate content.’ In another case, they forced ByteDance to shutter a social app called Neihan Duanzi, which let people share jokes and videos. In the aftermath, the company’s founder apologized profusely — and pledged to hire 4,000 new censors, bringing the total to 10,000.”

For its part, TikTok insists the Guardian-revealed guidelines have been replaced with more “localized approaches,” and that they now consult outside industry leaders in creating new policies. Newton shares a link to TikTok’s publicly posted community guidelines, but notes it contains no mention of political posts. I wonder why that could be.

Cynthia Murrell, October 22, 2019

Understanding Social Engineering

September 6, 2019

“Quiet desperation”? Nope, just surfing on psychological predispositions. Social engineering leads to a number of fascinating security lapses. For a useful analysis of how pushing buttons can trigger some interesting responses, navigate to “Do You Love Me? Psychological Characteristics of Romance Scam Victims.” The write up provides some useful insights. We noted this statement from the article:

a susceptibility to persuasion scale has been developed with the intention to predict likelihood of becoming scammed. This scale includes the following items: premeditation, consistency, sensation seeking, self-control, social influence, similarity, risk preferences, attitudes toward advertising, need for cognition, and uniqueness. The current work, therefore, suggests some merit in considering personal dispositions might predict likelihood of becoming scammed.

Cyberpsychology at work.

Stephen E Arnold, September 6, 2019

Citizen Action within Facebook

September 5, 2019

Pedophiles flock to wherever children gather, and these days that includes virtual hangouts such as Facebook, Instagram, YouTube, Twitter, and more. One thing criminals can agree on is that they despise pedophiles; in the big house, they take justice into their own hands. Outside of prison, Facebook vigilantes take down pedophiles. Quartz reports on how in the article, “There’s A Global Movement Of Facebook Vigilantes Who Hunt Pedophiles.”

The Facebook vigilantes are regular people with families and jobs who use their spare time to hunt pedophiles grooming children for sexual exploitation. Pedophile hunting became popular in the mid-2000s, when Chris Hansen hosted the show To Catch a Predator, and it is practiced not only in the United States but in countries around the world. A big part of pedophile vigilantism is the public shaming:

“‘Pedophile hunting’ or ‘creep catching’ via Facebook is a contemporary version of a phenomenon as old as time: the humiliating act of public punishment. Criminologists even view it as a new expression of the town-square execution. But it’s also clearly a product of its era, a messy amalgam of influences such as reality TV and tabloid culture, all amplified by the internet.”

One might not think there is a problem with embarrassing pedophiles via live stream, but there are unintended consequences. Some of the “victims” commit suicide, the vigilantes’ evidence might not hold up in court, and the hunters might not have all the facts and context:

“They have little regard for due process or expectations of privacy. The stings, live-streamed to an engaged audience, become a spectacle, a form of entertainment—a twisted consequence of Facebook’s mission to foster online communities.”

Facebook’s community-driven algorithms make it easy to follow, support, and join these vigilante groups. The hunters’ motives are often cathartic, and they are keen on doling out street justice, but they may operate outside the law.

Whitney Grace, September 5, 2019

A Partial Look: Data Discovery Service for Anyone

July 18, 2019

F-Secure has made available a Data Discovery Portal. The idea is that a curious person (not anyone on the DarkCyber team, but one of our contractors will be beavering away today) can “find out what information you have given to the tech giants over the years.” Pick a service — for example, Apple — and this is what you see:


A curious person plugs in the Apple ID information and F-Secure obtains and displays the “data.” If one works through the services for which F-Secure offers this data discovery service, the curious user will have provided some interesting data to F-Secure.

Sound like a good idea? You can try it yourself at this F-Secure link.

F-Secure operates from Finland and was founded in 1988.

Do you trust the Finnish anti-virus wizards with your user names and passwords to your social media accounts?

Are the data displayed by F-Secure comprehensive? Filtered? Accurate?

Stephen E Arnold, July 18, 2019

Google Takes Another Run at Social Media

July 12, 2019

The Google wants to be a winner in social media. “Google Is Testing a New Social Network for Offline Meetups” describes the Shoelace social network. Shoelaces keep footwear together. The metaphor is … interesting.

The write up states:

The aim behind coming up with this innovative social networking app is to let people find like-minded people around with whom they can meet and share things between each other. The interests could be related to social activities, hobbies, events etc.

The idea of finding people seems innocuous enough. But what if one or more bad actors use the new Google social network in unanticipated ways?

The write up reports:

It will focus more on providing a platform to meet and expand businesses and building communities with real people.

The Google social play has “loops.” What’s a loop? DarkCyber learned:

This is a new name for Events. You can make use of this feature to create an event where people can see your listings and try to join the event as per their interests.

What an innovative idea? No other service — including Meetup.com, Facebook, and similar plays — has this capability.

Like YouTube’s “new” monetization methods which seem similar to Twitch.tv’s, Google is innovating again.

Mobile. Find people. Meet up.

Maybe Google’s rich, bold, proud experiences with Orkut, Google Buzz, and Google+ were useful? Effort does spark true innovation … maybe.

Stephen E Arnold, July 12, 2019

Twitter Tools

June 10, 2019

One of our readers spotted “5 Twitter Tools to Discover the Best and Funniest Tweets.” The article is a round-up of software utilities that provide a selected stream of information from Twitter “content creators.” Keep in mind that threads have been rendered almost useless by Twitter’s editorial procedures. Nevertheless, if you don’t have access to a system which provides the “firehose” content or a repository of indexed and parsed Twitter content, you may find one of these useful:

  • Funny Tweeter
  • Ketchup (an easy way to provide Google with information about Tweets)
  • Really Good Questions
  • Thread Reader (what about those disappeared tweets and the unavailable tweets?)
  • Twitter’s digest
  • Twubbler (not exactly a Palantir Gotham timeline, however)

Consult the source article for explanations of each and the links.

Stephen E Arnold, June 10, 2019

Reflecting about New Zealand

June 5, 2019

Following the recent attacks in two New Zealand mosques, during which a suspected terrorist successfully live-streamed horrific video of their onslaught for over a quarter-hour, many are asking why the AI tasked with keeping such content off social media failed us. As it turns out, context is key. CNN explains “Why AI Is Still Terrible at Spotting Violence Online.” Reporter Rachel Metz writes:

“A big reason is that whether it’s hateful written posts, pornography, or violent images or videos, artificial intelligence still isn’t great at spotting objectionable content online. That’s largely because, while humans are great at figuring out the context surrounding a status update or YouTube video, context is a tricky thing for AI to grasp.”

Sites currently try to account for that shortfall with a combination of AI and human moderators, but they have trouble keeping up with the enormous influx of postings. For example, we’re told YouTube users alone upload more than 400 hours of video per minute. Without enough people to provide context, AI is simply at a loss. Metz notes:

“AI is not good at understanding things such as who’s writing or uploading an image, or what might be important in the surrounding social or cultural environment. … Comments may superficially sound very violent but actually be satire in protest of violence. Or they may sound benign but be identifiable as dangerous to someone with knowledge about recent news or the local culture in which they were created.”

We also noted:

“… Even if violence appears to be shown in a video, it isn’t always so straightforward that a human — let alone a trained machine — can spot it or decide what best to do with it. A weapon might not be visible in a video or photo, or what appears to be violence could actually be a simulation.”

On top of that, factors that may not be apparent to human viewers, like lighting, background images, or even frames per seconds, complicate matters for AI. It appears it will be some time before we can rely on algorithms to shield social media from abhorrent content. Can platforms come up with some effective alternative in the meantime? The pressure is on.
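The scale problem is easy to make concrete with a back-of-envelope sketch using the article’s 400-hours-per-minute figure. The per-moderator review speed and shift length below are our illustrative assumptions, not numbers from the CNN piece:

```python
# Rough estimate of the human moderation workforce implied by YouTube's
# upload volume. Only the 400 hours-per-minute figure comes from the
# article; the review speed and shift length are assumptions.

UPLOAD_HOURS_PER_MINUTE = 400          # stated in the CNN piece
MINUTES_PER_DAY = 24 * 60

upload_hours_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY

REVIEW_SPEED = 1.0   # assumed: one hour of video reviewed per moderator-hour
SHIFT_HOURS = 8      # assumed workday length

moderators_needed = upload_hours_per_day / (REVIEW_SPEED * SHIFT_HOURS)
print(f"{upload_hours_per_day:,} hours of video uploaded per day")
print(f"~{moderators_needed:,.0f} moderators needed just to watch it all")
```

Even under these generous assumptions, the headcount runs well into the tens of thousands, which is why platforms lean on AI despite its blind spots.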

Cynthia Murrell, June 5, 2019

US Government Social Media Archive

May 28, 2019

Library of Congress, hello, LOC, are you there? What about other US government agencies? Do you have these data?

Maybe not?

I read “U.S. Navy Creating a 350 Billion Record Social Media Archive” and there is not one word about the Library of Congress. The US Navy wants to build a social media collection. Based on the sketchy information available, the content scope will include:

  • Messages from at least 200 million unique users, with no single country accounting for more than 30 percent of users
  • Time window: July 1, 2014, to December 31, 2016
  • At least 100 countries and 60 languages
  • Metadata (date, time, location, etc.)
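To make the scope concrete, one record in such an archive might look something like the sketch below. The class and field names are our illustration of the scope bullets, not a schema from the RFP:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shape of one archived social media record, based on the
# scope bullets above. Field names are illustrative, not from the RFP.
@dataclass
class SocialMediaRecord:
    user_id: str             # one of roughly 200 million unique users
    text: str                # the message body
    language: str            # ISO 639-1 code, e.g. "en"
    timestamp: str           # within July 1, 2014 - December 31, 2016
    location: Optional[str]  # geolocation metadata, when present

record = SocialMediaRecord(
    user_id="u123",
    text="hello",
    language="en",
    timestamp="2015-06-01T12:00:00Z",
    location=None,
)
print(record.language)
```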

The RFP is located on FedBizOpps.

Stephen E Arnold, May 28, 2019

Department of Defense: Learning from Social Media Posts

May 25, 2019

A solicitation request dated May 13, 2019, “A–Global Social Media Archive, 350 billion digital data records,” is an interesting public message. Analysis of social media allegedly has been a task within other projects handled by firms specializing in content analytics. These data mining efforts, based on DarkCyber’s understanding of open source information from specialist vendors, are nothing new. The solicitation nevertheless offers some interesting insights which may warrant consideration.

First, the scope of the task is 350 billion digital records. It is not clear what constitutes a “digital record,” but the 350 billion figure represents roughly two or three months of Facebook posts. It is also not clear whether the content comes from one service like Twitter or is drawn from a range of messaging and content sources.
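That “two or three months of Facebook posts” comparison is easy to sanity-check. The sketch below assumes a posting rate on the order of four billion items per day; that rate is our assumption for illustration, not a figure from the solicitation:

```python
# Sanity check: how long would Facebook-scale posting take to
# accumulate 350 billion records? The daily rate is an assumption.
TOTAL_RECORDS = 350_000_000_000
ASSUMED_POSTS_PER_DAY = 4_000_000_000   # illustrative order of magnitude

days = TOTAL_RECORDS / ASSUMED_POSTS_PER_DAY
months = days / 30
print(f"~{days:.0f} days, or about {months:.1f} months")
```

At that assumed rate the total works out to roughly three months of traffic, consistent with the estimate above.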

Second, the content pool must include 60 languages. The most used languages on the public Internet are English, Chinese, and Spanish. The other 57 languages contribute a small volume of content, and this fact may create a challenge for the vendors responding to the solicitation. The document states:

Data includes messages from at least 200 million unique users in at least 100 countries, with no single country accounting for more than 30% of users.

Third, the text content and the metadata must be included in the content bundle.

The exclusion of photographs and videos is interesting; these are important content mechanisms. Are commercial enterprises without connections to nation states likely to be operating content aggregation systems at a scale that can comply? Worth watching to find out who lands this project.

Stephen E Arnold, May 25, 2019

A Grain of Salt for Zuckerberg Suggestions

April 12, 2019

Given the pressures Facebook has been under to better regulate harmful content on its platform, it is no surprise Mark Zuckerberg has weighed in with a blog post on the matter. However, writer Mark Wyciślik-Wilson scoffs at the Facebook founder’s ideas in the BetaNews write-up, “Mark Zuckerberg’s Calls for Internet Regulation Are Just an Attempt to Shift the Blame from Facebook.” The article outlines Zuckerberg’s “four ideas to regulate the internet,” noting that, coming from anyone else, they might be plausible suggestions: First, there’s the concept of privacy regulations like those in Europe’s GDPR. Zuckerberg also says he wants more control over hate speech, and to exert tighter standards over political advertising, especially near election time. Finally, he counsels data portability.

We’re reminded nothing is actually standing in the way of Facebook implementing these ideas on its own—and this is what makes Wyciślik-Wilson suspicious of Zuckerberg’s motives. He also notes a couple of tendencies he has observed in the Facebook CEO: to pass the buck when something goes wrong, and to spin any attempts to address users’ concerns as a PR positive. He writes:

Whilst admitting that ‘companies such as Facebook have immense responsibilities’ it seems the Facebook founder would rather have rules and guidelines handed down to him rather than having to do the hard work himself. This is understandable. It would help to absolve Facebook of blame and responsibility. If things go wrong when following regulations set out by the government or other agencies, it’s easy to point to the rulebook and say, ‘well, we were just doing as we were told’. At the moment it’s all too easy for Facebook to make a lot of noise about how it wants to improve things while simultaneously raping users’ privacy, and benefiting from the fake news, extremist content and everything else the social network claims not to want to be a platform for. But at the end of the day, a signed-up user is a signed-up user, and acts as a microscopic cog in the advertising-driven money-machine that is Facebook. Facebook has shown time and time again that it can do something about objectionable content and activity — be that political extremism, racism, election interference or whatever. But it doesn’t do anything until it faces insurmountable pressure to do so.

Wyciślik-Wilson urges Facebook to just go ahead and implement these suggestions already, not wait to be told what to do by outside forces. “Less talking, more doing,” he summarizes.

Cynthia Murrell, April 12, 2019
