Facebook Grapples with Moderation

August 1, 2017

Mashable’s Alex Hazlett seems quite vexed about the ways Facebook is mishandling the great responsibility that comes with its great power in “Facebook’s Been Making It Up All Along and We’re Left Holding the Bag.” Reporting on the Guardian’s recent leak of Facebook moderator documents, Hazlett writes:

It confirmed what a lot of people had long suspected: Facebook is making it up as they go along and we’re the collateral damage. The leaked moderator documents cover how to deal with depictions of things like self-harm and animal cruelty in exceedingly detailed ways. A first read through suggests that the company attempted to create a rule for every conceivable situation, and if they missed one, well they’d write that guideline when it came up. It suggests they think that this is just a question of perfecting the rules, when they’ve been off-base from the outset.

The article notes that communities historically craft and disseminate the rules, ethics, and principles that guide their discourse; in this case, the community is the billions of Facebook users across the globe, and those crucial factors are known only to the folks in control (except what was leaked, of course). Hazlett criticizes the company for its “generic platitudes” and lack of transparency around an issue that now helps shape the very culture of the entire world. He observes:

Sure, if Facebook had decided to take an actual stand, they’d have had detractors. But if they’d been transparent about why, their users would have gotten over it. If you have principles, and you stick to them, people will adjust. Instead, Facebook seems to change their policies based on the level of outrage that is generated. It contributes to a perception of them as craven and exploitative. This is why Facebook lurches from stupid controversy to stupid controversy, learning the hard way every. single. time.

These days, decisions by one giant social media company can affect millions of people, often in ways those affected don’t even perceive, much less understand. A strategy of lurching from one controversy to another does seem unwise.

Cynthia Murrell, August 1, 2017

Free Content Destroying Print Media

July 27, 2017

Today’s generation has no concept of having to wait for the day’s top stories until the newspaper is delivered. If they want to know something (or even if they don’t), they simply turn on their smartphone, tablet, or even watch! With news stories available 24/7 via automatic alerts, most people under thirty can’t possibly fathom paying for the news.

It almost wasn’t that way. According to Poynter,

In the 1990s, a cantankerous, bottom-line-obsessed and visionary Tribune Company executive named Charles Brumback pushed something that was called The New Century News Network. The top print news organizations, including The New York Times, The Washington Post and Times-Mirror would form a network in which they’d house their content online and charge for it. Members would get paid based on usage. They even started a newswire that was similar to what we know as Google News.

Unfortunately, the heads of print media couldn’t foresee how giving their content away to online giants such as Facebook, Yahoo, and Google would deflate their pockets.

Now, these same short-sighted network bigwigs want Congress to intervene on their behalf. As the article points out, “running to Congress seems belated and impotent.”

Catherine Lamsfuss, July 27, 2017

Western in Western Out

July 26, 2017

A thoughtful piece at Quartz looks past filter bubbles to other ways in which mostly Western developers are gradually imposing their cultural perspectives on the rest of the world—“Silicon Valley Has Designed Algorithms to Reflect Your Biases, Not Disrupt Them.” Search will not get you objective information, but rather the content your behavior warrants. Writer Ramesh Srinivasan introduces his argument:

Silicon Valley dominates the internet—and that prevents us from learning more deeply about other people, cultures, and places. To support richer understandings of one another across our differences, we need to redesign social media networks and search systems to better represent diverse cultural and political perspectives. The most prominent and globally used social media networks and search engines—Facebook and Google—are produced and shaped by engineers from corporations based in Europe and North America. As a result, technologies used by nearly 2 billion people worldwide reflect the design perspectives of the limited few from the West who have power over how these systems are developed.

It is worth reading the whole article for its examination of the issue and its suggestions for what to do about it. Algorithm transparency, for example, would at least let users know what principles guide a platform’s content selections. Taking input from user communities in other cultures is another idea. My favorite is a proposal to prioritize firsthand sources over Western interpretations, even ones with low traffic or that are not in English. As Srinivasan writes:

Just because this option may be the easiest for me to understand doesn’t mean that it should be the perspective I am offered.

That sums up the issue nicely.

Cynthia Murrell, July 26, 2017

Instagram Reins in Trolls

July 21, 2017

Photo-sharing app Instagram has successfully implemented DeepText, a program that weeds out nasty and spammy comments from people’s feeds.

Wired, in an article titled “Instagram Unleashes an AI System to Blast Away Nasty Comments,” says:

DeepText is based on recent advances in artificial intelligence, and a concept called word embeddings, which means it is designed to mimic the way language works in our brains.

DeepText was initially built by Facebook, Instagram’s parent company, to keep abusers, trolls, and spammers at bay. Buoyed by its success, Facebook soon implemented it on Instagram.

The development process was arduous: for months, a large number of employees and contractors taught the DeepText engine how to identify abusive comments by telling the algorithm which words can be abusive based on their context.

At the moment, the tools are being tested and rolled out to a limited number of users in the US and are available only in English. They will subsequently be rolled out to other markets and languages.
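Facebook has not published DeepText’s internals, but the word-embedding idea Wired mentions can be illustrated with a minimal sketch. Everything below is an illustrative assumption: the tiny two-dimensional “embeddings,” the handful of labeled seed comments, and the nearest-centroid decision rule all stand in for a model trained on millions of real comments.

    # Toy sketch of embedding-based comment filtering (illustrative only).
    import re
    import numpy as np

    # Hand-made 2-D word vectors; a real system learns these from huge corpora.
    EMBEDDINGS = {
        "great": np.array([0.9, 0.1]),
        "love":  np.array([0.8, 0.2]),
        "photo": np.array([0.5, 0.5]),
        "idiot": np.array([0.1, 0.9]),
        "trash": np.array([0.2, 0.8]),
    }

    def embed(comment):
        """Represent a comment as the average of its known word vectors."""
        words = re.findall(r"[a-z]+", comment.lower())
        vectors = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
        return np.mean(vectors, axis=0) if vectors else np.zeros(2)

    # Centroids built from a few labeled examples stand in for training.
    abusive_centroid = embed("idiot trash")
    benign_centroid = embed("great photo love")

    def is_abusive(comment):
        """Nearest-centroid rule: closer to the abusive examples means flag it."""
        v = embed(comment)
        return (np.linalg.norm(v - abusive_centroid)
                < np.linalg.norm(v - benign_centroid))

    print(is_abusive("What a great photo!"))        # False
    print(is_abusive("You idiot, this is trash."))  # True

The real engine presumably uses far higher-dimensional vectors and a trained classifier; the point of the sketch is only that decisions flow from learned representations of words rather than from a simple banned-word list.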

Vishal Ingole, July 21, 2017

Software That Detects Sarcasm on Social Media

July 20, 2017

The Technion-Israel Institute of Technology’s Faculty of Industrial Engineering and Management has developed Sarcasm SIGN, software that can detect sarcasm in social media content. People with learning difficulties will find this tool useful.

According to an article published by Digital Journal titled “Software Detects Sarcasm on Social Media”:

The primary aim is to interpret sarcastic statements made on social media, be they Facebook comments, tweets or some other form of digital communication.

As we move toward a more digitized world where the majority of our communications pass through digital channels, people with learning disabilities are at a disadvantage. As machine learning advances, so do natural language capabilities. Tools like these will be immensely helpful for people who are unable to understand the undertones of communication.

The same tool can also be utilized by brands to determine who is talking about them in a negative way. Now ain’t that wonderful, Facebook?

Vishal Ingole, July 20, 2017

Facebook Factoid: Deleting User Content

July 6, 2017

Who knows if this number is accurate. I found the assertion of a specific number of Facebook deletions interesting. Plus, someone took the time to wrap the number in some verbiage about filtering, aka censorship. The factoid appears in “Facebook Deletes 66,000 Posts a Week to Curb Hate Speech, Extremism.”

Here’s the passage with the “data”:

Facebook has said that over the past two months, it has removed roughly 66,000 posts on average per week that were identified as hate speech.

My thought is that the 3.2 million “content objects” figure is neither high nor low. The number is without context, other than my assumption that Facebook has two billion users per month. The method used to locate and scrub the data seems to be a mystical process powered by artificial intelligence and humans.
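For context, annualizing the weekly figure (assuming the pace reported for those two months held for a full year) gives:

    66,000 posts/week × 52 weeks ≈ 3.4 million posts/year

which is in the ballpark of the 3.2 million figure; presumably a similar back-of-the-envelope multiplication produced it.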

One thing is clear to me: figuring out what to delete seems to be a somewhat challenging task, both for the engineers writing the smart software and for the lucky humans who get paid to identify inappropriate content in the musings of billions of happy Facebookers.

What about those “failures”? Good question. What about that “context”? Another good question. Without context, what do we have with this magical 66,000? Not much, in my opinion. One can’t find information if it has been deleted. That’s another issue to consider.

Stephen E Arnold, July 6, 2017

Facebook to Tackle Terrorism with Increased Monitoring

July 5, 2017

Due to recent PR nightmares involving terrorist organizations, Facebook is revamping its policies and policing of terrorism content within the social media network. A recent article in Digital Trends, “Facebook Fights Against Terrorist Content on Its Site Using A.I., Human Expertise,” explains how Zuckerberg and his team of anti-terrorism experts are changing the game in monitoring Facebook for terrorist activity.

As explained in the article,

To prevent AI from flagging a photo related to terrorism in a post like a news story, human judgment is still required. In order to ensure constant monitoring, the community operations team works 24 hours a day and its members are also skilled in dozens of languages.

Recently, Facebook was in the news for putting its human monitors at risk by accidentally revealing personal information to the terrorists they were investigating on the site. As Facebook increases the number of monitors, it seems the risk to those monitors also increases.
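The division of labor described above, automated flagging plus human judgment for ambiguous cases such as news photos, can be sketched as a simple triage policy. The scores, thresholds, and queue below are illustrative assumptions, not Facebook’s actual system.

    # Illustrative triage: a classifier's confidence routes each post.
    def triage(score):
        """Map a model's confidence that content is terrorist material to an action.
        The thresholds are made up; a real system would tune them carefully."""
        if score >= 0.95:
            return "remove"        # high confidence: act automatically
        if score >= 0.50:
            return "human review"  # ambiguous, e.g. a photo in a news story
        return "allow"

    review_queue = []
    for post_id, score in [("a1", 0.99), ("b2", 0.70), ("c3", 0.05)]:
        action = triage(score)
        if action == "human review":
            review_queue.append(post_id)  # handed to the 24-hour operations team
        print(post_id, action)

Note that every post routed to human review adds load, and apparently risk, for the monitors the article mentions.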

The efforts put forth by Facebook are admirable, yet we can’t help but wonder how – even with their impressive AI/human team – the platform can monitor the sheer number of live-streaming videos as those numbers continue to increase. The threats, terrorist or otherwise, present in social media continue to grow with the technology and will require a much bigger fix than more manpower.

Catherine Lamsfuss, July 5, 2017

Facebook: Search Images by the Objects They Contain

July 3, 2017

Has Facebook attained the holy grail of image search? TechCrunch reports, “Facebook’s AI Unlocks the Ability to Search Photos by What’s in Them.” I imagine this will be helpful to law enforcement.

A platform Facebook originally implemented to help the visually impaired, Lumos (built on top of FBLearner Flow), is now being applied to search functionality across the social network. With this tool, one can search using keywords that describe things in the desired image, rather than relying on tags and captions. Writer John Mannes describes how this works:

Facebook trained an ever-fashionable deep neural network on tens of millions of photos. Facebook’s fortunate in this respect because its platform is already host to billions of captioned images. The model essentially matches search descriptors to features pulled from photos with some degree of probability. After matching terms to images, the model ranks its output using information from both the images and the original search. Facebook also added in weights to prioritize diversity in photo results so you don’t end up with 50 pics of the same thing with small changes in zoom and angle. In practice, all of this should produce more satisfying and relevant results.
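Facebook has not published the ranking code, but the two-stage flow Mannes describes, matching followed by diversity-aware re-ranking, can be sketched. The relevance scores, toy feature vectors, and greedy maximal-marginal-relevance step below are illustrative assumptions, not the actual Lumos algorithm.

    # Sketch: rank images by relevance, penalizing similarity to already-picked ones.
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def rank_with_diversity(relevance, features, k=3, weight=0.3):
        """Greedy pick: relevance minus a penalty for redundancy with prior picks."""
        picked, remaining = [], list(range(len(relevance)))
        while remaining and len(picked) < k:
            def marginal(i):
                redundancy = max((cosine(features[i], features[j]) for j in picked),
                                 default=0.0)
                return relevance[i] - weight * redundancy
            best = max(remaining, key=marginal)
            picked.append(best)
            remaining.remove(best)
        return picked

    # Toy data: images 0 and 1 are near-duplicate shots; image 2 is different.
    relevance = [0.95, 0.94, 0.80]
    features = [np.array([1.0, 0.0]), np.array([0.99, 0.14]), np.array([0.0, 1.0])]
    print(rank_with_diversity(relevance, features))  # [0, 2, 1]

The diversity weight does what the quote describes: the near-duplicate slips below a less relevant but visually different image instead of crowding the top of the results.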

Facebook expects to extrapolate this technology to the wealth of videos it continues to amass. This could be helpful to a user searching for personal videos, of course, but just consider the marketing potential. The article continues:

Pulling content from photos and videos provides an original vector to improve targeting. Eventually it would be nice to see a fully integrated system where one could pull information, say searching a dress you really liked in a video, and relate it back to something on Marketplace or even connect you directly with an ad-partner to improve customer experiences while keeping revenue growth afloat.

Mannes reminds us Facebook is operating amidst fierce competition in this area. Pinterest, for example, enables users to search images by the objects they contain. Google may be the furthest along, though; that inventive company has developed its own image captioning model that boasts an accuracy rate of over 90% when either identifying objects or classifying actions within images.

Cynthia Murrell, July 3, 2017

Facebook May Be Exploiting Emotions of Young Audiences

June 26, 2017

Open Rights Group, a privacy advocacy group, is demanding details of a study Facebook conducted on teens, the results of which it sold to marketing companies. This might be a blatant invasion of privacy and an attempt to capitalize on the emotional distress of teens.

In a press release sent out by the Open Rights Group titled “Rights Groups Demand More Transparency over Facebook’s ‘Insights’ into Young Users,” the spokesperson says:

It is incumbent upon Facebook as a cultural leader to protect, not exploit, the privacy of young people, especially when their vulnerable emotions are involved.

This is not the first time technology companies have come under heavy criticism from privacy rights groups. Facebook, through its social media platform, collects information and metrics from users, analyzes them, and sells the results to marketing companies. However, Facebook never explicitly tells users that they are being watched. Open Rights Group is demanding that this information be made public. Though there is little hope, will Facebook concede?

Vishal Ingole, June 26, 2017

What to Do about the Powerful Tech Monopolies

June 14, 2017

Traditionally, we as a country have a thing against monopolies—fair competition for the little guy and all that. Have we allowed today’s tech companies to amass too much power? That seems to be the conclusion of SiliconBeat’s article, “Google, Facebook, and Amazon: Monopolies that Should be Broken Up or Regulated?” Writer Ethan Baron summarizes these companies’ massive advantages and the efforts of regulatory agencies to check them. He cites a New York Times article by Jonathan Taplin:

Taplin, in his op-ed, argued that Google, Facebook and Amazon ‘have stymied innovation on a broad scale.’ With industry giants facing limited competition, incumbent companies have a profound advantage over new entrants, Taplin said. And the tech firms’ explosive growth has caused massive damage to companies already operating, he said. ‘The platforms of Google and Facebook are the point of access to all media for the majority of Americans. While profits at Google, Facebook and Amazon have soared, revenues in media businesses like newspaper publishing or the music business have, since 2001, fallen by 70 percent,’ Taplin said. The rise of Google and Facebook have diverted billions of dollars from content creators to ‘owners of monopoly platforms,’ he said. All content creators dependent on advertising must negotiate with Google or Facebook as aggregator. Taplin proposed that for the three tech behemoths, there are ‘a few obvious regulations to start with.’

Taplin suggests limiting acquisitions as the first step since that is how these companies grow into such behemoths. For Google specifically, he suggests regulating it as a public utility. He also takes aim at the “safe harbor” provision of the federal Digital Millennium Copyright Act, which shields Internet companies from damages associated with intellectual property violations found on their platforms. Since the current political climate is not exactly ripe for regulation, Taplin laments that such efforts will have to wait a few years, by which time these companies will be so large that breaking them up will be the only remedy. We’ll see.

Cynthia Murrell, June 14, 2017
