Infohazards: Another 2020 Requirement
October 20, 2020
New technologies that become society staples carry risks and require policies to rein in potential dangers. Artificial intelligence is still a developing technology, and governing policies have yet to catch up with the emerging tool. Experts in computer science, government, and other controlling organizations need to discuss how to control AI, says Vanessa Kosoy in the LessWrong blog post “Needed: AI Infohazard Policy.”
Kosoy opens her discussion of the need for an AI information policy with the standard science fiction warning: “AI risk is that AI is a danger, and therefore research into AI might be dangerous.” It is good to draw caution from science fiction to prevent real-world disaster. Experts must develop a governing body of AI guidelines to determine what learned information should be shared and how to handle results that are not published.
Individuals and single organizations cannot make these decisions alone, even if they do have their own governing policies. Governing organizations and people must coordinate their knowledge regarding AI and develop consensus policies to control AI information. Kosoy determines that any AI policy should consider the following:
• “Some results might have implications that shorten the AI timelines, but are still good to publish since the distribution of outcomes is improved.
• Usually we shouldn’t even start working on something which is in the should-not-be-published category, but sometimes the implications only become clear later, and sometimes dangerous knowledge might still be net positive as long as it’s contained.
• In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself. Therefore, safe AGI is a broad collective effort and we should expect most results to be published. In the endgame, it might become likely for a given group to make it all the way to safe AGI. In this case, incentives for secrecy become stronger.
• The policy should not fail to address extreme situations that we only expect to arise rarely, because those situations might have especially major consequences.”
She continues that any AI information policy should determine the criteria for what information is published, what channels should be consulted to determine publication, and how to handle potentially dangerous information.
These questions are universal for any technology and information with potential hazards. However, specific technological policies weed out pedantic bickering and set standards for everyone, individuals and organizations alike. The problem is getting everyone to agree on the policies.
Whitney Grace, October 20, 2020
Covid Trackers Are Wheezing in Europe
October 19, 2020
COVID-19 continues to roar across the world. Health professionals and technologists have combined their intellects attempting to provide tools to the public. The Star Tribune explains how Europe wanted to use apps to track the virus: “As Europe Faces 2nd Wave Of Virus, Tracing Apps Lack Impact.”
Europe planned for mobile apps that track where infected COVID-19 individuals are located to be integral to battling the virus. As 2020 nears its end, the apps have failed because of privacy concerns, lack of public interest, and technical problems. The last is no surprise given the demand for a rush job. The apps were supposed to notify people when they had been near infected people.
Health professionals predicted that 60% of each European country’s population would download and use the apps, but adoption rates are low. The Finns, however, reacted positively: one-third of the country downloaded Finland’s COVID-19 tracking app. Ironically, Finland’s population resists wearing masks in public.
The apps keep infected people’s identities secret. Their data remains anonymous, and the apps only alert others who come in contact with a virus carrier. Whether the information provides any help to medical professionals remains to be seen:
“We might never know for sure, said Stephen Farrell, a computer scientist at Trinity College Dublin who has studied tracing apps. That’s because most apps don’t require contact information from users, without which health authorities can’t follow up. That means it’s hard to assess how many contacts are being picked up only through apps, how their positive test rates compare with the average, and how many people who are being identified anyway are getting tested sooner and how quickly. ‘I’m not aware of any health authority measuring and publishing information about those things, and indeed they are likely hard to measure,’ Farrell said.”
Are these apps actually helpful? Maybe. But they require maintenance and constant updating. They could prevent some spread of the virus, but the tried and true methods of social distancing, wearing masks, and washing hands work better.
Whitney Grace, October 19, 2020
Apple and AWS: Security?
October 13, 2020
DarkCyber noted an essay-style report called “We Hacked Apple for 3 Months: Here’s What We Found.” The write up contains some interesting information. One particular item caught our attention:
AWS Secret Keys via PhantomJS iTune Banners and Book Title XSS
The data explorers located potential vulnerabilities that would allow such alleged actions as:
- Obtain what are essentially keys to various internal and external employee applications
- Disclose various secrets (database credentials, OAuth secrets, private keys) from the various design.apple.com applications
- Likely compromise the various internal applications via the publicly exposed GSF portal
- Execute arbitrary Vertica SQL queries and extract database information
Other issues are touched upon in the write up.
Net net: The emperor has some clothes; they are just filled with holes and poorly done stitching if the write up is correct.
Stephen E Arnold, October 13, 2020
Amazon: The Bulldozer Grinds Forward
October 7, 2020
It is hard to tell whether the company is shameless or clueless. Either way, SlashGear observes, “Amazon Has A Creepiness Problem.” The increasingly ubiquitous tech giant recently unveiled two products that will make privacy enthusiasts shiver. Writer Chris Davies reports:
“The Echo Show 10, for example, brings movement to Amazon’s smart displays, with a rotating base that promises to track you as you wander around the room. The result? A perfectly-centered video call, or a more attentive Alexa, whether you’re stood at the sink or raiding the refrigerator. Echo Show 10 seems positively pedestrian, though, in comparison to the Ring Always Home Cam. Part drone, part security camera, it launches out of a base station that resembles a fancy fragrance diffuser and then buzzes around your home to spot intruders or misbehaving pets. Never mind wondering whether the microphone on your Echo is disabled: now, the cameras themselves will be airborne.”
Naturally, Amazon offers reassurances that users are in complete control of what the devices observe and transmit. The Ring drone maintains a certain hum so one can hear it coming, and users can limit its flight area. Also, when it is docked, the camera is physically blocked. The Echo Show 10 relies on visual and audio cues to keep the user center stage, but we’re assured that data is processed locally and immediately deleted. But there is no easy way to verify the devices respect these restrictions. Users will just have to take Amazon’s word. Davies considers:
“Rationally, nothing Amazon has announced today is any more intrusive or dangerous to privacy than, well, any other smart speaker or connected camera the company has offered before. All the same, there’s a gulf between perception and reality. I could understand you being skeptical about Amazon’s intentions – and its technology – simply because it’s, well, Amazon. The company that knows so much about your shopping habits it can make pitch-perfect recommendations; the company that wants to put microphones and cameras all over your house, in your car, and in your hotel room. […]”
The man has a point. Apparently, many consumers do trust Amazon enough to place these potential spies in their homes and offices. Others, though, do not. Will a day come when it will be difficult to function in society without them? We think Amazon hopes so.
Cynthia Murrell, October 7, 2020
TikTok: Maybe Some Useful Information?
September 19, 2020
US President Donald Trump banned Americans from using TikTok because of potential information leaks to China. In an ironic twist, The Intercept explains, “Leaked Documents Reveal What TikTok Shares With Authorities—In The U.S.” It is no secret in the United States that social media platforms from TikTok to Facebook collect user data as a way to spy on users and sell products.
While the US monitors its citizens, it does not take the same censorship measures as China does with its people. The amount of data TikTok gathers for the Chinese government is alarming, but leaked documents show that the US also accesses that data. Data privacy has been a controversial topic for years within the United States, and experts argue that TikTok collects the same type of information as Google, Amazon, and Facebook. The documents reveal that the FBI and the Department of Homeland Security monitored the platform, which is operated by ByteDance, TikTok’s parent company.
Law enforcement officials used TikTok to monitor social unrest related to the death of George Floyd, who suffocated when a police officer cut off his oxygen while restraining him during an arrest. TikTok users post videos about Black Lives Matter, police protests, tips for disarming law enforcement, and even jokes about the US’s current upheaval. TikTok’s user agreement says it collects information and will share it with third parties, including law enforcement if TikTok believes there is imminent danger.
TikTok, however, also censors videos, particularly those the Chinese government dislikes. These videos include political views, the Hong Kong protests, Uyghur internment camps, and people considered poor, disabled, or ugly.
Trump might try to make the US appear the better country, but:
“‘The common concern, whether we’re talking about TikTok or Huawei, isn’t the intentions of that company necessarily but the framework within which it operates,’ said Elsa Kania, an expert on Chinese technology at the Center for a New American Security. ‘You could criticize American companies for having an opaque relationship to the U.S. government, but there definitely is a different character to the ecosystem.’ At the same time, she added, the Trump administration’s actions, including a handling of Portland protests that brought to mind the police crackdown in Hong Kong, have undercut official critiques of Chinese practices: ‘At a moment when we’re seeing attempts by the administration to draw a contrast in terms of values and ideology with China, these eerie parallels that keep recurring do really undermine that.’”
The issue is contentious. Information does not have to be used at the time of collection. The actions of youth can be used to exert pressure at a future time. That may be the larger risk.
Whitney Grace, September 19, 2020
Apple, Google Make it Easier for States to Adopt Virus Tracing App
September 12, 2020
Google and Apple created an app that would, with the cooperation of state governments, aid in tracing the spread of the coronavirus and notify citizens if they spent time around someone known to have tested positive. It is nice to see these rivals working together for the common good. So far, though, only a few states have adopted the technology. In order to encourage more states to join in, AP News reveals, “Apple, Google Build Virus-Tracing Tech Directly into Phones.” Reporter Matt O’Brien writes:
“Apple and Google are trying to get more U.S. states to adopt their phone-based approach for tracing and curbing the spread of the coronavirus by building more of the necessary technology directly into phone software. That could make it much easier for people to get the tool on their phone even if their local public health agency hasn’t built its own compatible app. The tech giants on Tuesday launched the second phase of their ‘exposure notification’ system, designed to automatically alert people if they might have been exposed to the coronavirus. Until now, only a handful of U.S. states have built pandemic apps using the tech companies’ framework, which has seen somewhat wider adoption in Europe and other parts of the world.”
In states that do adopt the system, iPhone users will be prompted for consent to run it on their phones. Android users will have to download the app, which Google will auto-generate for each public health agency that participates. Early adopters are expected to be Maryland, Nevada, Virginia, and Washington D.C. Virginia was the first to use the framework to launch a customized app in early August, followed by North Dakota, Wyoming, Alabama, and Nevada. O’Brien describes how it works:
“The technology relies on Bluetooth wireless signals to determine whether an individual has spent time near anyone else who has tested positive for the virus. Both people in this scenario must have signed up to use the Google-Apple technology. Instead of geographic location, the app relies on proximity. The companies say the app won’t reveal personal information either to them or public health officials.”
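The matching described above can be sketched in a few lines. This is a toy model under stated assumptions, not the actual Google-Apple protocol: the class name, the raw random identifiers, and the matching logic are illustrative simplifications (the real system derives rotating identifiers from daily keys and estimates proximity from Bluetooth signal strength).

```python
import secrets

class Phone:
    """Toy sketch of the exposure-notification idea: phones swap random
    identifiers over Bluetooth and match them locally on the device."""

    def __init__(self) -> None:
        self.my_ids: list[bytes] = []   # identifiers this phone has broadcast
        self.heard_ids: set[bytes] = set()  # identifiers heard from nearby phones

    def broadcast(self) -> bytes:
        # A fresh random identifier carries no personal information.
        rid = secrets.token_bytes(16)
        self.my_ids.append(rid)
        return rid

    def hear(self, rid: bytes) -> None:
        self.heard_ids.add(rid)

    def check_exposure(self, published_positive_ids: list[bytes]) -> bool:
        # When a user tests positive and consents, their identifiers are
        # published; each phone matches them against its own local log,
        # so neither location nor identity leaves the device.
        return any(rid in self.heard_ids for rid in published_positive_ids)

# Alice and Bob spend time near each other; Carol only meets Alice.
alice, bob, carol = Phone(), Phone(), Phone()
alice.hear(bob.broadcast())
carol.hear(alice.broadcast())
print(alice.check_exposure(bob.my_ids))  # True: Alice gets an alert
print(carol.check_exposure(bob.my_ids))  # False: Carol does not
```

The design choice worth noting is that the match happens on the handset, which is why the companies can claim no personal data is revealed to them or to health officials.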
This all sounds helpful. However, the world being what it is today, we must ask: does this have surveillance applications? Perhaps. Note we’re promised the app won’t “reveal” personal data, but will it retain it? If it does, will agencies be able to resist this big, juicy pile of data? Promises about surveillance have a way of being broken, after all.
Cynthia Murrell, September 12, 2020
Surveillance Footage Has Value
September 10, 2020
It is not a secret that Google, Facebook, Apple, Instagram, and other large technology companies gather user data and sell it to the highest bidder. It is an easy way to pad their bottom lines, especially when users freely give away this information. The Russian city of Moscow wants to add more revenue to the city’s coffers, so it came up with an ingenious way to get more cash, says Yahoo Finance: “Moscow May Sell Footage From Public Secret Camera: Report.”
According to the report, Moscow’s tech branch plans to broadcast videos captured on cameras in public areas. Technically, at least within the United States, if you are in a public place you are free to be filmed and whoever does the filming can do whatever they want with the footage. Russia must be acting on the same principle, so Moscow’s Department of Information Technologies purchased cameras to install outside of 539 hospitals. It might also be a way to increase security.
All of the footage will be stored on a central database and people will be able to purchase footage. The footage will also be shown on the Internet.
What is alarming is that MBK Media reported in December 2019 that footage from Moscow’s street cameras was available for purchase on black markets, with options to access individual cameras or entire camera systems. This fact is scarier, however:
“The same department organized the blockchain-based electronic voting in Moscow and one more Russian region this summer when Russians voted to amend the country’s constitution. The voting process was criticized for the weak data protection.”
Moscow wants more ways to keep track of citizens in public areas, and it wants to make some quick rubles in the process. Companies in the US, and the government as well, do the same thing.
Whitney Grace, September 10, 2020
Oh, Oh, Millennials Want Their Words and Services Enhanced. Okay, Done!
September 9, 2020
A couple of amusing items caught my attention this morning. The first is Amazon’s alleged demand that a Silicon Valley real news outlet modify its word choice.
The Bezos bulldozer affects the social environment. The trillion horsepower Prime machine wants to make sure that its low-cost gizmos are not identified with surveillance. Why is that? Perhaps because these devices’ microphones, arrays, and assorted software designed to pick up voices in far corners perform surveillance? DarkCyber does not know. The solution? Amazon = surveillance. Now any word will do, right?
The second item is mentioned in “Microsoft Confirms Why Windows Defender Can’t Be Disabled via Registry.” The idea is that Microsoft’s system is now becoming Bob’s mom. You remember Bob, don’t you? User controls? Ho ho ho.
The third item is a rib tickler. You worry about censorship of text and videos, don’t you? Now you can worry about Google’s new user-centric ability to filter your phone calls. That’s a howler. What if the call is from a person taking Google to court? Filtered. This benefits everyone. You can get the allegedly full story in “Google New Verified Calls Feature Will Tell You Why a Business Is Calling You.” Helpful.
Each of these examples amuses me. Shall we complain about Chinese surveillance apps?
These outfits are extending their perimeters as far as possible before the ever vigilant, lobbyist influenced political animals begin the great monopoly game.
Stephen E Arnold, September 9, 2020
Consumer Control of Personal Data: Too Late, Chums
September 3, 2020
“The Economics of Social Data” is an interesting write up by a Yale graduate student (how much time did you put into this work, Tan Gan?), a Yale professor (George Bush’s stomping grounds), and an MIT professor (yes, the outfit that accepted money from an alleged human trafficker and then stumbled through truth thickets).
What did these esteemed individuals discover? I like this sentence:
Platforms focuses on ensuring consumers’ control over their individual data. Regulators hope that ownership and control over one’s own data will result in appropriate compensation for the data one chooses to reveal. However, economists need to consider the social aspect of data collection. Because an individual user’s data is predictive of the behavior of others, individual data is in practice social data. The social nature of data leads to an externality: an individual’s purchase on Amazon, for example, will convey information about the likelihood of purchasing a certain product among other consumers with similar purchase histories.
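The externality the authors describe can be made concrete with a toy example. Everything here is illustrative: the shoppers, the purchase histories, and the similarity-weighted prediction are assumptions invented for the sketch, not anything from the paper.

```python
# Toy illustration of the social-data externality: one shopper's revealed
# purchases tell the platform something about similar shoppers who
# revealed nothing.
purchases = {
    "alice": {"hiking boots", "tent", "trail mix"},
    "bob":   {"hiking boots", "tent"},   # similar history to Alice
    "carol": {"lipstick", "novel"},      # dissimilar history
}

def similarity(a: set, b: set) -> float:
    """Jaccard similarity of two purchase histories."""
    return len(a & b) / len(a | b)

def predicted_interest(target: str, item: str) -> float:
    """Estimate how likely `target` is to buy `item`, using only OTHER
    users' data: the externality in action."""
    others = [h for u, h in purchases.items() if u != target]
    weights = [(similarity(purchases[target], h), item in h) for h in others]
    total = sum(w for w, _ in weights)
    return sum(w for w, bought in weights if bought) / total if total else 0.0

# Bob never revealed an interest in trail mix, yet Alice's disclosure
# lets the platform rate his interest far above Carol's.
print(predicted_interest("bob", "trail mix"))
print(predicted_interest("carol", "trail mix"))
```

The point of the sketch: Bob’s score is driven entirely by Alice’s data, which is exactly why individual ownership of one’s own data cannot, by itself, price what is being given away.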
Does this imply that a light bulb has flickered to life in the research cubbies of these influential scholars? Let’s grind forward:
While consumers can experience positive externalities, such as real-time traffic information, very little curbs the platform from trading data for profit in ways that harm consumers. Therefore, data ownership is insufficient to bring about the efficient use of information, since arbitrarily small levels of compensation can induce a consumer to relinquish her personal data.
Remember. I reside in rural Kentucky and most of my acquaintances go barefoot or wear work boots. It seems that after decades of non-regulation, governmental hand waving, and sitting on the porch watching monopolies thrive — a problem?
The fix? Here you go:
In terms of policy implications, our results on the aggregation of consumer information suggest that privacy regulation must move away from concerns over personalized prices at the individual level. Most often, firms do not set prices in response to individual-level characteristics. Instead, segmentation of consumers occurs at the group level (e.g. as in the case of Uber) or at the temporal and spatial levels (e.g. Staples, Amazon). Thus, our analysis points to the significant welfare effects of group-based price discrimination and of uniform prices that react in real time to changes in market-level demand.
Translation: Too late, chums.
Stephen E Arnold, September 3, 2020
Bad Actors Rejoice: Purrito Is Kitten with Claws
September 1, 2020
The Internet has taught us many things about people, particularly tech geeks. Technology geeks love challenging themselves with hacking tricks, possess off-base senses of humor, and love their fur babies. They particularly love cats.
A “purrito” is a term coined by the animal rescue community for tiny kittens swaddled in tiny blankets, like burritos. It goes without saying that purritos are adorable.
It is also not surprising that potential bad actors, who love cats, would purloin the Purrito for an “ultra fast, minimalistic, encrypted command line paste-bin.” Purrito Bin even uses characters to make a tabby kitten face: (=???=).
Reading through the instructions for Purrito, one sees the developer made it even cuter by calling the standard client “meow” and a companion client “purr.” Purrito Bin is a simple way to encrypt files:
“In a encrypted storage setting, the paste is encrypted before sending it to the server.
Now the server will only be used as a storage bin and even in case of a non-https connection, you are guaranteed that no one else will be able to read the data that you have sent.
How does it work?
Steps automatically done by the provided clients, on the client side:
• Randomly generate an encryption key.
• Encrypt your data using said key, the encrypted data is called the cipher.
• Send the cipher to PurritoBin and get a standard paste url as above, which will be converted to the form”
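The client-side steps above can be sketched in Python. This is a toy, assumption-laden illustration: the hash-based keystream stands in for whatever vetted cipher the real meow client uses, and the server upload is reduced to a variable, so treat it as a sketch of encrypt-before-send, not Purrito Bin’s actual code.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream (SHA-256 in counter mode) -- illustrative only; a
    real client would use a vetted cipher such as AES-GCM."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(data: bytes, key: bytes) -> bytes:
    # XOR the plaintext with the keystream byte by byte.
    return bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR stream ciphers encrypt and decrypt identically

# Client side: randomly generate a key, encrypt, send only the cipher.
key = secrets.token_bytes(32)
paste = b"my secret notes"
cipher = encrypt(paste, key)  # this is all the server ever stores
assert cipher != paste
# The key never leaves the client; whoever holds the paste URL plus the
# key can decrypt, but the server cannot.
print(decrypt(cipher, key) == paste)  # True
```

This is why even a non-HTTPS connection to the server leaks nothing readable: the server is reduced to a dumb storage bin for ciphertext.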
The concept of Purrito Bin is itself genius, but is it a good idea for it to be posted publicly where bad actors can use it?
Whitney Grace, September 1, 2020