Google Books Is Not Violating Copyright

November 12, 2015

Google Books has been controversial from the moment it was conceived, though the concept is simple and effective: books in academic libraries are scanned, and snippets are made available online. Users can search Google Books for specific words or phrases and are shown where those terms appear within a book. The Atlantic's "After Ten Years, Google Books Is Legal" describes how a Second Circuit panel of judges ruled in favor of Google Books against the Authors Guild.
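As a rough illustration of that snippet-style search, here is a toy sketch of locating a phrase in a scanned text and returning the surrounding context. It is illustrative only; Google's actual indexing and snippet display are far more sophisticated and are not described in the article.

```python
def find_snippets(text, query, context=60):
    """Return short excerpts surrounding each occurrence of the query phrase."""
    snippets = []
    lower_text, lower_query = text.lower(), query.lower()
    start = 0
    while True:
        hit = lower_text.find(lower_query, start)
        if hit == -1:
            break
        begin = max(0, hit - context)
        end = min(len(text), hit + len(query) + context)
        snippets.append("..." + text[begin:end] + "...")
        start = hit + len(query)
    return snippets

# Hypothetical scanned page text and query.
page = "The academic library scanned its stacks so researchers could search the full text."
print(find_snippets(page, "academic library"))
```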

The panel ruled that Google Books fell under the terms of "fair use," which, as most YouTubers know, is the ability to use a piece of copyrighted content within a strict set of rules. Fair use covers works of parody, academic works, quotations, criticism, and summarization.

The Authors Guild argued that Google Books was infringing on its members' copyrights and stealing potential profits, but an overly restrictive copyright is its own problem: it places too many limitations on how a work can be used, hampering the dissemination of creative and intellectual thought.

The Atlantic quotes Dan Cohen, the executive director of the Digital Public Library of America: "It gives us a better sense of where fair use lies." Rulings like this, he says, "give a firmer foundation and certainty for non-profits." The article adds, "Of all the parts of Judge Leval's decision, many people I talked to were happiest to see that it stressed that fair use's importance went beyond any tool, company, or institution. 'To me, I think a muscular fair use is an overall benefit to society, and I think it helps both authors and readers,' said Cohen."

Authors do have the right to copyright their work and profit from it; that should be encouraged, and a person's work should not simply be given away for free. There is a wealth of information, however, kept under lock and key that would otherwise never be accessed without a digital form. Speaking as one who has relied on it for research, Google Books only extends a book's reach.

Whitney Grace, November 12, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

On the Prevalence of Open Source

November 11, 2015

Who would have thought, two decades ago, that open-source code would come to dominate the software field? Vallified's Philip O'Toole meditates on "The Strange Economics of Open-Source Software." Though the industry gives so much away for free, it is doing quite well for itself.

O'Toole notes that closed-source software is still in wide use, largely in banks, in embedded devices, and in underpinning services. Also, many organizations remain attached to their Microsoft and Oracle products. But the tide has been turning; he writes:

“The increasing dominance of open-source software seems particularly true with respect to infrastructure software.  While security software has often been open-source through necessity — no-one would trust it otherwise — infrastructure is becoming the dominant category of open-source. Look at databases — MySQL, MongoDB, RethinkDB, CouchDB, InfluxDB (of which I am part of the development team), or cockroachdb. Is there anyone today that would even consider developing a new closed-source database? Or take search technology — elasticsearch, Solr, and bleve — all open-source. And Linux is so obvious, it is almost pointless to mention it. If you want to create a closed-source infrastructure solution, you better have an enormously compelling story, or be delivering it as part of a bigger package such as a software appliance.”

It has gotten to the point where developers may hesitate to work on a closed-source project because it will do nothing for their reputations. Where do the profits come from, you may ask? Why, in the sale of services, of course. It is all part of today's cloud-based reality.

Cynthia Murrell, November 11, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Drone and Balloon WiFi Coming to the Sky Near You

November 10, 2015

Google and Facebook have put their differences aside to expand Internet access to four billion people. Technology Review explains in "Facebook's Internet Drone Team Is Collaborating With Google's Stratospheric Balloons Project" how both companies have filed documents with the US Federal Communications Commission pushing for international rules that make it easier for aircraft to fly 12.5 miles (20 kilometers) above the Earth, in the stratosphere.

Google has been working on stratospheric balloons that function as aerial cell towers, while Facebook is designing aircraft-sized drones, tethered to the ground, that serve the same purpose. The companies are working together, though they will not say how. Each has pursued similar projects on its own, but the aerial cell towers mark a joint effort in which they are putting aside their differences (for the most part) to improve information access.

“However, even if Google and Facebook work together, corporations alone cannot truly spread Internet access as widely as is needed to promote equitable access to education and other necessities, says Nicholas Negroponte, a professor at MIT’s Media Lab and founder of the One Laptop Per Child Project.  ‘I think that connectivity will become a human right,’ said Negroponte, opening the session at which Facebook and Google’s Maguire and DeVaul spoke. Ensuring that everyone gets that right requires the Internet to be operated similar to public roads, and provided by governments, he said.”

Quality Internet access could not only help remedy poor education, it could also improve daily living. People in developing countries would be able to look up information to solve everyday problems and even push back against traditional practices that do more harm than good.

Some of the biggest open questions are who will maintain the aerial cell towers and whether they will pose any sort of environmental danger.

Whitney Grace, November 10, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

CEM Platform Clarabridge 7 Supports Silo Elimination

November 10, 2015

The move to eliminate data silos in the corporation has gained another friend, we learn in Direct Marketing News' piece, "Clarabridge Joins the Burn-Down-the-Silos Movement." With its latest product release, the customer experience management firm hopes to speed its clients' incorporation of business intelligence and feedback. The write-up announces:

“Clarabridge today released Clarabridge 7, joining the latest movement among marketing tech companies to speed actionability of data intelligence by burning down the corporate silos. The new release’s CX Studio promises to provide users a route to exploring the full customer journey in an intuitive manner. A new dashboard and authoring capability allows for “massive rollout,” in Clarabridge’s terms, across an entire enterprise.

“Also new are role-based dashboards that translate data in a manner relevant to specific roles, departments, and levels in an organization. The company claims that such personalization lets users take intelligence and feedback and put it immediately into action. CX Engagor expedites that by connecting business units directly with consumers in real time.”

We have to wonder whether this rush to "burn the silos" will mean that confidential information gets out: details germane to a legal matter, for example, or health information or financial data. How can security be applied to an open sea of data?

Clarabridge has spent years developing its sentiment and text analytics technology, and asserts it is uniquely positioned to support enterprise-scale customer feedback initiatives. The company maintains offices in Barcelona, London, San Francisco, Singapore, and Washington, DC. It also happens to be hiring as of this writing.

Cynthia Murrell, November 10, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Photo Farming in the Early Days

November 9, 2015

Have you ever wondered what your town looked like when it was still rural and used as farmland?  Instead of having to visit your local historical society or library (although we do encourage you to do so), you can turn to Photogrammer, a Web-based image platform for organizing, viewing, and searching photos taken for the United States Farm Security Administration and Office of War Information (FSA-OWI for short) between 1935 and 1945.

Photogrammer presents an interactive map of the United States: users can click on a state, then a city or county within it, to see photos from that period.  The archive contains over 170,000 photos, but only about 90,000 have a geographic classification.  The photos have also been grouped by photographer, although that view is limited to fifteen people.  Beyond city, photographer, year, and month, the collection can be sorted by collection tags and lot numbers (although these are not explained in much detail).
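To picture how that kind of faceted browsing works under the hood, here is a minimal, hypothetical sketch of filtering photo records by the fields the site exposes. The field names and sample records are invented for illustration and are not Photogrammer's actual schema.

```python
# Hypothetical records; the real Photogrammer/FSA-OWI metadata fields may differ.
photos = [
    {"state": "Kentucky", "county": "Breathitt", "photographer": "Marion Post Wolcott",
     "year": 1940, "month": 9, "lot": 1523},
    {"state": "California", "county": "Kern", "photographer": "Dorothea Lange",
     "year": 1936, "month": 3, "lot": 344},
]

def filter_photos(records, **criteria):
    """Return the records whose fields match every supplied criterion."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

# Everything Dorothea Lange shot in California in 1936, in this toy collection.
print(filter_photos(photos, state="California", photographer="Dorothea Lange", year=1936))
```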

While farm photographs from 1935-1945 do not appear to need their own photographic database, the collection’s history is interesting:

“In order to build support for and justify government programs, the Historical Section set out to document America, often at her most vulnerable, and the successful administration of relief service. The Farm Security Administration—Office of War Information (FSA-OWI) produced some of the most iconic images of the Great Depression and World War II and included photographers such as Dorothea Lange, Walker Evans, and Arthur Rothstein who shaped the visual culture of the era both in its moment and in American memory. Unit photographers were sent across the country. The negatives were sent to Washington, DC. The growing collection came to be known as “The File.” With the United States’ entry into WWII, the unit moved into the Office of War Information and the collection became known as the FSA-OWI File.”

While the photos do have historical importance, it might be more useful to incorporate the collection into a larger historical archive, such as the Library of Congress, than to maintain a separate, slightly flawed database as a pet project.

Whitney Grace, November 9, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Banks Turn to Blockchain Technology

November 9, 2015

Cryptocurrency has come a long way, and now big banks are taking the technology behind Bitcoin very seriously, we learn in "Nine of the World's Biggest Banks Form Blockchain Partnership" at Re/code. Led by financial technology firm R3, banks are signing on to apply blockchain tech to the financial markets. A few of the banks involved so far include Goldman Sachs, Barclays, JP Morgan, Royal Bank of Scotland, Credit Suisse, and Commonwealth Bank of Australia. The article notes:

“The blockchain works as a huge, decentralized ledger of every bitcoin transaction ever made that is verified and shared by a global network of computers and therefore is virtually tamper-proof. The Bank of England has a team dedicated to it and calls it a ‘key technological innovation.’ The data that can be secured using the technology is not restricted to bitcoin transactions. Two parties could use it to exchange any other information, within minutes and with no need for a third party to verify it. [R3 CEO David] Rutter said the initial focus would be to agree on an underlying architecture, but it had not yet been decided whether that would be underpinned by bitcoin’s blockchain or another one, such as one being built by Ethereum, which offers more features than the original bitcoin technology.”

Rutter did mention he expects this tech to be used post-trade, not directly in exchange or OTC trading, at least not soon. The hope is that blockchain technology will increase security while reducing costs and errors.
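To make the "huge, decentralized ledger" idea concrete, here is a stripped-down, hypothetical sketch of a hash-chained ledger in Python. It illustrates only the tamper-evidence property; it is not the Bitcoin, Ethereum, or R3 implementation.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(prev_block, transactions):
    """Create a block that commits to the previous block's hash."""
    return {
        "index": prev_block["index"] + 1,
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": block_hash(prev_block),
    }

def verify_chain(chain):
    """Re-derive each link; tampering with any block breaks a prev_hash reference."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = {"index": 0, "timestamp": time.time(), "transactions": [], "prev_hash": ""}
chain = [genesis]
chain.append(new_block(chain[-1], [{"from": "A", "to": "B", "amount": 5}]))
print(verify_chain(chain))  # True; alter any recorded transaction and this turns False
```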

Cynthia Murrell, November 9, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Data Analytics Is More Than Simple Emotion

November 6, 2015

Hopes and Fears posted the article "Are You Happy Now? The Uncertain Future of Emotion Analytics," which discusses the possible implications of technology capable of reading emotions.  The article opens with an idea from David Collingridge: a technology's impact can only truly be gauged once it has become so ingrained in society that it is hard to change.  Many computing labs are designing software capable of reading emotions using an array of different sensors.

The biggest question ahead is not how to integrate emotion-reading technology into our lives, but how to address the ethical concerns that come with it.

Emotion-reading technology is also known as affective computing, and the ethical concerns are more likely to arise in corporation-to-consumer relationships than in consumer-to-consumer relationships.  Companies can already track a consumer's spending habits by mining Internet data and credit card activity, then serving targeted ads.

Consumers, the article argues, should be given a choice about whether their emotions are read:

“Affective computing has the potential to intimately affect the inner workings of society and shape individual lives. Access, an international digital rights organization, emphasizes the need for informed consent, and the right for users to choose not to have their data collected. ‘All users should be fully informed about what information a company seeks to collect,’ says Drew Mitnick, Policy Counsel with Access, ‘The invasive nature of emotion analysis means that users should have as much information as possible before being asked to subject [themselves] to it.’”

While the article's topic touches on fear, it ends on a high note: we should not be afraid of the future of technology.  It is important to discuss the ethical issues now, so the groundwork is already in place to handle affective computing.

Whitney Grace, November 6, 2015

Pew Report Compares News Sources: Twitter and Facebook

November 6, 2015

As newspapers decline, what is rising to take their place? Why, social media, of course. The Pew Research Center discusses its recent findings on the subject in "The Evolving Role of News on Twitter and Facebook." The number of Americans getting their news from these platforms continues to rise across almost all demographic groups. The article informs us:

“The new study, conducted by Pew Research Center in association with the John S. and James L. Knight Foundation, finds that clear majorities of Twitter (63%) and Facebook users (63%) now say each platform serves as a source for news about events and issues outside the realm of friends and family. That share has increased substantially from 2013, when about half of users (52% of Twitter users, 47% of Facebook users) said they got news from the social platforms.”

The write-up describes some ways the platforms differ in their news delivery. For example, more users turn to Twitter for breaking news, while Facebook now features a  “Trending” sidebar, filterable by subject. The article notes that these trends can have an important impact on our society:

“As more social networking sites recognize and adapt to their role in the news environment, each will offer unique features for news users, and these features may foster shifts in news use. Those different uses around news features have implications for how Americans learn about the world and their communities, and for how they take part in the democratic process.”

Indeed. See the article for more differences between Facebook and Twitter news consumers, complete with some percentages. You can also see the data’s barebones results in the report’s final topline. Most of the data comes from a survey conducted across two weekends last March, among 2,035 Americans aged 18 and up.

Cynthia Murrell, November 6, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Machine Learning Is Play for Children

November 5, 2015

I heard an interesting idea the other day.  Most parents think their toddler is a genius for figuring out how to use a tablet, but did you ever consider that the real genius is the person who designed the tablet's interface?  Soon, software developers may regard their newest cognitive system as the next Einstein or Edison, says Computerworld in the article "Machines Will Learn Just Like a Child, Says IBM CEO."

IBM CEO Virginia Rometty said the technology is at the point where machines are close to reasoning.  Current cognitive systems are already capable of understanding unstructured data such as images, videos, songs, and more.

“‘When I say reason it’s like you and I, if there is an issue or question, they take in all the information that they know, they stack up a set of hypotheses, they run it against all that data to decide, what do I have the most confidence in,’ Rometty said. The machine ‘can prove why I do or don’t believe something, and if I have high confidence in an answer, I can show you the ranking of what my answers are and then I learn.’”
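The pattern Rometty describes, stacking up hypotheses, scoring them against the available evidence, and ranking them by confidence, can be sketched in a few lines. This is a purely hypothetical illustration, not IBM's Watson or any actual IBM system.

```python
def rank_hypotheses(hypotheses, evidence, score_fn):
    """Score each candidate answer against the evidence and rank by confidence."""
    scored = [(h, score_fn(h, evidence)) for h in hypotheses]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def keyword_overlap(hypothesis, evidence):
    """Toy confidence score: fraction of evidence passages mentioning the hypothesis."""
    hits = sum(1 for passage in evidence if hypothesis.lower() in passage.lower())
    return hits / max(len(evidence), 1)

evidence = ["The capital of France is Paris.", "Paris is home to the Louvre."]
print(rank_hypotheses(["Paris", "Lyon"], evidence, keyword_overlap))
# [('Paris', 1.0), ('Lyon', 0.0)]
```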

The cognitive systems learn more as they are fed more data, and there is growing demand for machines that can process more data, are "smarter," and can handle the routine tasks that make them useful.

The best news about machines gaining the learning capabilities of a human child is that they will not replace an actual human being, but rather augment our knowledge and current technology.

Whitney Grace, November 5, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Journalists Use Dark Web Technology to Protect Source Privacy

November 4, 2015

Canada's Globe and Mail newspaper offers those with sensitive information to reveal some Dark Web technology: "SecureDrop at the Globe and Mail." As governments become less squeamish about punishing whistleblowers, those with news the public deserves to know must be increasingly careful about how they share it. The website begins by telling potential SecureDrop users how to connect securely through the Tor network. The visitor is informed:

“The Globe and Mail does not log any of your interactions with the SecureDrop system, including your visit to this page. It installs no tracking cookies or tracking software of any kind on your computer as part of the process. Your identity is not exposed to us during the upload process, and we do not know your unique code phrase. This means that even if a code phrase is compromised, we cannot comply with demands to provide documents that were uploaded by a source with that code phrase. SecureDrop itself is an open-source project that is subject to regular security audits, reducing the risk of bugs that could compromise your information. Information provided through SecureDrop is handled appropriately by our journalists. Journalists working with uploaded files are required to use only computers with encrypted hard drives and follow security best practices. Anonymous sources are a critical element of journalism, and The Globe and Mail has always protected its sources to the best of its abilities.”

The page closes with a warning that no communication can be perfectly secure, but that this system is closer than most. Will more papers take measures to ensure folks can speak up without being tracked down?
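For readers curious what "connecting through the Tor network" looks like in practice, below is a minimal sketch that routes an HTTP request through a locally running Tor client's default SOCKS proxy. The onion address is a placeholder, and this is not the Globe and Mail's published procedure; sources should follow the instructions on the SecureDrop page itself.

```python
import requests  # requires the SOCKS extra: pip install "requests[socks]"

# Assumes a local Tor client is listening on its default SOCKS5 port (9050).
# The "socks5h" scheme makes DNS resolution happen inside Tor as well.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# Placeholder address; a real SecureDrop instance publishes its own .onion URL.
ONION_URL = "http://example.onion/"

response = requests.get(ONION_URL, proxies=TOR_PROXIES, timeout=60)
print(response.status_code)
```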

Cynthia Murrell, November 4, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
