Free Employees? Yep, Smart Software Saves Jobs Too
May 31, 2023
If you want a “free employee,” navigate to “100+ Tech Roles Prompt Templates.” The service offers:
your secret weapon for unleashing the full potential of AI in any tech role. Boost productivity, streamline communication, and empower your AI to excel in any professional setting.
The templates embrace:
- C-Level Roles
- Programming Roles
- Cybersecurity Roles
- AI Roles
- Administrative Roles
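For the curious, here is a guess at what one of these role templates might look like, sketched in Python. The field names and wording are invented for illustration; they are not taken from the actual product:

# Hypothetical sketch of a role-based prompt template. The wording and
# field names are invented; they are not from the actual template pack.
ROLE_TEMPLATE = (
    "You are a {role}. Using the context below, {task}.\n"
    "Audience: {audience}\n"
    "Context: {context}\n"
)

prompt = ROLE_TEMPLATE.format(
    role="Chief Information Security Officer",
    task="draft a one-page incident-response summary",
    audience="the board of directors",
    context="[paste incident details here]",
)
print(prompt)

Swap in a different role and task, and the “free employee” is reassigned in seconds. That is the pitch, anyway.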
How might an MBA make use of this type of capability? Here are a few thoughts:
First, terminate unproductive humans with software. The action will save time and reduce (allegedly) some costs.
Second, trim managerial staff who handle hiring, health benefits (ugh!), and administrative work related to humans.
Third, modify one’s own job description to yield more free time in which to enjoy the bonus pay the savvy MBA will receive for making the technical unit more productive.
Fourth, apply the concept to the company’s legal department, marketing department, and project management unit.
Paradise.
Stephen E Arnold, May 2023
The Data Sharing of Healthcare
December 8, 2016
Machine learning tools like IBM’s artificial intelligence Watson can and will improve healthcare access and diagnosis, but the problem is getting on the road to improvement. Implementing new technology is costly, between the equipment itself and staff training, and there is always the chance it will create more problems than it resolves. However, if the new technology makes a job easier and resolves real problems, then you are on the path to improvement. The UK is heading that way, says TechCrunch in “DeepMind Health Inks New Deal With UK’s NHS To Deploy Streams App In Early 2017.”
London’s NHS Royal Free Hospital will employ DeepMind Health in 2017, taking advantage of its data-sharing capabilities. DeepMind Health, owned by Google, focuses on applying machine-learning algorithms to preventative medicine. The NHS and DeepMind Health had a prior agreement, but when the New Scientist filed a freedom of information request, their use of patients’ personal information came into question. The information was used to power the Streams app to send alerts to acute kidney injury patients. However, the ICO and MHRA shut down Streams when it was discovered the app had never been registered as a medical device.
The eventual goal is to relaunch Streams, which is part of the new deal, but DeepMind also has to repair its reputation. The agreement is a start, and registering Streams as a medical device has helped. For healthcare AI to function properly, though, it needs to be fed high-quality data:
The point is, healthcare-related AI needs very high-quality data sets to nurture the kind of smarts DeepMind is hoping to be able to build. And the publicly funded NHS has both a wealth of such data and a pressing need to reduce costs — incentivizing it to accept the offer of “free” development work and wide-ranging partnerships with DeepMind…
Streams is the first step toward a healthcare system powered by digital healthcare products. As we have already seen, the stumbling block is protecting personal information while still feeding the apps enough data to work. Where is the line between the two drawn?
Whitney Grace, December 8, 2016
Google Shifts Development Emphasis to Artificial Intelligence
December 2, 2016
The article on The American Genius titled “Google’s Ambitious Plans to Change Every Device on the Planet” explains Google CEO Sundar Pichai’s focus on A.I. innovation. If you think Google is behind when it comes to A.I., you have not been paying close enough attention. Google has dipped its toes into voice recognition, machine translation, and language understanding, but the next step is Google Home. The article states,
This device seems to be a direct answer to Amazon’s Echo. Google Home isn’t the only product set to launch, however. They also plan to launch a messaging app called Allo. This is likely a direct response to WhatsApp, Kik, and other popular messaging platforms… Google may be hoping Allo is the answer for what this particular platform is lacking. Allo and Google Home will both be powered by a “Google assistant” (a bit like Siri), but in their eyes, more engaging.
So what will the future landscape of A.I. technology look like? That depends on whom you believe. Microsoft, Apple, and Amazon can all point to an existing product, but Google can mention AlphaGo, the computer program developed by Google DeepMind, in response. Pichai recognizes that Google must be all about the long game when it comes to A.I., because so far we have only scratched the surface. What role will Google play in the much-feared A.I. arms race? All we know right now is that more Google is good for Google.
Chelsea Kerwin, December 2, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Wisdom from the First O’Reilly AI Conference
November 28, 2016
Forbes contributor Gil Press nicely collates and summarizes the insights he found at September’s inaugural O’Reilly AI Conference, held in New York City, in his article, “12 Observations About Artificial Intelligence from the O’Reilly AI Conference.” He begins:
At the inaugural O’Reilly AI conference, 66 artificial intelligence practitioners and researchers from 39 organizations presented the current state-of-AI: From chatbots and deep learning to self-driving cars and emotion recognition to automating jobs and obstacles to AI progress to saving lives and new business opportunities. … Here’s a summary of what I heard there, embellished with a few references to recent AI news and commentary.
Here are Press’ 12 observations; check out the article for details on any that spark your interest: “AI is a black box—just like humans”; “AI is difficult”; “The AI driving driverless cars is going to make driving a hobby. Or maybe not”; “AI must consider culture and context”; “AI is not going to take all our jobs”; “AI is not going to kill us”; “AI isn’t magic and deep learning is a useful but limited tool”; “AI is Augmented Intelligence”; “AI changes how we interact with computers—and it needs a dose of empathy”; “AI should graduate from the Turing Test to smarter tests”; “AI according to Winston Churchill”; and “AI continues to be possibly hampered by a futile search for human-level intelligence while locked into a materialist paradigm.”
It is worth contemplating the point Press saved for last—are we even approaching this whole AI thing from the most productive angle? He ponders:
Is it possible that this paradigm—and the driving ambition at its core to play God and develop human-like machines—has led to the infamous ‘AI Winter’? And that continuing to adhere to it and refusing to consider ‘genuinely new ideas,’ out-of-the-dominant-paradigm ideas, will lead to yet another AI Winter? Maybe, just maybe, our minds are not computers and computers do not resemble our brains? And maybe, just maybe, if we finally abandon the futile pursuit of replicating ‘human-level AI’ in computers, we will find many additional–albeit ‘narrow’–applications of computers to enrich and improve our lives?
I think Press is on to something. Perhaps we should admit that anything approaching Rosie the Robot is still decades away (according to conference presenter Oren Etzioni). At this early date, we may do well to accept and applaud specialized AIs that do one thing very well but are completely ignorant of everything else. After all, our Roombas are unlikely to attempt conquering the world.
Cynthia Murrell, November 28, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Neural-Net AI Service Echobox Manages Newspaper Presences on Social Media
November 18, 2016
An article at Bloomberg Technology, titled “It Took Robots for This French Newspaper to Conquer Twitter,” introduces Echobox, a startup that uses a neural-network approach to managing clients’ social media presences. The newspaper mentioned in the title is the esteemed Liberation, but Echobox also counts among its clients the French Le Monde, Argentina’s La Nacion, and The Straits Times out of Singapore, among many others. Apparently, the service charges by the page view, though further pricing details are not provided. Writer Jeremy Kahn reports that Echobox:
… Determines the most opportune time to post a particular story to drive readership, can recommend what headline or tweet to send out, and can select the best photograph to illustrate the post. Using the software to post an average of 27 articles per day, Grangier [Liberation’s CTO] said that Liberation had seen a 37 percent increase in the number of people it reached on Facebook and a 42 percent boost in its reach on Twitter. ‘We have way more articles being seen by 100,000 people or more than before,’ Grangier said. He also said it made life easier for his digital editors, allowing them to spend more time curating the stories they wanted to publish to social media and less on the logistics of actually posting that content.
So, it seems the service is working. Echobox CTO Marc Fletcher described his company’s goal: to create a system that could look at content from an editor’s point of view. The company tailors its approach to each customer, of course. There are competitors in the social-media-management space, like SocialFlow and Buffer, but Kahn says Echobox goes further. He writes:
Echobox professes to offer a fuller range of automation than those services, with its software able to alter a posting schedule to adjust to breaking news, posting content related to that event, and delaying publication of less relevant stories. Echobox uses a neural network, a type of machine learning that is designed to mimic the way parts of the human brain work. This system first learns the audience composition and reading habits for each publication and then makes predictions about the best way to optimize a particular story for social media. Over time, the predictions should get more accurate as it ‘learns’ the nuances of the brand’s audience.
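The mechanics are easy to imagine in miniature. Below is a toy sketch, not Echobox’s actual system: it trains a tiny neural network on invented (hour posted, clicks) data, then picks the hour with the highest predicted engagement. All data and parameters here are hypothetical.

import numpy as np

# Toy sketch of the general idea, NOT Echobox's system: learn engagement
# as a function of posting hour, then post at the best predicted hour.
rng = np.random.default_rng(0)

# Synthetic history: engagement peaks around 18:00 in this invented data.
hours = rng.integers(0, 24, size=500)
clicks = 100 * np.exp(-((hours - 18) ** 2) / 18) + rng.normal(0, 5, 500)

X = (hours / 23.0).reshape(-1, 1)            # scale hour to [0, 1]
y = (clicks / clicks.max()).reshape(-1, 1)   # scale clicks to [0, 1]

# One-hidden-layer network trained by plain full-batch gradient descent.
W1, b1 = rng.normal(0, 1, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 1, (16, 1)), np.zeros(1)
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = h @ W2 + b2                       # predicted engagement
    grad = 2 * (pred - y) / len(y)           # d(MSE)/d(pred)
    gh = (grad @ W2.T) * (1 - h ** 2)        # backprop through tanh
    W2 -= 0.1 * h.T @ grad; b2 -= 0.1 * grad.sum(0)
    W1 -= 0.1 * X.T @ gh;   b1 -= 0.1 * gh.sum(0)

# Score every hour of the day and schedule the post at the best one.
grid = (np.arange(24) / 23.0).reshape(-1, 1)
scores = np.tanh(grid @ W1 + b1) @ W2 + b2
print("best hour to post:", int(scores.argmax()))  # ~18 on this toy data

Presumably the production system folds in far richer signals (day of week, headline wording, audience segment), but the learn-then-rank loop is the recognizable core.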
This gives us one more example of how AI capabilities are being put to practical use. Founded in 2013, Echobox is based in London and maintains an office in New York City. The company also happens to be hiring as I write this.
Cynthia Murrell, November 18, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Big Data Teaches Us We Are Big Paranoid
November 18, 2016
I love election years! Actually, that is sarcasm. Election years bring out the worst in Americans. The media runs rampant with predictions that each nominee is the equivalent of the anti-Christ and will “doom America,” “ruin the nation,” or “destroy humanity.” The sane voter knows that whoever the next president is will probably not destroy the nation or everyday life…much. Fear, hysteria, and paranoia sell more than puff pieces, and big data supports that theory. Popular news site Newsweek shares that “Our Trust In Big Data Shows We Don’t Trust Ourselves.”
The article starts with an acronym: DATA. It is not exactly new, but Newsweek puts a new spin on it. D stands for dimensions, or different datasets: the ability to combine multiple data streams for new insights. A is for automatic, which is self-explanatory. T stands for time, as in processing data in real time. The second A is for artificial intelligence, which discovers the patterns in the data.
Artificial intelligence is where the problems start to emerge. Big data algorithms can be unintentionally programmed with bias. In order to interpret data, artificial intelligence must learn from prior datasets, and those older datasets can reflect human biases such as racism, sexism, and socioeconomic prejudice.
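A toy sketch, with entirely invented data, shows the mechanism: historical hiring decisions penalize one group, and a model trained on those decisions reproduces the penalty through a correlated proxy feature, even though the protected attribute itself is never given to the model.

import numpy as np

# Entirely synthetic illustration of bias leaking through a proxy feature.
rng = np.random.default_rng(1)
n = 10_000
skill = rng.normal(0, 1, n)                # genuinely job-relevant signal
group = rng.integers(0, 2, n)              # protected attribute (hidden)
zip_proxy = group + rng.normal(0, 0.3, n)  # e.g., zip code tracking group

# Historical labels: past recruiters penalized group 1 regardless of skill.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Fit a linear model on skill and the proxy only; group itself is excluded.
X = np.column_stack([skill, zip_proxy, np.ones(n)])
w, *_ = np.linalg.lstsq(X, hired.astype(float), rcond=None)
print(f"weight on skill: {w[0]:+.3f}")     # positive, as it should be
print(f"weight on proxy: {w[1]:+.3f}")     # negative: the old bias survives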
Our machines are not as objective as we believe:
But our readiness to hand over difficult choices to machines tells us more about how we see ourselves.
Instead of seeing a job applicant as a person facing their own choices, capable of overcoming their disadvantages, they become a data point in a mathematical model. Instead of seeing an employer as a person of judgment, bringing wisdom and experience to hard decisions, they become a vector for unconscious bias and inconsistent behavior. Why do we trust the machines, biased and unaccountable as they are? Because we no longer trust ourselves.
Newsweek really knows how to be dramatic. We no longer trust ourselves? No, we trust ourselves more than ever, because we rely on machines to make our simple decisions so we can concentrate on more important topics. However, what we deem important is itself biased. To take the Newsweek example, what a job applicant considers an important submission, an HR representative will see as the 500th submission that week. Big data should provide us with better, more diverse perspectives.
Whitney Grace, November 18, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
AI to Profile Gang Members on Twitter
November 16, 2016
Researchers from the Ohio Center of Excellence in Knowledge-enabled Computing (Kno.e.sis) claim that an algorithm they developed can identify gang members on Twitter.
Vice.com recently published an article titled “Researchers Claim AI Can Identify Gang Members on Twitter,” which describes:
A deep learning AI algorithm that can identify street gang members based solely on their Twitter posts, and with 77 percent accuracy.
The article then points out the algorithm’s shortcomings:
According to one expert contacted by Motherboard, this technology has serious shortcomings that might end up doing more harm than good, especially if a computer pegs someone as a gang member just because they use certain words, enjoy rap, or frequently use certain emojis—all criteria employed by this experimental AI.
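There is also a base-rate problem lurking behind that “77 percent accuracy” figure. The sketch below is back-of-envelope arithmetic with invented numbers (it assumes the accuracy figure applies as both sensitivity and specificity, and guesses at how rare gang members are among Twitter users), but it shows why flagging a rare trait produces mostly false positives:

# Back-of-envelope arithmetic with invented numbers: why flagging a rare
# trait yields mostly false positives, even at "77 percent accuracy."
def precision(sensitivity, specificity, base_rate):
    """Share of flagged accounts that truly have the trait (Bayes' rule)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Assume (hypothetically) 77% sensitivity and specificity, and that 1 in
# 1,000 Twitter users is actually a gang member.
print(precision(0.77, 0.77, 0.001))  # ~0.003: over 99% of flags are wrong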
The shortcomings do not end there. The Twitter data is also analyzed in a silo. For example, suppose a few gang members are identified using the algorithm (remember, the AI takes no location information into consideration); what next?
Would it not then be necessary to identify the supposed gang members’ other social media profiles, examine the Big Data they generate, analyze their communication patterns, and only then draw conclusions? Unfortunately, the AI does none of this. In fact, extrapolating data from multiple sources just to identify people with certain traits would be a mammoth task.
And most importantly, what if the AI is put in place and someone, just for fun, frames an innocent person as a gang member? As the article rightly points out, machines trained on prejudiced data tend to reproduce those same, very human, prejudices.
Vishal Ingole, November 16, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Partnership Aims to Establish AI Conventions
October 24, 2016
Artificial intelligence research has been booming, and it is easy to see why: recent advances in the field have opened some exciting possibilities, both for business and for society as a whole. Still, it is important to proceed carefully, given the potential dangers of relying too heavily on the judgment of algorithms. The Philadelphia Inquirer reports on a joint effort to develop AI principles and best practices in its article, “Why This AI Partnership Could Bring Profits to These Tech Titans.” Writer Chiradeep BasuMallick explains:
Given this backdrop, the grandly named Partnership on AI to Benefit People and Society is a bold move by Alphabet, Facebook, IBM and Microsoft. These globally revered companies are literally creating a technology Justice League on a quest to shape public/government opinion on AI and to push for friendly policies regarding its application and general audience acceptability. And it should reward investors along the way.
The job at hand is very simple: Create a wave of goodwill for AI, talk about the best practices and then indirectly push for change. Remember, global laws are still obscure when it comes to AI and its impact.
Curiously enough, this elite team is missing two major heavyweights: Apple and Tesla Motors. Apple Chief Executive Tim Cook, always secretive about AI work (though we know about the estimated $200 million Turi acquisition), is probably waiting for a more opportune moment. And Elon Musk, co-founder, chief executive, and product architect of Tesla Motors, has his own platform to promote the technology: OpenAI.
Along with representatives of each participating company, the partnership also includes some independent experts in the AI field. To say that technology is advancing faster than the law can keep up with is a vast understatement. This ongoing imbalance underscores the urgency of this group’s mission to develop best practices for companies and recommendations for legislators. Their work could do a lot to shape the future of AI and, by extension, society itself. Stay tuned.
Cynthia Murrell, October 24, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Microsoft Considers Next-Generation Artificial Intelligence
August 24, 2016
While science fiction portrays artificial intelligence in novel and far-reaching ways, certain products using artificial intelligence already exist. WinBeta released a story, “Microsoft exec at London conference: AI will ‘change everything’,” which reminds us of this. Digital assistants like Cortana and Siri are one example of how mundane AI can appear. However, during a recent AI conference, Microsoft UK’s chief envisioning officer Dave Choplin projected much more impactful applications. The article summarizes the landscape of concerns,
Of course, many also are suspect about the promise of artificial intelligence and worry about its impact on everyday life or even its misuse by malevolent actors. Stephen Hawking has worried AI could be an existential threat and Tesla CEO Elon Musk has gone on to create an open source AI after worrying about its misuse. In his statements, Choplin also stressed that as more and more companies try to create AI, ‘We’ve got to start to make some decisions about whether the right people are making these algorithms.’
There is much to consider with regard to artificial intelligence. However, such a statement about “the right people” cannot stop there. Choplin goes on to point to the biases of the people creating the algorithms and of the companies they work for. If organizational structures must be considered, so too must their motivator: the economy. Perhaps using machine learning to determine the best way to approach AI would be a good first application.
Megan Feil, August 24, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Superior Customer Service Promised through the Accenture Virtual Agent Amelia
August 17, 2016
The article titled “Accenture Forms New Business Unit Around IPsoft’s Amelia AI Platform” on ZDNet introduces Amelia as a virtual agent capable of providing services in industries such as banking, insurance, and travel. Amelia looks an awful lot like Ava from the film Ex Machina, in which an AI robot manipulates a young programmer by appealing to his empathy. Similarly, Accenture’s Amelia is supposed to be far more expressive and empathetic than her kin in the female AI world, such as Siri or Amazon’s Alexa. The article states,
“Accenture said it will develop a suite of go-to-market strategies and consulting services based off of the Amelia platform…the point is to appeal to executives who “are overwhelmed by the plethora of technologies and many products that are advertising AI or Cognitive capabilities”…For Accenture, the formation of the Amelia practice is the latest push by the company to establish a presence in the rapidly expanding AI market, which research firm IDC predicts will reach $9.2 billion by 2019.”
What’s that behind Amelia, you ask? Looks like a parade of consultants ready and willing to advise the hapless executives who are so overwhelmed by their options. The Amelia AI Platform is being positioned as a superior customer service agent who will usher in the era of digital employees.
Chelsea Kerwin, August 17, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden/Dark Web meetup on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/

