AI and Increasing Inequality: Smart Software Becomes the New Dividing Line

August 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Will AI Be an Economic Blessing or Curse?” engages in prognosticative “We will be sorry” analysis. Yep, I learned about this idea in Dr. Francis Chivers’ class on Epistemology at Duquesne University. Wow! Exciting. The idea is that knowing is phenomenological. Today’s manifestation of this mental process is the “fake data” and “alternative facts” approach to knowledge.


An AI engineer cruising the AI highway. This branch of the road does not permit boondocking or begging. MidJourney disappointed me again. Sigh.

Nevertheless, the article makes a point I find quite interesting; specifically, the author invites me to think about the life of a peasant in the Middle Ages. There were some technological breakthroughs despite the Dark Ages and the charmingly named Black Death. Even though plows improved and water wheels were rediscovered, peasants were born into a social system. The basic idea was that the poor could watch rich people riding through fields and sometimes a hovel in pursuit of fun, someone who did not meet their quota of wool, or a toothsome morsel. You will have to identify a suitable substitute for the morsel token.

The write up points out (incorrectly in my opinion):

“AI has got a lot of potential – but potential to go either way,” argues Simon Johnson, professor of global economics and management at MIT Sloan School of Management. “We are at a fork in the road.”

My view is that the AI smart software speedboat is roiling the data lakes. Once those puppies hit 70 mph on the water, the casual swimmers or ill-prepared people living in houses on stilts will be disrupted.

The write up continues:

Backers of AI predict a productivity leap that will generate wealth and improve living standards. Consultancy McKinsey in June estimated it could add between $14 trillion and $22 trillion of value annually – that upper figure being roughly the current size of the U.S. economy.

On the bright side, the write up states:

An OECD survey of some 5,300 workers published in July suggested that AI could benefit job satisfaction, health and wages but was also seen posing risks around privacy, reinforcing workplace biases and pushing people to overwork.
“The question is: will AI exacerbate existing inequalities or could it actually help us get back to something much fairer?” said Johnson.

My view is not populated with an abundance of happy faces. Why? Here are my observations:

  1. Those with knowledge about AI will benefit
  2. Those with money will benefit
  3. Those in the right place at the right time, with good luck as a sidekick, will benefit
  4. Those not in Groups one, two, and three will be faced with the modern equivalent of laboring as a peasant in the fields of the Loire Valley.

The idea that technology democratizes is not in line with my experience. Sure, most people can use an automatic teller machine and a mobile phone functioning as a credit card. Those who can use these tools, however, are not likely to find themselves wallowing in the big bucks of the firms or bureaucrats who are in the AI money rushes.

Income inequality is one visible facet of a new data flyway. Some get chauffeured; others drift through it. Many stand and marvel at rushing flows of money. Some hold signs with messages like “Work needed” or “Homeless. Please, help.”

The fork in the road? Too late. The AI Flyway has been selected. From my vantage point, one benefit will be that those who can drive have some new paths to explore. For the many others who cannot afford a place to live, maybe orders of magnitude more people, the AI Byway simply opens new areas to camp.

The write up assumes the fork to the AI Flyway has not been taken. It has, and it is not particularly scenic when viewed from a speeding start up gliding on neural networks.

Stephen E Arnold, August 16, 2023

Wanna Be an AI Entrepreneur: Part 1, A How To from Crypto Experts

August 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

For those looking to learn more about AI, venture capital firm Andreessen Horowitz has gathered resources from across the Internet for a course of study it grandly calls the “AI Canon.” It is a VC’s dream curriculum in artificial intelligence. Naturally, the authors include a link to each resource. The post states:

“Research in artificial intelligence is increasing at an exponential rate. It’s difficult for AI experts to keep up with everything new being published, and even harder for beginners to know where to start. So, in this post, we’re sharing a curated list of resources we’ve relied on to get smarter about modern AI. We call it the ‘AI Canon’ because these papers, blog posts, courses, and guides have had an outsized impact on the field over the past several years. We start with a gentle introduction to transformer and latent diffusion models, which are fueling the current AI wave. Next, we go deep on technical learning resources; practical guides to building with large language models (LLMs); and analysis of the AI market. Finally, we include a reference list of landmark research results, starting with ‘Attention is All You Need’ — the 2017 paper by Google that introduced the world to transformer models and ushered in the age of generative AI.”

Yes, the Internet is flooded with articles about AI, some by humans and some by self-reporting algorithms. Even this curated list is a bit overwhelming, but at least it narrows the possibilities. It looks like a good place to start learning more about this inescapable phenomenon. And while there, one can invest in the firm’s hottest prospects, we think.
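For readers who want a concrete taste of what the canon’s capstone paper describes, here is a minimal NumPy sketch of scaled dot-product attention, the core operation “Attention Is All You Need” introduced. The toy matrices and the function name are illustrative assumptions, not anything taken from the a16z list.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V — the operation at the heart of transformer models."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the keys
    return weights @ V                                    # weighted mix of the values

# Toy example: 3 tokens with 4-dimensional embeddings (values chosen arbitrarily).
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)        # (3, 4)
```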

Cynthia Murrell, August 16, 2023

Does Information Filtering Grant the Power to Control People and Money? Yes, It Does

August 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an article which I found interesting because it illustrates how filtering works. “YouTube Starts Mass Takedowns of Videos Promoting Harmful or Ineffective Cancer Cures.” The story caught my attention because I have seen reports that the US Food & Drug Administration has been trying to explain its use of language in the midst of the Covid anomaly. The problematic word is “quips.” The idea is that official-type information was not intended as more than a “quip.” I noted the explanations as reported in articles similar to “Merely Quips? Appeals Court Says FDA Denunciations of Iv$erm#ctin Look Like Command, Not Advice.” I am not interested in either the cancer cures or the FDA’s intentions per se.


Two bright engineers built a “filter machine.” One of the engineers (the one with the hat) says, “Cool. We can accept a list of stop words or a list of urls on a watch list and block the content.” The other says, “Yes, and I have added a smart module so that any content entering the Info Shaper is stored. We don’t want to lose any valuable information, do we?” The fellow with the hat says, “No one will know what we are blocking. This means we can control messaging to about five billion people.” The co-worker says, “It is closer to six billion now.” Hey, MidJourney, despite your troubles with the outstanding Discord system, you produced a semi-useful image a couple of weeks ago.

The idea which I circled in True Blue was:

The platform will also take action against videos that discourage people from seeking professional medical treatment as it sets out its health policies going forward.

I interpreted this to mean that Alphabet Google is now implementing what I would call editorial policies. The mechanism for deciding what content is “in bounds” and what content is “out of bounds” is not clear to me. In the days when there were newspapers and magazines and non-AI generated books, there were people of a certain type and background who wanted to work in departments responsible for defining and implementing editorial policies. In the days before digital online services destroyed the business models upon which these media depended, the editorial policies operated as an important component of information machines. Commercial databases had editorial policies too. These policies helped provide consistent content based on the guidelines. Some companies did not make a big deal out of the editorial policies. Other companies and organizations did. Either way, the flow of digital content operated like a sandblaster. Now we have experienced 25 years of Wild West content output.
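To make the murkiness concrete, here is a minimal sketch of the kind of “filter machine” the image caption imagines: a naive blocklist that checks text against stop phrases and links against a URL watch list. The phrases, domains, and function name are hypothetical illustrations, not a description of YouTube’s actual moderation pipeline.

```python
from urllib.parse import urlparse

# Hypothetical examples only; not any platform's real lists.
STOP_PHRASES = {"miracle cure", "guaranteed remission"}
URL_WATCH_LIST = {"badcures.example.com"}

def is_blocked(text, links=()):
    """Return True if the item trips the phrase filter or the URL watch list."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in STOP_PHRASES):
        return True
    return any(urlparse(link).netloc in URL_WATCH_LIST for link in links)

# The first item is filtered; the second passes.
print(is_blocked("This miracle cure beats chemotherapy"))                        # True
print(is_blocked("Peer-reviewed trial results", ["https://example.org/study"]))  # False
```

The point of the sketch is that whoever writes the two lists controls what the rest of us see, and nothing in the mechanism reveals what was blocked.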

Why do I — a real and still-alive dinobaby — care about the allegedly accurate information in “YouTube Starts Mass Takedowns of Videos Promoting Harmful or Ineffective Cancer Cures”? Here are three reasons:

  1. Control of information has shifted from hundreds of businesses and organizations to a few; therefore, some of the Big Dogs want to make certain they can control information. Who wants a fake cancer cure? Like other straw men, this one invites most people to say yes to this type of filtering. A/B testing can “prove” that people want this type of filtering, I would suggest.
  2. The mechanisms to shape content have been a murky subject for Google and other high technology companies. If the “Mass Takedowns” write up is accurate, Google is making explicit its machine to manage information. In a society in which many people lack the capability to analyze information and the skills to check its provenance, information flow is going to operate in a “frame” defined by a commercial enterprise.
  3. The different governmental authorities appear to be content to allow a commercial firm to become the “decider in chief” when it comes to information flow. With concentration and consolidation comes power in my opinion.

Is there a fix? No, because I am not sure that independent-thinking individuals have the “horsepower” to redirect the big machine from its current course.

Why did I bother to write this? My hope is that someone starts thinking about the implications of a filtering machine. Without access to certain information, a calculus book for instance, most people cannot solve calculus problems. The same consequence follows when any information is simply not available. Ban books? Sure, great idea. Ban information about a medication? Sure, great idea. Ban discourse on the Internet? Sure, great idea.

You may see where this type of thinking leads. If you don’t, may I suggest you read Alexis de Tocqueville’s Democracy in America. You can find a copy at this link. (Verified on August 15, 2023, but it may be disappeared at any time. And if you can’t read it, you will not know what the savvy French guy spelled out in the mid 19th century.) If you don’t know something, then the information does not exist and will not have an impact on one’s “thinking.”

One final observation to young people, although I doubt I have any youthful readers: “Keep on scrolling.”

Stephen E Arnold, August 15, 2023

 

Sam AI-Man: A Big Spender with Trouble Ahead?

August 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

$700,000 per day. That’s an interesting number if it is accurate. “ChatGPT In Trouble: OpenAI May Go Bankrupt by 2024, AI Bot Costs Company $700,000 Every Day” states that the number is the number. What’s that mean? First, forget salaries, general and administrative costs, the much-loved health care for humans, and the oddments one finds on balance sheets. (What was that private executive flight to Tampa Bay?)


A young entrepreneur realizes he cannot pay his employees. Thanks, MidJourney, whom did you have in your digital mind?

I am a dinobaby, but I can multiply. The annual total is $255,500,000. I want to ask about money (an investment, of course) from Microsoft, how the monthly subscription fees are floating the good ship ChatGPT, and the wisdom of hauling an orb to scan eyeballs from place to place. (Doesn’t that take away from watching the bourbon caramel cookies reach their peak of perfection? My hunch is, “For sure.”)
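For the arithmetic-minded, a quick sanity check of that multiplication, assuming the reported $700,000-per-day figure and a 365-day year:

```python
# Dinobaby math: reported compute cost per day times days in a year.
daily_burn = 700_000       # dollars per day, as reported
days_per_year = 365

annual_burn = daily_burn * days_per_year
print(f"${annual_burn:,}")  # $255,500,000 — compute alone, before salaries,
                            # G&A, health care, and the balance-sheet oddments
```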

The write up reports:

…the shift from non-profit to profit-oriented, along with CEO Sam Altman’s lack of equity ownership, indicates OpenAI’s interest in profitability. Although Altman might not prioritize profits, the company does. Despite this, OpenAI hasn’t achieved profitability; its losses reached $540 million since the development of ChatGPT.

The write up points out that Microsoft’s interest in ChatGPT continues. However, the article observes:

Complicating matters further is the ongoing shortage of GPUs. Altman mentioned that the scarcity of GPUs in the market is hindering the company’s ability to enhance and train new models. OpenAI’s recent filing for a trademark on ‘GPT-5’ indicates their intention to continue training models. However, this pursuit has led to a notable drop in ChatGPT’s output quality.

Another minor issue facing Sam AI-Man is that legal eagles are circling. The Zuck dumped his pet Llama as open source. The Google chugs along in its Googley way, and Anthropic has “clawed” into visibility.

Net net: Sam AI-Man may find that he will have an opportunity to explain how the dial on the garage heater got flipped from Hot to Fan Only.

Stephen E Arnold, August 15, 2023

Will the US Take Action against Google? Yes, Just Gentle Action It Seems

August 15, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

After several years of preparation, the DOJ has finally gotten its case against Google before the US District Court for DC only to have the judge drastically narrow its scope.


A brave young person confronts a powerful creature named Googzilla. The beastie just lumbers forward. MidJourney does nice dinosaurs.

Ars Technica reports, “In Win for Google, Judge Dismisses Many Claims in DOJ Monopoly Case.” We learn:

“In his opinion unsealed Friday, Judge Amit Mehta dismissed one of the more significant claims raised in the case brought by the Justice Department and the attorneys general from 38 states that alleges that Google rigged search results to boost its own products over those of competitors like Amazon, OpenTable, Expedia, or eBay. Mehta said that these claims were ‘raised only by the Colorado plaintiffs’ and failed to show evidence of anticompetitive effects, relying only on the ‘opinion and speculation’ of antitrust legal expert Jonathan Baker, who proposed a theory of anticompetitive harm.”

Hmm, interesting take. Some might assert the anticompetitive harm is self-evident here. But wait, there’s more:

“On top of dropping claims about the anticompetitive design of Google search results, the court ‘also dismissed allegations about Google’s Android Compatibility Agreements, Anti-Fragmentation Agreements, Google Assistant, Internet of Things Devices, and Android Open Source Project,’ Google’s blog noted.”

So what is left? Just the allegedly anticompetitive agreements with Android device makers and certain browser developers to make Google their default search engine, which, of course, helped secure a reported 94 percent of the mobile search market for the company. Despite Judge Mehta’s many dismissals, Colorado Attorney General Phil Weiser is just pleased Google was unable to stop the case altogether. Now all that remains to be seen is whether Google will receive a slap on the wrist or a pat on the back for its browser shenanigans.

Cynthia Murrell, August 15, 2023

Killing Horses? Okay. Killing Digital Information? The Best Idea Ever!

August 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Fans at the 2023 Kentucky Derby were able to watch horses being killed. True, the sport of kings parks vehicles and has people stand around so the termination does not spoil a good day at the races. It seems logical to me that killing information is okay too. Personally I want horses to thrive without being brutalized for the mint julep crowd, and in my opinion, information deserves preservation. Without some type of intentional or unintentional information preservation, what would those YouTuber videos about ancient technology have to display and describe?

In “In the Age of Culling” — an article in the online publication tedium.co — I noted a number of ideas which resonated with me. The first is one of the subheads in the write up; to wit:

CNet pruning its content is a harbinger of something bigger.

The basic idea in the essay is that killing content is okay, just like killing horses.

The article states:

I am going to tell you right now that CNET is not the first website that has removed or pruned its archives, or decided to underplay them, or make them hard to access. Far from it.

The idea is that eliminating content creates an information loss. If one cannot find some item of content, that item of content does not exist for many people.

I urge you to read the entire article.

I want to shift the focus from the tedium.co essay slightly.

With digital information being “disappeared,” the culling cuts away research, some types of evidence, and collective memory. But what happens when a handful of large US companies effectively shape the information used to train smart software? Checking facts becomes more difficult because people “believe” a machine more than a human in many situations.


Two girls looking at a museum exhibit in 2028. The taller girl says, “I think this is what people used to call a library.” The shorter girl asks, “Who needs this stuff? I get what I need to know online. Besides this looks like a funeral to me.” The taller girl replies, “Yes, let’s go look at the plastic dinosaurs. When you put on the headset, the animals are real.” Thanks MidJourney for not including the word “library” or depicting the image I requested. You are so darned intelligent!

Consider the power that information filtering and weaponizing conveys over those relying on digital information. The statement “harbinger of something bigger” is correct. But if one looks forward, the potential for selective information may be the flip side of forgetting.

Trying to figure out “truth” or “accuracy” is getting more difficult each day. How does one talk about a subject when those in conversation have learned about Julius Caesar from a TikTok video and perceive no problem with tools created to sell online advertising?

This dinobaby understands that cars are speeding down the information highway, and their riders are in a reality defined by online. I am reluctant to name the changes which suggest this somewhat negative view of learning. One believes what one experiences. If those experiences are designed to generate clicks, reduce operating costs, and shape behavior — what’s the information landscape look like?

No digital archives? No past. No awareness of information weaponization? No future. Were those horses really killed? Were those archives deleted? Were those Shakespeare plays removed from the curriculum? Were the tweets deleted?

Let’s ask smart software. No thanks, I will do dinobaby stuff despite the efforts to redefine the past and weaponize the future.

Stephen E Arnold, August 14, 2023

Microsoft and Russia: A Convenient Excuse?

August 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In the SolarWinds vortex, the explanation of 1,000 Russian hackers illuminated a security failure with the heat of a burning EV with lithium batteries. Now Russian hackers have again created a problem. Are these Russians cut from the same cloth as the folks who have turned a special operation into a noir Laurel & Hardy comedy routine?

“Russia-Linked Hackers Behind Recent Wave of Microsoft Teams Phishing Attacks: Microsoft” reports:

In late May, the hacker team began its attempts to steal login credentials by engaging users in Microsoft Teams chatrooms, pretending to be from technical support. In a blog post [August 2, 2023], Microsoft researchers called the campaign a “highly targeted social engineering attack” by a Russia-based hacking team dubbed Midnight Blizzard. The hacking group, which was previously tracked as Nobelium, has been attributed by the U.S. and UK governments as part of the Foreign Intelligence Service of the Russian Federation.

Isn’t this the Russia whose planners stalled a column of tanks in its alleged lightning strike on the capital of Ukraine? I think this is the country now creating problems for Microsoft. Imagine that.

The write up continues:

For now, the fake domains and accounts have been neutralized, the researchers said. “Microsoft has mitigated the actor from using the domains and continues to investigate this activity and work to remediate the impact of the attack,” Microsoft said. The company also put forth a list of recommended precautions to reduce the risk of future attacks, including educating users about “social engineering” attacks.

Let me get this straight. Microsoft deployed software with issues. Those issues were fixed after the Russians attacked. The fix, if I understand the statement, is for customers/users to take “precautions” which include teaching obviously stupid customers/users how to be smart. I am probably off base, but it seems to me that Microsoft deployed something that was exploitable. Then after the problem became obvious, Microsoft engineered an alleged “repair.” Now Microsoft wants others to up their game.

Several observations:

  1. Why not cut and paste the statements from Microsoft’s response to the SolarWinds missteps? Why write the same old stuff and recycle the tiresome assertion about Russia? ChatGPT could probably help out Microsoft’s PR team.
  2. The bad actors target Microsoft because it sells big, overblown systems and products with security that whips some people into a frenzy of excitement.
  3. Customers and users are not going to change their behaviors even with a new training program. The system must be engineered to work in the environment of the real-life users.

Net net: The security problem can be identified when Microsofties look in a mirror. Perhaps Microsoft should train its engineers to deliver security systems and products?

Stephen E Arnold, August 14, 2023

Blue Chip Consulting Firms: A Malfunctioning System?

August 14, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Blue chip consulting companies require engagements from organizations willing to pay for “expertise.” Generative software can provide an answer quickly. Instead of having many MBAs and people with “knowledge” of a technology provide senior partners with filtered information, software can do this work quickly and at lower cost.


A senior consultant looks at a malfunctioning machine. The information he used to recommend the system caused the malfunction, which has turned into an unanticipated problem. Instead of a team of young MBAs with engineering degrees, he has access to smart software. Obviously someone will notice this problem. “Now what?” he asks himself.

“Consulting Firms Like Accenture Are Giving Recent Grads $25,000 Stipends to Push Back Their Start Dates” suggests that blue chip consulting firms are changing the approach to bringing new human resources on board. The write up reports:

Work has been slow at many top consulting firms over the past year. New hires straight out of business school are running errands and watching Netflix — to the tune of $175,000 a year — because there’s not enough work to go around. Others are being offered tens of thousands of dollars to push their start dates back to next year.

Let’s assume that this report about “not enough work to go around” is accurate. What does this suggest to me, a person who has worked at a blue chip consulting firm and provided services to blue chip consulting firms in my work career?

  1. The pipeline of work to be done is not filled or overflowing. Without engagements, billing is difficult. Without engagements, scope changes are impossible. Perhaps the cost of blue chip consultants is too high?
  2. Are clients turning to lower-cost options for traditional management consulting services? Outfits like Gerson Lehrman Group sell access to experts at a lower cost per contact than a blue chip firm. Has the gig economy crimped the sales pipeline?
  3. Do ChatGPT-type services provide “good enough” information so companies can eliminate the cost and risk of hiring a blue chip consulting firm? (I think the outfits probably should be conservative in their use of ChatGPT-type outputs, but today the “good enough” approach is the norm.)

Net net: Blue chip consulting firms are in the influence game. The delayed “start work” information indicates that changes are taking place in the market which supports these firms. The firms themselves are making changes. The signal summarized by the cited article may be a glitch. On the other hand, perhaps there is a malfunction in what has been a smoothly running machine for more than a century?

Stephen E Arnold, August 14, 2023

A Hacker Recommends Hacking Books

August 11, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Hacxx, a self-identified posting freak, has published a list of “20 Best Free Hacking Books 2023.” I checked the post on Sinister.ly and noted that the list of books did not include links to the “free” versions. I asked one of my research team to do a quick check to see if these books were free. Not surprisingly, most were available for sale. O’Reilly titles were free if one signed up for that publisher’s services. A couple were posted on a PDF download site. We think the list is helpful. For those interested in the books Hacxx says are “the best” and where to find them, we have arranged the titles in alphabetical order. Authors should be compensated for their work even if the subject is one that some might view as controversial. Right, Hacxx?

  1. Advanced Penetration Testing https://www.amazon.com/Advanced-Penetration-Testing-Hacking-Networks/dp/1119367689 [Less than $30US]
  2. Basics of Hacking and Penetration Testing https://www.amazon.com/Basics-Hacking-Penetration-Testing-Ethical/dp/0124116442?tag=50kft00-20
  3. Black Hat Python: Python Programming for Hackers and Pentesters https://www.amazon.com/Black-Hat-Python-Programming-Pentesters/dp/1593275900?tag=50kft00-20 [Less than $33US]
  4. Blue Team Handbook: Incident Response Edition https://www.amazon.com/Blue-Team-Handbook-condensed-Responder/dp/1500734756?tag=50kft00-20 [Less than $17]
  5. CISSP All-In-One Exam Guide https://www.amazon.com/CISSP-All-One-Guide-Ninth/dp/1260467376?tag=50kft00-20 [Less than $60US]
  6. Computer Hacking Beginners Guide https://www.amazon.com/Computer-Hacking-Beginners-Guide-Penetration-ebook/dp/B01N4FFHMW/ref=sr_1_1?crid=2TKYVD64M3NLS&keywords=.+Computer+Hacking+Beginners+Guide&qid=1691702342&sprefix=computer+hacking+beginners+guide%2Caps%2C91&sr=8-1 [$1US for Kindle edition]
  7. Ghost in the Wires https://www.amazon.com/Ghost-Wires-Adventures-Worlds-Wanted/dp/0316037729?tag=50kft00-20 [Less than $20US]
  8. Gray Hat Hacking: The Ethical Hacker’s Handbook, Sixth Edition https://www.amazon.com/Gray-Hat-Hacking-Ethical-Handbook/dp/1264268947?tag=50kft00-20 [Less than $46US]
  9. Hackers Playbook 2 https://www.amazon.com/Hacker-Playbook-Practical-Penetration-Testing/dp/1980901759/ref=sr_1_2?crid=3OWZ8UCLX5ANU&keywords=.+The+Hackers+Playbook+2&qid=1691701682&sprefix=the+hackers+playbook+2%2Caps%2C85&sr=8-2 [Less than $30]
  10. Hacking: Computer Hacking Beginners Guide https://pdfroom.com/books/hacking-computer-hacking-beginners-guide/p0q2J8GodxE [Free download]
  11. Hacking: The Art of Exploitation, 2nd Edition https://www.amazon.com/Hacking-Art-Exploitation-Jon-Erickson/dp/1593271441/ref=sr_1_1?crid=BY25O5JGDY95&keywords=Hacking%3A+The+Art+of+Exploitation%2C+2nd+Edition&qid=1691702542&sprefix=hacking+the+art+of+exploitation%2C+2nd+edition%2Caps%2C116&sr=8-1  [Less than $30US]
  12. Hash Crack: Password Cracking Manual https://www.amazon.com/Hash-Crack-Password-Cracking-Manual/dp/1793458618?tag=50kft00-20 [Less than $15]
  13. Kali Linux Revealed: Mastering the Penetration Testing Distribution https://www.amazon.com/Kali-Linux-Revealed-Penetration-Distribution/dp/0997615605?tag=50kft00-20 [Less than $40US]
  14. Mastering Metasploit https://github.com/PacktPublishing/Mastering-Metasploit-Third-Edition [No charge as of August 10, 2023]
  15. Nmap Network Scanning at https://nmap.org
  16. Practical Malware Analysis: The Hands-on Guide https://www.amazon.com/Practical-Malware-Analysis-Hands-Dissecting/dp/1593272901?tag=50kft00-20 [Less than $45US]
  17. RTFM: Red Team Field Manual https://www.amazon.com/RTFM-Red-Team-Field-Manual/dp/1075091837/ref=sr_1_2?crid=16SFXUJRL3LMR&keywords=RTFM%3A+Red+Team+Field+Manual&qid=1691701596&sprefix=rtfm+red+team+field+manual%2Caps%2C104&sr=8-2 [This version is about $12US]
  18. Social Engineering: The Science of Human Hacking https://www.amazon.com/Social-Engineering-Science-Human-Hacking-dp-111943338X/dp/111943338X/ref=dp_ob_title_bk [Less than $21US]
  19. Web Application Hacker’s Handbook https://edu.anarcho-copy.org/Against%20Security%20-%20Self%20Security/Dafydd%20Stuttard,%20Marcus%20Pinto%20-%20The%20web%20application%20hacker’s%20handbook_%20finding%20and%20exploiting%20security%20flaws-Wiley%20(2011).pdf [This is the second edition]
  20. Web Hacking 101 https://pdfroom.com/books/web-hacking-101/E1d4DO6ydOb [Allegedly free]

Stephen E Arnold, August 11, 2023

MBAs Want to Win By Delivering Value. It Is Like an Abstraction, Right?

August 11, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Is it completely necessary to bring technology into every aspect of one’s business? Maybe, maybe not. But apparently some believe such company-wide “digital transformation” is essential for every organization these days. And, of course, there are consulting firms eager to help. One such outfit, Third Stage Consulting Group, has posted some advice in “How to Measure Digital Transformation Results and Value Creation.” Value for whom? Third Stage, perhaps? Certainly, if one takes writer Eric Kimberling up on his invitation to contact him for a customized strategy session.

Kimberling asserts that, when embarking on a digital transformation, many companies fail to consider how they will keep the project on time, on budget, and in scope while minimizing operational disruption. Even he admits some jump onto the digital-transformation bandwagon without defining what they hope to gain:

“The most significant and crucial measure of success often goes overlooked by many organizations: the long-term business value derived from their digital transformation. Instead of focusing solely on basic reasons and justifications for undergoing the transformation, organizations should delve deeper into understanding and optimizing the long-term business value it can bring. For example, in the current phase of digital transformation, ERP [Enterprise Resource Planning] software vendors are pushing migrations to new Cloud Solutions. While this may be a viable long-term strategy, it should not be the sole justification for the transformation. Organizations need to define and quantify the expected business value and create a benefits realization plan to achieve it. … Considering the significant investments of time, money, and effort involved, organizations should strive to emerge from the transformation with substantial improvements and benefits.”

So companies should consider carefully what, if anything, they stand to gain by going through this process. Maybe some will find the answer is “nothing” or “not much,” saving themselves a lot of hassle and expense. But if one decides it is worth the trouble, rest assured many consultants are eager to guide you through. For a modest fee, of course.

Cynthia Murrell, August 11, 2023
