Employee Time App Leaks User Information
May 22, 2025
Oh boy! Security breaches are happening everywhere these days. It’s not scary until your personal information is leaked, as in “Top Employee Monitoring App Leaks 21 Million Screenshots On Thousands Of Users,” reported by TechRadar. The app in question is called WorkComposer, and it is described as an “employee productivity monitoring tool.” Cybernews cybersecurity researchers discovered an archive of millions of WorkComposer-generated real-time screenshots. These screenshots showed what each employee was working on, which might include sensitive information.
The sensitive information could include intellectual property, passwords, login portals, emails, proprietary data, and more. These leaked images are a major privacy violation, which puts WorkComposer in hot water. Privacy organizations and data watchdogs could get involved.
Here is more information about the leak:
“Cybernews said that WorkComposer exposed more than 21 million images in an unsecured Amazon S3 bucket. The company claims to have more than 200,000 active users. It could also spell trouble if it turns out that cybercriminals found the bucket in the past. At press time, there was no evidence that it did happen, and the company apparently locked the archive down in the meantime.”
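For the curious, locking down a leaky bucket is not exotic engineering. Here is a minimal sketch, assuming boto3 and AWS credentials with the right permissions; the bucket name is hypothetical, not WorkComposer’s actual bucket:

    # Turn on S3 Block Public Access for a bucket (a sketch, not WorkComposer's code).
    import boto3

    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket="workcomposer-screenshots-example",  # hypothetical bucket name
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,        # reject new public ACLs
            "IgnorePublicAcls": True,       # ignore any existing public ACLs
            "BlockPublicPolicy": True,      # reject public bucket policies
            "RestrictPublicBuckets": True,  # restrict access to AWS principals
        },
    )

One API call. The screenshots sat in the open anyway.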
WorkComposer was designed for companies to monitor the work of remote employees. It allows team leads to track their employees’ work by capturing a screenshot every twenty seconds.
It’s a useful monitoring application but a scary situation with the leaks. Why don’t the Cybernews people report the problem or help fix it? That’s a white hat trick.
Whitney Grace, May 22, 2025
IBM CEO Replaces Human HR Workers with AskHR AI
May 21, 2025
An IBM professional asks the smart AI system, “Have I been terminated?” What if the smart software hallucinates? Yeah, surprise!
Which employees are the best to replace with AI? For IBM, ironically, it is the ones with “Human” in their title. Entrepreneur reports, “IBM Replaced Hundreds of HR Workers with AI, According to Its CEO.” But not to worry, the firm actually hired workers in other areas. We learn:
“IBM CEO Arvind Krishna told The Wall Street Journal … that the tech giant had tapped into AI to take over the work of several hundred human resources employees. However, IBM’s workforce expanded instead of shrinking—the company used the resources freed up by the layoffs to hire more programmers and salespeople. ‘Our total employment has actually gone up, because what [AI] does is it gives you more investment to put into other areas,’ Krishna told The Journal. Krishna specified that those ‘other areas’ included software engineering, marketing, and sales or roles focused on ‘critical thinking,’ where employees ‘face up or against other humans, as opposed to just doing rote process work.’”
Yes, the tech giant decided to dump those touchy-feely types in personnel. Who needs human sensitivity with issues like vacations, medical benefits, discrimination claims, or potential lawsuits? That is all just rote process work, right? The AskHR agent can handle it.
According to Wedbush analyst Dan Ives, IBM is just getting started on its metamorphosis into an AI company. What does that mean for humans in other departments? Will their jobs begin to go the way of their former colleagues’ in HR? If so, who would they complain to? Watson, are you on the job?
Cynthia Murrell, May 21, 2025
Google Makes a Giant, Huge, Quantumly Supreme Change
May 19, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I read “Google’s G Logo Just Got Prettier.” Stunning news. The much loved, intensely technical Google has invented blurring colors. The decision was a result of DeepMind’s smart software and a truly motivated and respected group of artistically inclined engineers.
The old logo has been reinvented to display a gradient. Was the inspiration the hallucinatory gradient descent in Google’s smart software? Was it a result of a Googler losing his glasses and seeing the old logo as a blend of colors? Was it a result of a chance viewing of a Volvo marketing campaign with a series of images like this:
Image is from Volvo, the automobile company. You can view the original at this link. Hey, buy a Volvo.
The write up says:
Google’s new logo keeps the same letterform, as well as the bright red-yellow-green-blue color sequence, but now those colors blur into each other. The new “G” is Google’s biggest update to its visual identity since retiring serfs for its current sans-serif font, Product Sans, in 2015.
Retiring serifs, not serfs. I know it is just an AI Zillennial misstep, but Google is terminating wizards so they can find their future elsewhere. That is just so helpful.
What does the “new” and revolutionary logo look like? The image below comes from Fast Company, which is quick on the artistic side of US big technology outfits. Behold:
Source: Fast Company via the Google I think.
Fast Company explains the forward-leaning design decision:
A gradient is a safe choice for the new “G.” Tech has long been a fan of using gradients in its logos, apps, and branding, with platforms like Instagram and Apple Music tapping into the effect a decade ago. Still today, gradients remain popular, owing to their middle-ground approach to design. They’re safe but visually interesting; soft but defined. They basically go with anything thanks to their color wheel aesthetic. Other Google-owned products have already embraced gradients. YouTube is now using a new red-to-magenta gradient in its UI, and Gemini, Google’s AI tool, also uses them. Now it’s bringing the design element to its flagship Google app.
Yes, innovative.
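For those who want the magic demystified: a gradient is just linear interpolation between colors. A minimal sketch in Python; the RGB values below are placeholders, not Google’s official palette:

    # Blend two RGB colors the way a gradient does (illustrative values only).
    def lerp_color(c1, c2, t):
        """Interpolate between RGB tuples; t runs from 0.0 (c1) to 1.0 (c2)."""
        return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

    red, yellow = (219, 68, 55), (244, 180, 0)  # placeholder brand-ish colors
    ramp = [lerp_color(red, yellow, i / 4) for i in range(5)]
    print(ramp)  # five colors "blurring" from red into yellow

That is the whole trick: arithmetic on color channels.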
And Fast Company wraps up the hard hitting design analysis with some Inconel wordsmithing:
it’s not a small change for a behemoth of a company. We’ll never know how many meetings, iterations, and deliberations went into making that little blur effect, but we can safely guess it was many.
Yep, guess.
Stephen E Arnold, May 19, 2025
Grok and the Dog Which Ate the Homework
May 16, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I remember the Tesla full self-driving service. Is that available? I remember the big SpaceX rocket ship. Are those blowing up after launch? I now have to remember an “unauthorized modification” to xAI’s smart software Grok. Wow. So many items to tuck into my 80-year-old brain.
I read “xAI Blames Grok’s Obsession with White Genocide on an Unauthorized Modification.” Do I believe this assertion? Of course, I believe everything I read on the sad, ad-choked, AI content bedeviled Internet.
Let’s look at the gems of truth in the report.
First, what is an unauthorized modification of a complex software system humming along happily in Silicon Valley and, of all places, Memphis, a lovely town indeed? The unauthorized modification, whatever that is, caused a “bug in its AI-powered Grok chatbot.” If I understand this, a savvy person changed something he, she, or it was not supposed to modify. That change then caused a “bug.” I thought Grace Hopper nailed the idea of a “bug” when she pulled an insect from one of the dinobaby’s favorite systems, the Harvard Mark II. Are there insects at the X shops? Are these unauthorized insects interacting with unauthorized entities making changes that propagate more bugs? Yes.
Second, the malfunction occurs when “@grok” is used as a tag. I believe this because the “unauthorized modification” fiddled with the user mappings and jiggled scripts to allow the “white genocide” content to appear. This is definitely not hallucination; it is an “unauthorized modification.” (Did you know that the version of Grok available via X.com cannot return information from X.com (formerly Twitter) content? Strange? Of course not.)
Third, I know that Grok, xAI, and the other X entities have “internal policies and core values.” Violating these is improper. The company, like other self-regulated entities, “conducted a thorough investigation.” Absolutely. Coders at X are well equipped to perform investigations. That’s why X.com personnel are in such demand as advisors to law enforcement and cyber fraud agencies.
Finally, xAI is going to publish system prompts on Microsoft GitHub. Yes, that will definitely curtail the unauthorized modifications and bugs at X entities. What a bold solution.
The cited write up is definitely not on the same page as this dinobaby. The article reports:
A study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.
This negative report may be expanded to make the case that an exploding rocket or a wonky full self-driving vehicle is not safe. Everyone must believe X outfits. The company is a paragon of veracity, excellent engineering, and delivering exactly what it says it will provide. That is the way you must respond.
Stephen E Arnold, May 16, 2025
An Agreeable Google: Will It Write Checks with a Sad, Wry Systemic Smile?
May 14, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
Did you see the news about Google’s probable check writing?
“Google Settles Black Employees’ Racial Bias Lawsuit for $50 Million” reports:
According to the complaint, Black employees comprised only 4.4% of Google’s workforce and 3% of its leadership in 2021. The plaintiff April Curley, hired to expand outreach to historically Black colleges, said Google denied her promotions, stereotyped her as an “angry” Black woman, and fired her after six years as she prepared a report on its alleged racial bias. Managers also allegedly denigrated Black employees by declaring they were not “Googley” enough or lacked “Googleyness,” which the plaintiffs called racial dog whistles.
The little news story includes the words “racially biased corporate culture” and “systemic racial bias.” Is this the beloved “do no evil” company with the cheerful kindergarten colored logo? Frankly, this dinobaby is shocked. This must be an anomaly in the management approach of a trusted institution based on advertising.
Well, there is this story from Bloomberg, the terminal folks: “Google to Pay Texas $1.4 Billion to End Privacy Cases.” As I understand it,
Google will pay the state of Texas $1.375 billion to resolve two privacy lawsuits claiming the tech giant tracks Texans’ personal location and maintains their facial recognition data, both without their consent. Google announced the settlement Friday, ending yearslong battles with Texas Attorney General Ken Paxton (R) over the state’s strict laws on user data.
Remarkable.
The Dallas Morning News reports that Google’s position remains firm, resolute, and Googley:
The settlement doesn’t require any new changes to Google’s products, and the company did not admit any wrongdoing or liability. “This settles a raft of old claims, many of which have already been resolved elsewhere, concerning product policies we have long since changed,” said José Castañeda, a Google spokesperson. “We are pleased to put them behind us, and we will continue to build robust privacy controls into our services.”
Absolutely.
Imagine a company with those kindergarten colors in its logos finding itself snared in what seem to me to be grade-school issues. Google must be misunderstood, like one of those precocious children who solve math problems without showing their work. It is just systemic, perhaps?
Stephen E Arnold, May 14, 2025
Ask Siri: How Does Apple Avoid a Tariff Catastrophe
May 7, 2025
Visualize Tim Apple. He asks Siri, “Siri, how do I guarantee Apple’s dominance in the mobile device sector?”
Siri answers, “Just ignore reality.”
The only problem is that Siri is one example of Apple’s outstanding technology, management decision making, and financial wizardry. Too bad the outputs are incorrect.
Let’s look at one good example:
Apple’s immense success is underpinned by the global supply chain it has spent decades cultivating. Now, President Trump may have turned that asset into a liability with the stroke of a pen. The BBC explains, “Designed in US, Made in China: Why Apple is Stuck.” Though the president backtracked a bit and exempted smartphones and computers from the tariffs, those final products are just the last step of Apple’s production infrastructure. Reporter Annabelle Liang writes:
“While the sleek rectangle that runs many of our lives is indeed designed in the United States, it is likely to have come to life thousands of miles away in China: the country hit hardest by US President Donald Trump’s tariffs, now rising to 245% on some Chinese imports. Apple sells more than 220 million iPhones a year and by most estimates, nine in 10 are made in China. From the glossy screens to the battery packs, it’s here that many of the components in an Apple product are made, sourced and assembled into iPhones, iPads or Macbooks. Most are shipped to the US, Apple’s largest market. Luckily for the firm, Trump suddenly exempted smartphones, computers and some other electronic devices from his tariffs last week. But the comfort is short-lived. The president has since suggested that more tariffs are coming: ‘NOBODY is getting ‘off the hook’,’ he wrote on Truth Social, as his administration investigated ‘semiconductors and the WHOLE ELECTRONICS SUPPLY CHAIN’.”
Such a stable genius. Meanwhile, Apple is vulnerable to competition from Chinese firms that benefit from the infrastructure Apple fostered. We learn:
“‘Now that Apple has cultivated China’s electronic manufacturing capabilities, Huawei, Xiaomi, Oppo and others can reuse Apple’s mature supply chain,’ according to Mr. Lin. Last year, Apple lost its place as China’s biggest smartphone seller to Huawei and Vivo.”
Way to kick a billionaire when he is down. It seems Tim Cook may now face Apple sauce, not Apple success. Did he not kiss the ring sufficiently? The firm now plans to invest $500 billion in the US, but we doubt even that sum will relocate much of Apple’s entrenched network to these shores, or do much to placate the tariffer-in-chief. I also want to write about Apple’s ignoring the court decision regarding its online store. That’s another example of Ask Siri wisdom.
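A bit of back-of-the-envelope arithmetic on the numbers quoted above. The per-unit landed cost is my own invented assumption, not an Apple figure:

    # Rough scale of a 245% tariff on China-made iPhones (illustrative only).
    units_per_year = 220_000_000   # iPhones sold annually, per the BBC quote
    made_in_china = 0.9            # "nine in 10 are made in China"
    unit_import_cost = 450         # hypothetical landed cost per phone, USD
    tariff_rate = 2.45             # 245% on some Chinese imports

    tariff_per_unit = unit_import_cost * tariff_rate
    total_tariff = units_per_year * made_in_china * tariff_per_unit
    print(f"${tariff_per_unit:,.0f} per phone, ${total_tariff / 1e9:,.0f} billion per year")

Even with a generous fudge factor, that is not a rounding error. No wonder Tim Apple wanted the exemption.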
Cynthia Murrell, May 7, 2025
AI-Fueled Buggy Whip Executive Cannot Be Replaced by AI: A Case Study
May 2, 2025
I read about a very optimistic executive who owned buggy whip companies in the US. One day a horseless carriage, today known as a Tesla, raced past his office. The person telling me the story remembered the anecdote from her required reading in her first-year MBA strategic thinking class. The owner of the buggy whip company, she said, declared: “Those newfangled machines will not replace the horse.”
The modern version of this old chestnut appears in “Marc Andreessen Says One Job Is Mostly Safe From AI: Venture Capitalist.” I hope Mr. Andreessen is correct. The write up states:
In the future, AI will apparently be able to do everybody’s job—except Marc’s.
Here’s the logic, according to the write up:
Andreessen described his job as a nuanced combination of “intangible” skills, including psychological analysis of the entrepreneurs he works with: “A lot of it is psychological analysis, like, ‘Who are these people?’ ‘How do they react under pressure?’ ‘How do you keep them from falling apart?’ ‘How do you keep them from going crazy?’ ‘How do you keep from going crazy yourself?’ You know, you end up being a psychologist half the time.” “So, it is possible—I don’t want to be definitive—but it’s possible that that is quite literally timeless. And when, you know, when the AI is doing everything else, that may be one of the last remaining fields that people are still doing.”
I found this paragraph from the original story one that will spark some interest; to wit:
Andreessen’s powers of self-delusion are well known. His Techno-Optimist’s Manifesto, published a few years ago, was another great window into a mind addled by too much cash and too little common sense. If you’re one of Silicon Valley’s Masters of the Universe, I guess having weird, self-serving views just comes with the territory.
Several observations:
- In my opinion, some VCs will continue to use AI. Through use and greater familiarity, the technology will gain some traction. At some point, AI will handle jobs once done by wild-eyed people hungry for riches.
- Start-up VCs may rely upon AI for investment decisions, not just for grinding through the business plans of fund seekers. If those “experiments” show promise, whoever owns the smart VC may develop a next generation VC business. Ergo: Marc can stay, but he won’t do anything.
- Someone may stumble upon an AI VC workflow process that works faster, better, and more efficiently. If that firm emerges, Mr. Andreessen can become the innovator identified with digital horse accelerators.
How does one say “Giddy up” in AI-system-to-AI-system communication lingo? Answer maybe: Dweep, dweep, dupe?
Stephen E Arnold, May 2, 2025
The 10X Engineer? More Trouble Than They Are Worth
April 25, 2025
Dinobaby here. No smart software involved, unlike some outfits. I did use Sam AI-Man’s art system to produce the illustration in the blog post.
I like it when I spot a dinobaby fellow traveler. That happened this morning (March 28, 2025) when I saw the headline “In Praise of Normal Engineers: A Software Engineer Argues Against the Myth of the 10x Engineer.”
The IEEE Spectrum article states:
I don’t have a problem with the idea that there are engineers who are 10 times as productive as other engineers. The problems I do have are twofold.
Everyone is amazed that the 10X engineer does amazing things. Does the fellow become the model for other engineers in the office? Not for the other engineers. But the boss loves this super performer. Thanks, OpenAI, good enough.
The two “problems” (note the word “problems”) are:
- “Measuring productivity.” That is an understatement, not a problem. With “engineers” working from home (or, in my case, in a far-off foreign country, in a hospital waiting room, or playing video games six feet from me), productivity is a slippery business.
- “Teams own software.” Alas, that is indeed true. In 1962, I used IBM manuals to “create” a way to index. The professor who paid me $3 an hour was thrilled. I kept doing this indexing thing until the fellow died, which was when I started graduate school. Since then, whipping up software confections has required “teams.” Why? I figured out that my indexing trick was pure good fortune. After that, I made darned sure there were other eyes and minds chugging along by my side.
The write up says:
A truly great engineering organization is one where perfectly normal, workaday software engineers, with decent skills and an ordinary amount of expertise, can consistently move fast, ship code, respond to users, understand the systems they’ve built, and move the business forward a little bit more, day by day, week by week.
I like this statement. And here’s another from the article:
The best engineering orgs are not the ones with the smartest, most experienced people in the world. They’re the ones where normal software engineers can consistently make progress, deliver value to users, and move the business forward. Places where engineers can have a large impact are a magnet for top performers. Nothing makes engineers happier than building things, solving problems, and making progress.
Happy workers are magnets.
Now let’s come back to the 10X idea. I used to work at a company which provided nuclear engineering services to the US government and a handful of commercial firms engaged in the nuclear industry. We had a real live 10X type. He could crank out “stuff” with little effort. Among the 600 nuclear engineers employed at this organization, he was the 10X person. Everyone liked him, but he did not have much to say. In fact, his accent made what he said almost impenetrable. He just showed up every day in a plaid coat, doodled on a yellow pad, and handed dot points, a flow chart, or a calculation to another nuclear engineer and went back to doodling.
Absolutely no one at the nuclear engineering firm wanted to be a 10X engineer. From my years of working at this firm, I can say he was a bit of a one-off. When suits visited, a small parade would troop up to his office on the second floor. He shared that office with my close friend, Dr. James Terwilliger. Everyone would smile and look at the green board. Then they would troop out and off to lunch.
I think the presence of this 10X person was a plus for the company. The idea of trying to find another individual who could do the nuclear “stuff” like this fellow was laughable. For some reason, the 10X person liked me, and I got the informal job of accompanying him to certain engagements. I left that outfit after several years to hook up with a blue chip consulting firm. I lost track of the 10X person, but I had the learnings necessary to recognize possible 10X types. That was a useful addition to my bag of survival tips as a minus 3 thinker.
Net net: The presence of a 10X is a plus. Ignoring the other 599 engineers is a grave mistake. The errors of this 10X approach are quite evident today: Unchecked privacy violations, monopolistic behaviors enabled by people who cannot set up a new mobile phone, and a distortion of what it means to be responsible, ethical, and moral.
The 10X concept is little more than a way to make the top one percent the reason for success. Their presence is a positive, but building an organization that relies on 10X anything is one of the main contributing factors to the slow degradation of computer services, ease of use, and, in my opinion, social cohesion.
Engineers are important. The unicorn engineers are important. Balance is important. Without balance, “stuff” goes off the rails. And that’s where we are.
Stephen E Arnold, April 25, 2025
Management? Never Been Easier
April 4, 2025
Age-Old Management Techniques That Still Happen
Millennials and Gen Z focus on prioritizing mental health and not dealing with the same BS as earlier generations. Some might see these priorities as infantile, but even past generations didn’t like that BS. Inc. breaks down “4 Really Bad Management Behaviors That Drive Good Employees Away,” and the article really drives home how these management techniques have been in place for eons.
The article’s author, Marcel Schwantes, starts with a very wise statement: “Not everyone in management should be there.” He then breaks down four poor management techniques…er…behaviors that employees have been fed up with since forever. The first is shooting down their workers’ ideas. This happens on a daily basis, and everyone in the modern workforce will experience it at some point in their careers. Sometimes ideas need to be shot down, but other times a different way of doing something is just what the doctor ordered.
The second is treating people like numbers. This is a totalitarian perspective that runs rampant in corporate America, China, and North Korea. Businesses want measurable results and bodies in place to fill quotas. This doesn’t always lead to good results, and the number one thing companies learn is how many people they’ll turn over in a year.
Micromanaging is number three. Micromanagers believe everything has to be done by the book and that no one can do anything better than they can. They watch their employees like hawks, creating a stifling environment. This creates stress and more mistakes on the part of the employees. No one thrives under micromanagers.
The last poor management behavior is hoarding information and sharing it only on a need-to-know basis, if at all. When this technique thrives in a work environment, employees are working in the dark with a flashlight that only turns on once in a while.
Schwantes finishes with:
“Want to keep your best people? Treat them like people. Connect with them. Support them. Give them what they need to thrive. Because when you invest in your team, serve their needs, and remove obstacles from their path, they’ll run through walls for you—and that’s how you grow into the leader everyone wants to work for.”
No matter how far technology and humans advance, we’ll always be dealing with these management behaviors. Can anyone else imagine cavemen…er…people dealing with these? They probably didn’t put up with them and whacked their managers over the head with a club.
Whitney Grace, April 4, 2025
Original Research: Not-So-Original Assertions about Content Appropriation
April 2, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
The Social Science Research Council published the 30-plus page report “Beyond Public Access in LLM Pre-Training Data.” The subtitle reveals the principal finding: “Non-Public Book Content in OpenAI’s Models.”
The write up states:
Using a legally obtained dataset of 34 copyrighted O’Reilly Media books, we apply the DE-COP membership inference attack method to investigate whether OpenAI’s large language models were trained on copyrighted content without consent. Our AUROC scores show that GPT-4o, OpenAI’s more recent and capable model, demonstrates strong recognition of paywalled O’Reilly book content (AUROC = 82%), compared to OpenAI’s earlier model GPT-3.5 Turbo. In contrast, GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples. GPT-4o Mini, as a much smaller model, shows no knowledge of public or non-public O’Reilly Media content when tested (AUROC ≈ 50%). Testing multiple models, with the same cutoff date, helps us account for potential language shifts over time that might bias our findings. These results highlight the urgent need for increased corporate transparency regarding pre-training data sources as a means to develop formal licensing frameworks for AI content training.
I want to mention that the DE-COP method provides one way to determine whether a specific data record was part of the training dataset for a machine learning model. The result of the SSRC’s research suggests that as OpenAI enhanced its models, the OpenAI organization appears to have used non-public content. Smaller (less capable OpenAI) models seemed to know less about the test set; the bigger models knew more.
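For readers who want the mechanics: DE-COP quizzes a model with a verbatim passage hidden among paraphrases; if the model picks the verbatim text more often for one set of books than another, the AUROC score summarizes how cleanly the two sets separate. A minimal sketch of that scoring step, with invented numbers standing in for real test results:

    # AUROC step of a DE-COP-style membership test (scores below are invented).
    from sklearn.metrics import roc_auc_score

    # 1 = book suspected to be in the training data, 0 = book the model could not have seen
    labels = [1, 1, 1, 1, 0, 0, 0, 0]
    # fraction of quiz rounds in which the model picked the verbatim passage
    scores = [0.91, 0.84, 0.77, 0.88, 0.31, 0.22, 0.35, 0.28]

    auroc = roc_auc_score(labels, scores)
    print(f"AUROC = {auroc:.0%}")  # near 50% would mean chance-level guessing

An AUROC near 50 percent means the model cannot tell the two sets apart; 82 percent, the figure reported for GPT-4o, means it usually can.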
The write up concludes that developers of smart software are sucking down data without obtaining permission or entering into licensing deals with publishers, copyright holders, or entities working on content not yet available as public content like this dinobaby blog post.
To sum up: A not-so-original assertion that certain organizations just take content and ignore rights is apparently accurate in this research. OpenAI accelerates. Speed is a positive. Breaking speed limits is believed to be no big deal.
A bit of a flashback: Over 40 years ago, I participated in a series of meetings to license the right to create abstracts of original articles from more than 1000 different publications. Our efforts were successful. Most publishers of academic business and management publications knew zero about online information services. The idea of summarizing an eight page article about executive compensation was unfamiliar to most of the people with whom our organization interacted over a period of years. Nevertheless, the owner of the database unit of the Courier Journal & Louisville Times Company insisted that we get signed agreements. The process was time consuming and expensive. Some outfits had representatives. So a negotiation went through layers of bureaucracy. Other publishers wanted a sit down meeting and a dog-and-pony show to explain why we wanted to boil down a 600 word article to about 150 words. There were worries about the impact of online access. There were worries about control of editorial content. There were worries about signing away rights. We couldn’t assuage worries, but we chugged along. The reputations of the Courier Journal and Barry Bingham Jr. helped but our small database operation had to follow the rules.
If the data in the cited document are accurate, the attitude of Barry Bingham Jr. would not have gained much traction at OpenAI. Instead of my getting a pat on the head for setting up the licensing deals with the able help of other Courier Journal professionals, I would have gotten kicked on the shin or worse.
Several observations:
- Observing content ownership remains important despite the acceleration toward illegal behaviors in the online sector
- Using show-and-tells and explaining the benefits of participating in digital innovations remains important. Knowledge minimizes anger and other negative reactions to pre-emptive behavior
- Managing the process is as important as demonstrating an unwillingness to conform to industry, social, and legal norms.
Let’s assume that the findings in the SSRC report are off by 50 percent. Does that change what appears to be a strong possibility of some fancy dancing at the OpenAI disco?
My conclusion is that taking short cuts is part of the Silicon Valley approach of doing whatever and just saying, “Wow, I am sorry.”
I asked ChatGPT 4 via You.com this question:
Should I create an online product and just use content from the Harvard Business Review without permission?
Here’s the response:
No, you should not create an online product using content from the Harvard Business Review (HBR) without permission. Doing so would likely violate copyright laws and HBR’s usage policies. Here’s why:
HBR’s Copyright and Usage Policies
- HBR Material is Copyrighted: Harvard Business Review explicitly states that its content is protected by copyright, and no part of its publications may be reproduced without permission.
- Usage Restrictions: HBR has strict policies regarding how its content can be used. For example, even in academic settings, HBR articles cannot be freely distributed unless done through specific channels like creating course packs via "Harvard Business Publishing for Educators."
- Permission is Required: If you want to use HBR content, you must obtain explicit permission. This includes reproducing, distributing, or using their material in any form.
Legal and Ethical Implications
- Copyright Infringement: Using HBR content without permission would likely constitute copyright infringement, which can result in legal action, fines, or other penalties.
- Ethical Concerns: Beyond legal issues, using someone else’s intellectual property without authorization is unethical and could harm your reputation or business.
What You Should Do Instead
- Seek Permission: Contact Harvard Business Review to request permission to use their content. They may have licensing options available for your intended use.
- Create Original Content: Instead of using HBR’s material, consider creating your own original content inspired by their ideas (but without copying their text or proprietary materials).
- Use Public Domain or Licensed Content: Look for content that is in the public domain or available under a license (e.g., Creative Commons) that allows commercial use.
Conclusion
Using HBR content without permission is not only illegal but also against their explicit policies. To avoid legal and ethical issues, either seek proper licensing or create your own original material.
In this case, OpenAI’s AI may be more intelligent than some people engaged in content acquisition at the firm. But I am a dinobaby. I try to follow the social, legal, and ethical rules, as my parents, who had only high school educations, taught me. I am a dinobaby. A sad dinobaby.
Stephen E Arnold, April 2, 2025