Teams Tracking: Are You Working at Triple Peak?

April 14, 2022

I installed a new version of Microsoft Office. I had to spend some time disabling the Microsoft Cloud, Outlook, and Teams, plus a number of other odds and ends. Who in my office uses Publisher? Sorry, not me. In fact, I knew only one client who used Publisher and that was years ago. We converted that lucky person to an easier to use and more stable product.

We have tried to participate in Teams meetings. Unfortunately the system crashes on my Mac Mini, my Intel workstation, and my AMD workstation. I know the problem is obviously the fault of Apple, Intel, and AMD, but it would be nice if the Teams software would allow me to participate in a meeting. The workaround in my office is to use Zoom. It plays nice with my machines, my mostly secure set up, and the clumsy finger of my 77 year old self.

I provide the context so that you will understand my reaction to “Microsoft Discovers Triple Peak Work Day for Its Remote Employees.” As you may know, Microsoft has been adding features to Teams since the pandemic lit a fire under what was once a software service reserved for financial meetings and companies that wanted everyone, no matter what, in a digital face to face meeting. Those were super. I did some work for an early video conferencing player. I think it was called Databeam. Yep, perfect for kids who wanted to take a virtual class, not for a presentation about the turbine problems at Lockheed Martin.

Microsoft’s featuritis has embraced surveillance. I won’t run down the tools available to an “administrator” with appropriate access to a company’s Teams set up. I want to highlight the fact that Microsoft shared with ExtremeTech some information I find fascinating; to wit:

… when employees were in the office, it found “knowledge workers” usually had two periods of peak productivity: before lunch and after lunch. However, with everyone working from home there’s now a third period: late at night, right before bedtime.

My workday has for years begun about 6 am. I chug along until lunch. I then chug along until dinner. Then I chug along until I go to sleep at 10 pm. I like to think that my peak times are from 6 am to 9 am, from 10 am to noon, from 1:30 pm to 3 pm, and from 3:30 pm to 6 pm. I have been working for more than 50 years, and I am happy to admit that I am an old fashioned Type A person. Obviously Microsoft does not have many people like me in its sample. As I recall from my Booz, Allen & Hamilton days, the productive-in-the-morning crowd was a large cohort, thousands in fact. But not in the MSFT sample. These are lazy dogs, it seems.

Let’s imagine you are a Type A manager. You have some employees who work from home or from a remote location like a client’s office in Transnistria, which you may know as the Pridnestrovian Moldavian Republic. How do you know your remotes are working at their peak times? You monitor the wily creatures: Before lunch, after lunch, and before bed, or maybe during a jaunt to a disco in downtown Tiraspol.

How does this finding connect with Teams? With everyone plugged in from morning to night, the Type A manager can look at meeting attendance, participation, side talks, and other detritus sucked up by Teams’ log files. Match up the work with the times. Check to see if there are three ringing bells for each employee. Bingo. Another HR metric to use to reward or marginalize a human personnel asset.
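Curious what that tally might look like? Here is a minimal sketch in Python of the kind of peak-time scorecard a Type A manager could compute from exported activity timestamps. The log format, the field names, and the window boundaries are my own assumptions for illustration, not Microsoft’s actual Teams schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical activity log: (employee, ISO timestamp) pairs of the
# sort an admin export might contain. Schema is invented.
events = [
    ("alice", "2022-04-11T09:14:00"),
    ("alice", "2022-04-11T14:02:00"),
    ("alice", "2022-04-11T21:47:00"),
    ("bob",   "2022-04-11T10:30:00"),
    ("bob",   "2022-04-11T15:45:00"),
]

# The article's three peaks, as hour-of-day windows (my guesses).
WINDOWS = {
    "before lunch": range(9, 12),
    "after lunch": range(13, 17),
    "before bed": range(21, 24),
}

def peak_window(ts):
    """Map an event time to one of the three 'peaks', if any."""
    for name, hours in WINDOWS.items():
        if ts.hour in hours:
            return name
    return None

tallies = {}
for employee, stamp in events:
    window = peak_window(datetime.fromisoformat(stamp))
    if window:
        tallies.setdefault(employee, Counter())[window] += 1

for employee, counts in tallies.items():
    # "Three ringing bells": activity observed in all three windows.
    bells = sum(1 for w in WINDOWS if counts[w])
    print(f"{employee}: {dict(counts)} bells={bells}/3")
```

Run it, and each remote worker gets a bells-out-of-three score ready for the HR dashboard.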

I will just use Zoom and forget about people who do not work when I do.

Stephen E Arnold, April 14, 2022

A Question about Robot Scientist Methods

April 13, 2022

I read “Robot Scientist Eve Finds That Less Than One Third of Scientific Results Are Reproducible.” The write up makes a big deal of the fact that Eve (he, her, it, them) examined, in a semi automated way, 12,000 research papers. From that set, 74 were “found” to be super special. Of the 74, 22 were “found” to be reproducible. I think I am supposed to say, “Wow, that’s amazing.”

I am not ready to be amazed because one question arose:

Can Eve’s (she, her, it, them) results be replicated? What about papers about Shakespeare, what about high energy physics, and what about SAIL Snorkel papers?

Answers, anyone?

I have zero doubt that peer reviewed, often wild and crazy research results were from one of these categories:

  1. Statistics 101 filtered through the sampling, analytic, and shaping methods embraced by the researcher or researchers.
  2. A blend of some real life data with synthetic data generated by a method prized at a prestigious research university.
  3. A collection of disparate data smoothed until suitable for a senior researcher to output a useful research finding.

Why are data from researchers off the track? I believe the culprits are the quest for grants, tenure, payback to advisors, or just a desire to be famous at a conference attended by people who are into the arcane research field for which the studies are generated.

I want to point out that one third being sort of reproducible is a much better score than the data output from blue chip and mid tier consulting firms about mobile phone usage, cyber crime systems, and the number of computers sold in the last three month period. Much of that information is from the University of the Imagination. My hunch is that quite a few super duper scholars have a degree in marketing or maybe an MBA.

Stephen E Arnold, April 13, 2022

Online Advertising: A Trigger Warning May Be Needed

March 18, 2022

I read “How Can We Know If Paid Search Advertising Works?” The write up is about Google but it is not about Google in my opinion. A number of outfits selling messages may be following a well worn path: Statistical mumbo jumbo and fear of missing out on a big sale.

Advertising executives once relied on the mostly entertaining methods captured in “Mad Men.” In the digital era, the suits have been exchanged for khakis, shorts, and hoodies. But the objective is the same: Find an advertiser, invoke fear of missing out on a sale, and haul off the cash. Will a sale happen? Yeah, but one never really knows if it was advertising, marketing, or the wife’s brother in law helping out a very odd younger brother who played video games during the Thanksgiving dinner.

The approach in the article is a mix of common sense and selective statistical analysis. The selective part is okay because the online advertisers engage in selective statistical behavior 24×7.

Here’s a statement from the article I found interesting:

It was almost like people were using the paid links, not to learn about products, but to navigate to the site. In other words, it appeared like selection bias with respect to paid click advertising and arrival at the site was probably baked into their data.

The observations that search sucks and that people use ads because they are lazy are equally valid. The point is that online advertisers are fearful of missing a sale. These lucky professionals will, therefore, buy online ads and believe that sales are a direct result. But there may be some doubt, enhanced by the incantations of the Web marketing faction of the organization who say, “Ads are great, but we have to do more search engine optimization.”

A two-fer. The Web site and our products/services are advertised and people buy or “know” about our brand or us. By promoting the Web site we get the bonus sales from the regular, non paid search findability. This argument makes many people happy, particularly the online ad sales team and probably the SEO consulting experts. The real payoff is that the top dog’s anxiety level decreases. He/she/them is/are happier campers.

Identifying causal effects does not happen with wishes.
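To make that concrete, here is a toy simulation of the selection-bias argument quoted above: shoppers who intended to buy anyway click the paid link merely to navigate to the site, and naive attribution credits those sales to the ad. Every number below is invented for illustration.

```python
import random

random.seed(42)

# Toy model: "navigators" were going to buy regardless and use the
# paid link only to reach the site. All rates are invented.
def simulate(n=100_000, ads_on=True):
    sales = 0
    sales_via_ad = 0
    for _ in range(n):
        navigator = random.random() < 0.05   # already intended to buy
        if navigator:
            sales += 1
            if ads_on:
                sales_via_ad += 1            # clicked the ad to navigate
        else:
            # Ads genuinely persuade only a tiny slice of browsers.
            if ads_on and random.random() < 0.002:
                sales += 1
                sales_via_ad += 1
    return sales, sales_via_ad

with_ads, attributed = simulate(ads_on=True)
without_ads, _ = simulate(ads_on=False)

print(f"Sales attributed to ads (naive report): {attributed}")
print(f"True lift (ads on minus ads off):       {with_ads - without_ads}")
```

The naive report claims thousands of ad-driven sales; the with/without comparison shows a lift in the low hundreds. That gap is what wishes cannot close.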

I am no expert in online advertising. I think the write up suggests that the data used to prove the value of online advertising are shaped. Wow, what a surprise. Why would the leaders in selling online advertising craft a message which may not be anchored in much more than “wishes”?

Money? Yep, money.

Stephen E Arnold, March 18, 2022

What Google Knows about the Honest You

December 10, 2021

I read this quote in a Kleenex story about Google’s lists of popular searches:

“You’re never as honest as you are with your search engine. You get a sense of what people genuinely care about and genuinely want to know — and not just how they’re presenting themselves to the rest of the world.”

The alleged Googler crafting this statement is a data editor. You can read more about the highly selective and unverified Google search trends in “What Google’s Trending Searches Say about America in 2021.”

For me, the statement allows several observations:

  1. A person acting in an unguarded way reveals information not usually disseminated in “guarded” settings; for example, a job interview.
  2. The word “honest” implies an unvarnished look at the psycho-social factors within a single person.
  3. A collection of data points about the psycho-social aspects of a single person makes it possible to tag, classify, and relate that individual to others. Numerical procedures allow a person or system with access to those data to predict certain behaviors, predispositions, or actions.

Thus, the collection of searches, clicks, and items created by an individual using Google services such as Gmail and YouTube create a palette of color from which a data maestro can paint a picture.
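A minimal sketch of how the painting gets done, assuming nothing fancier than bag-of-words vectors and cosine similarity; the users and their queries are invented:

```python
from collections import Counter
from math import sqrt

# Invented query histories. The point: raw searches become a vector,
# and vectors can be compared, clustered, and scored.
histories = {
    "user_a": ["knee pain", "ibuprofen dose", "knee brace reviews"],
    "user_b": ["knee pain", "physical therapy near me"],
    "user_c": ["cheap flights", "hotel deals", "travel insurance"],
}

def to_vector(queries):
    """Bag-of-words vector from a user's search history."""
    words = Counter()
    for q in queries:
        words.update(q.split())
    return words

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

vecs = {u: to_vector(qs) for u, qs in histories.items()}
for u in ("user_b", "user_c"):
    print(f"user_a vs {u}: {cosine(vecs['user_a'], vecs[u]):.2f}")
```

Once user_a and user_b land in the same neighborhood, whatever one of them is predicted to want, buy, or fear, the other is presumed to share.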

Predestination has never been easier, more automatable, or cheaper to convert into an actionable knowledgebase for smart software. Yep, just simple queries. Useful indeed.

Stephen E Arnold, December 10, 2021

More AI Foibles: Inheriting Biases

December 7, 2021

Artificial intelligence algorithms are already implemented in organizations, but the final decisions are still made by humans. It is a fact that algorithms are unfortunately programmed with biases against minorities and marginalized communities. Although it might appear that these biases are purposefully built into the AI, they are not. The problem is that AI designers lack sufficiently diverse data to feed their algorithms. Biases are discussed in The Next Web’s article, “Worried About AI Ethics? Worry About Developers’ Ethics First.”

The article cites Asimov’s famous three laws of robotics and notes that ethics change depending on the situation and the individual. AI systems cannot weigh these variables the way humans do, so they must be taught. The question is what ethics AI developers are “teaching” to their creations.

Autonomous cars are a great example, because they rely on human and AI input to make decisions to avoid accidents. Is there a moral obligation to program autonomous cars to override a driver’s decision to prevent collisions? Medicine is another worrisome field. Doctors still make critical choices, but will AI remove the human factor in the not too distant future? There are also weaponized drones and other military robots that could prolong warfare or be hacked.

The philosophical trolley problem is cited, followed by this:

People often struggle to make decisions that could have a life-changing outcome. When evaluating how we react to such situations, one study reported choices can vary depending on a range of factors including the respondent’s age, gender and culture.

When it comes to AI systems, the algorithms training processes are critical to how they will work in the real world. A system developed in one country can be influenced by the views, politics, ethics and morals of that country, making it unsuitable for use in another place and time.

If the system was controlling aircraft, or guiding a missile, you’d want a high level of confidence it was trained with data that’s representative of the environment it’s being used in.

The United Nations has called for “a comprehensive global standard-setting instrument” for a global ethical AI network. It is a step in the right direction, especially when it comes to ethnic diversity problems. AI that does not take into account eye shape, skin color, or other physical features is an understandable oversight by developers who do not share those features. These gaps can be fixed with broadened data collection.
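The representativeness problem lends itself to a toy demonstration. In this sketch, a threshold classifier is fit on data from one group only and then scored on both; the groups, distributions, and numbers are all invented.

```python
import random

random.seed(0)

# Two groups share the same labels but have shifted feature
# distributions; a cutoff fit only on group A misfires on group B.
def sample(group, n):
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        center = (2.0 if label else 0.0) + (1.5 if group == "B" else 0.0)
        data.append((random.gauss(center, 0.7), label))
    return data

def fit_threshold(data):
    """Pick the cutoff that maximizes training accuracy."""
    candidates = sorted(x for x, _ in data)
    return max(candidates, key=lambda t: sum((x >= t) == y for x, y in data))

def accuracy(data, t):
    return sum((x >= t) == y for x, y in data) / len(data)

train = sample("A", 2000)           # group B is absent from training
t = fit_threshold(train)
for g in ("A", "B"):
    print(f"group {g} accuracy: {accuracy(sample(g, 2000), t):.2f}")
# Group A scores high; group B, whose distribution was never seen,
# is systematically misclassified until its data is collected too.
```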

A bigger problem would be differences between sexes and socioeconomic backgrounds. Women are treated as less than second class citizens in many societies, and socioeconomic status determines nearly everything in all countries. How are developers going to address these ethical issues? How about a deep dive with a snorkel to investigate?

Whitney Grace, December 7, 2021

Counter Intuitive or Unaware of Costco?

November 30, 2021

I try to sidestep arguments with academics cranking out silly or addled reports that are supposed to be impactful. I read “Shopping Trolleys Save Shoppers Money As Pushing Reduces Spending, Finds New Study.” This research gem asserts:

Psychology research has proven that triceps activation is associated with rejecting things we don’t like – for example when we push or hold something away from us – while biceps activation is associated with things we do like – for example when we pull or hold something close to our body. When testing the newly designed trolley on consumers at a supermarket, report authors Professor Zachary Estes and Mathias Streicher found that those who used shopping trolleys with parallel handles bought more products and spent 25 per cent more money than those using the standard trolley.

A couple of thoughts:

  1. A shopping cart or trolley with square wheels would do the trick too, right?
  2. A shopping cart weighing more than 50 kilos would do the trick, particularly in small shops near retirement facilities.
  3. An ALDI style approach, just with a cart use fee of $100, might inhibit shopping.

But the real proof is a visit to Costco. Here’s a snap of what I see when my wife and I visit our local big box store in rural Kentucky:

[image: shoppers with carts at the local Costco]

If the person can’t push it, there are motor driven carts.

Stephen E Arnold, November 30, 2021

Facebook and Smoothing Data

November 26, 2021

I like this headline: “The Thousands of Vulnerable People Harmed by Facebook and Instagram Are Lost in Meta’s Average User Data.” Here’s a passage I noticed:

consider a world in which Instagram has a rich-get-richer and poor-get-poorer effect on the well-being of users. A majority, those already doing well to begin with, find Instagram provides social affirmation and helps them stay connected to friends. A minority, those who are struggling with depression and loneliness, see these posts and wind up feeling worse. If you average them together in a study, you might not see much of a change over time.

The write up points out:

The tendency to ignore harm on the margins isn’t unique to mental health or even the consequences of social media. Allowing the bulk of experience to obscure the fate of smaller groups is a common mistake, and I’d argue that these are often the people society should be most concerned about. It can also be a pernicious tactic. Tobacco companies and scientists alike once argued that premature death among some smokers was not a serious concern because most people who have smoked a cigarette do not die of lung cancer.

I like the word “pernicious.” But the keeper is “cancer.” The idea is, it seems to me, that Facebook (sorry, Meta) is “cancer.” Cancer is a term for diseases in which abnormal cells divide without control and can invade nearby tissues. Cancer evokes a particularly sonorous word too: Malignancy. Indeed the bound phrase, when applied to one’s great aunt, is particularly memorable; for example, Auntie has a malignant tumor.

Is Facebook (sorry, Meta) smoothing numbers the way the local baker applies icing to a so-so cake laced with trendy substances like cannabutter and cannaoil? My hunch is that dumping outliers, curve fitting, and subsetting data are handy little tools.
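The averaging trick in the first quotation takes about a dozen lines to reproduce. A minimal sketch, with every figure invented: a large subgroup improves slightly, a small subgroup deteriorates sharply, and the overall mean reports almost nothing.

```python
import random

random.seed(1)

# Rich-get-richer, poor-get-poorer: mild benefit for most users,
# serious harm for a minority. All figures are invented.
majority = [random.gauss(+0.5, 1.0) for _ in range(9000)]   # mild benefit
minority = [random.gauss(-4.0, 1.0) for _ in range(1000)]   # serious harm

everyone = majority + minority
mean = sum(everyone) / len(everyone)

print(f"average well-being change, all users: {mean:+.2f}")
print(f"average change, majority subgroup:    {sum(majority)/len(majority):+.2f}")
print(f"average change, minority subgroup:    {sum(minority)/len(minority):+.2f}")
# Overall mean is roughly +0.05: the headline number hides a
# subgroup in real trouble.
```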

What’s the harm?

Stephen E Arnold, November 26, 2021

Survey Says: Facebook Is a Problem

November 11, 2021

I believe everything I read on the Internet. I also have great confidence in surveys conducted by estimable news organizations. A double whammy for me was the CNN study refined by SSRS Research. You can read the big logo version at this link.

The survey reports that Facebook is a problem. Okay, who knew?

Here’s a snippet about the survey:

About one-third of the public — including 44% of Republicans and 27% of Democrats — say both that Facebook is making American society worse and that Facebook itself is more at fault than its users.

Delightful.

Stephen E Arnold, November 11, 2021

The Business Intelligence You Know Is Changing

November 11, 2021

I read “This Is the Future of Intelligence.” I have been keeping my researchers on their toes because I have an upcoming lecture about “intelligence,” not about getting grades in schools which have discarded Ds and Fs. The talk is about law enforcement and investigator centric intelligence. That’s persons of interest, events, timelines, and other related topics.

This article references a research report from a mid tier consulting firm. That may ring your chimes or make you chuckle. Either way, here are three gems from the write up. I leave it to you to discern the wheat and the chaff.

How about this statement:

Prediction 1: By 2025, 10% of F500 companies will incorporate scientific methods and systematic experimentation at scale, resulting in a 50% increase in product development and business planning projects — outpacing peers.

In 36 months, a tenth of the Fortune 500 companies! I wonder how many of these outfits will be able to pay for the administrative overhead that hitting this target will require. Revenue, not hand waving, strikes me as more important.

And this chunky Wheaties flake:

By 2026, 30% of organizations will use forms of behavioral economics and AI/ML-driven insights to nudge employees’ actions, leading to a 60% increase in desired outcomes.

If we look at bellwether outfits like Amazon and Google, I wonder if the employee push back and internal tension will deliver “desired outcomes.” What seems to be delivered are reports of management wonkiness, discrimination, and legal matters.

And finally, a sparkling Sugar Pop pellet:

By 2026, advances in computing will enable 10% of previously unsurmountable problems faced by F100 organizations to be solved by super-exponential advances in complex analytics.

I like the “previously unsurmountable problems” phrase. I don’t know what a super-exponential advance in complex analytics means. Oh, well. The mid tier experts do, I assume.

Read the list of ten findings. I had a good chuckle with a snort thrown in for good measure.

Stephen E Arnold, November 11, 2021

Research? Sure. Accurate? Yeah, Sort Of

October 19, 2021

Facebook is currently under scrutiny unlike any it has seen since the 2018 Cambridge Analytica scandal. Ironically, much of the criticism cites research produced by the company itself. The Verge discusses “Why These Facebook Research Scandals Are Different.” Reporter Casey Newton tells us about a series of stories about Facebook published by The Wall Street Journal collectively known as The Facebook Files. We learn:

“The stories detail an opaque, separate system of government for elite users known as XCheck; provide evidence that Instagram can be harmful to a significant percentage of teenage girls; and reveal that entire political parties have changed their policies in response to changes in the News Feed algorithm. The stories also uncovered massive inequality in how Facebook moderates content in foreign countries compared to the investment it has made in the United States. The stories have galvanized public attention, and members of Congress have announced a probe. And scrutiny is growing as reporters at other outlets contribute material of their own. For instance: MIT Technology Review found that despite Facebook’s significant investment in security, by October 2019, Eastern European troll farms reached 140 million people a month with propaganda — and 75 percent of those users saw it not because they followed a page but because Facebook’s recommendation engine served it to them. ProPublica investigated Facebook Marketplace and found thousands of fake accounts participating in a wide variety of scams. The New York Times revealed that Facebook has sought to improve its reputation in part by pumping pro-Facebook stories into the News Feed, an effort known as ‘Project Amplify.’”

Yes, Facebook is doing everything it can to convince people it is a force for good despite the negative press. This includes implementing “Project Amplify” on its own platform to persuade users its reputation is actually good, despite what they may have heard elsewhere. Pay no attention to the man behind the curtain. We learn the company may also stop producing in-house research that reveals its own harmful nature. Not surprising, though Newton argues Facebook should do more research, not less—transparency would help build trust, he says. Somehow we doubt the company will take that advice.

A legacy of the Cambridge Analytica affair is the concept that social media algorithms, perhaps Facebook’s especially, are reshaping society. And not in a good way. We are still unclear about how and to what extent each social media company works to curtail false and harmful content. Is Facebook finally facing a reckoning, and will it eventually extend to social media in general? See the article for more discussion.

Cynthia Murrell, October 19, 2021
