Stop Emotional AI

Microsoft Isn’t the Only Company that Needs to Stop Emotional AI Programs

Tim O’Connor – CPH President – 6/28/2022

Have you thought about buying a new car within the next year? Have you bought a car in the last six months? Have you heard of OnStar? Almost 25 years ago I used to give that survey. If the respondent answered no to the final question, the survey was over. Back then I didn’t think about the capability of putting a kill switch into an automobile; I only thought about the assistance rendered if someone got into a crash. It’s all about the selling points, and the selling point is always convenience, even if that convenience is being able to contact police and EMS at the push of a button in the event of an accident.

Most vehicles today, especially the modern grocery-getters, have backup cameras. That same camera can sit dormant and record everywhere you go. So can GPS systems. OnStar can track your vehicle’s movements as well. Insurance companies entice customers with lower rates if they will put the insurer’s transponders into their vehicles to monitor their driving habits and reward ‘good’ behavior. All of this generates data which can be sold at a premium.

Those buying such data are usually involved in some form of real-time geospatial surveillance. The data feeds not only artificial intelligence (AI) systems but also their customers, who learn where the traffic jams are, which streets are under construction, and the quickest route from point A to point B. It is surveillance and data in exchange for convenience. Too many of us have fallen for this ruse, and too many of us keep going along with it. Stop it. Let me explain why we need to stop it.

Microsoft is in the news because they are ending their current round of what is known as affective computing. Affective computing focuses on inferring human emotions from facial expressions. It is a global industry valued at over $20 billion and expected to grow by 33% a year between now and 2028. Microsoft is not the only player in this game, but they are the largest to have ended (for now) their affective computing AI. Why did they do this?

According to Natasha Crampton, Microsoft’s chief responsible AI officer:

“‘Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of 'emotions,' the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability[.]’”

Please note that Crampton says a scientific consensus has to exist before Microsoft will decide to proceed again with this endeavor. Science is not supposed to operate on consensus; it is supposed to operate by forming a hypothesis, running tests that yield data, evaluating that data to see whether the hypothesis is confirmed, and, if it is confirmed, replicating the experiment. Crampton is saying that because science has not produced a concrete definition of emotion, Microsoft can no longer proceed. Then she pays lip service to the obvious privacy concerns, privacy Microsoft has already violated by collecting this kind of data and trying to detect emotions through an AI.

Sandra Wachter, a senior research fellow at Oxford University, addressed the real issue here:

“‘Even if we were to find evidence that AI is reliably able to infer emotions, that alone would still not justify its use,’ she said. ‘Our thoughts and emotions are the most intimate parts of our personality and are protected by human rights such as the right to privacy.’”

Microsoft has put a pause on their program. I have no illusions here: they will go find more data, from more diverse people, to get better, more accurate results, despite Wachter’s concerns. Microsoft is hardly the only company engaged in this type of thing. Cognito, Class, and Affectiva are some of the companies dedicated to creating software that monitors emotions for various purposes. Apple, IBM, Facebook, Zoom, Twitter, and TikTok are also likely candidates to use this type of AI, as well as to sell the data they have collected from billions of feeds. I’m convinced Microsoft will reconfigure its AI, increase its data set, and relaunch its affective computing program.

You may be wondering how this affects you and why I started off talking about cars. Those who seek control over you want to put these emotion-reading systems into our cars, buses, airplanes, grocery stores, shopping malls, and everywhere else they can think of. In particular, the companies mentioned above, along with all of their competitors, would love nothing more than to get their software into our cars. And they will end up succeeding, because normal people are not paying attention. The convenience of these surveillance systems will be sold as never getting into an accident. Why won’t anyone get into an accident? Because anytime the system decides a driver is angry, sleepy, agitated, or otherwise distracted, the car will just stop moving (I sketch what that kind of gate looks like after the quote below). Wonderful, huh? Dogtown Media puts it this way:

“Affectiva and numerous other companies in the affective computing space, such as Eyeris, Xperi, and Cerence all have plans to partner with auto industry titans to install emotion-reading AI in their new car models. Recent regulations in Europe and bills introduced in the US senate are helping to normalize the concept of “driver monitoring” by focusing on the safety benefits they present. For example, some systems can preemptively warn if a driver is falling asleep at the wheel or staring at their phone screen.”
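
To make concrete the kind of gate I’m describing, here is a minimal, purely hypothetical sketch in Python. None of these names, labels, or thresholds come from Affectiva, Eyeris, Xperi, Cerence, or any other real vendor; the emotion labels, the confidence cutoff, and the immobilization hook are all assumptions for illustration only.

```python
# Purely hypothetical sketch of the "driver monitoring" gate described above.
# Every name, label, and threshold here is invented for illustration;
# no real vendor's API or logic is being quoted.

from dataclasses import dataclass

# States the hypothetical in-cabin classifier might flag as disqualifying.
BLOCKING_STATES = {"angry", "sleepy", "agitated", "distracted"}


@dataclass
class CabinFrame:
    """One frame from the in-cabin camera, already run through an emotion classifier."""
    state: str          # e.g. "calm", "angry", "sleepy"
    confidence: float   # classifier confidence, from 0.0 to 1.0


def should_immobilize(frames, threshold=0.8):
    """Return True if any recent frame shows a blocking state above the confidence threshold."""
    return any(
        f.state in BLOCKING_STATES and f.confidence >= threshold
        for f in frames
    )


# Example: a few seconds of frames where the system decides the driver is "agitated".
recent = [
    CabinFrame("calm", 0.95),
    CabinFrame("agitated", 0.85),
    CabinFrame("agitated", 0.90),
]

if should_immobilize(recent):
    # In the scenario described above, this is the moment the car "just stops moving".
    flagged = {f.state for f in recent if f.state in BLOCKING_STATES}
    print("Vehicle immobilized: driver flagged as", flagged)
```

The point is not that any vendor’s code looks exactly like this; the point is that once a gate like this sits between you and your ignition, whoever sets the labels and the thresholds decides when your car moves.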

About that bill in the US: it is sitting in the Senate Committee on Commerce, Science, and Transportation, currently chaired by Maria Cantwell, a Democrat out of Washington. The bill is S. 1406, the Stay Aware For Everyone Act of 2021. It was introduced by Ed Markey and cosponsored by two of his fellow demonic entities, Dick Blumenthal and Amy Klobuchar, all Marxist destroyers of the Republic of the United States. The bill is intended to make the Secretary of Transportation (currently Pete Buttigieg) submit a report on how to implement ‘driver monitoring’ systems in newly manufactured automobiles. After that report, the Secretary of Transportation will have 4 years from the date of passage of the bill to create a final rule mandating that automobiles for use in the United States have ‘driver monitoring’ systems installed within 2 years.

While the bill is sitting there doing nothing in a Senate committee, it can be pulled out of committee at pretty much any time and voted on by the full Senate. It is possible this pile of garbage becomes law, especially considering who cosponsored the thing. What it really does is allow AI to create surveillance data for the US government to use in its own AI systems to further surveil the US populace. I hope the bill sits there forever, dies, and never gets reintroduced. The fact that the bill exists makes me want to find a Constitutional reason to defund the Department of Transportation, and, thinking about it, there is no Constitutional mandate that the Department exist, so it would be possible to get rid of it.

My point is that our cars, should we be able to afford them, will have these systems in them unless people demand that the US surveillance state be dismantled. The time is short, but the time is absolutely now to begin demanding that the open-air prison being erected around us be halted, and that the parts which have already been erected be torn down.
