The Harari Discussion


james.morris@cmu.edu <james.morris@cmu.edu>

Mon, Feb 15, 2021 at 1:06 PM

To: zuck@fb.com

Cc: eschmidt@schmidtfutures.com, future-2030 <future-2030@googlegroups.com>



Mark:

I have been avoiding Facebook since the Cambridge Analytica fiasco and since reading Zuboff's book. I wasn't worried about Russian trolls influencing me, but I suspected that someone like Rachel Maddow could.

Your discussion with Harari was excellent, and, as he said at the end, you are courageous to deal with his challenging questions. I won't ask you to make a video with Zuboff; she's too adamant.

As you and Harari were puzzling over how to control the incredible engine that you and Google have demonstrated, I had a simple idea: Allow the user to set the goals for your ML algorithm and don't insert any of Facebook's goals. For example, I would fill out a menu of items like controlling my weight, reducing stress, finding love, etc., and you would stop telling the ML algorithm to increase engagement.

I understand that the ML algorithms depend upon strong behavioral signals, and that engagement is far easier to measure than stress, but I would happily strap on all sorts of biometric devices (e.g., to measure cortisol levels) if I believed that only I was setting the goals, in consultation with my doctors. In other words, I would totally embrace surveillance and behavioral nudging if I were in control of the goals. I would also be willing to pay for this service and to accept advertisements clearly labeled with who pays for them.

Is this idea naive?

As you said, doing the right thing may end up being good for business in the long run.

--

James H. Morris

http://www.cs.cmu.edu/~jhm