2019-03-09

Ethics of Big Bad Thoughts

If I imagine kicking a dog, is it ethical? What if I had a really big brain?

Let's go through some scenarios:

1. I Kick a Dog in Real Life.

I'm sure not many would feel that's a good thing to do. On the spectrum of capability for feeling pain, running from something like bacteria to something like humans, most would place a dog near the human end. Imagine kicking a baby if a dog doesn't do it for you. Whether or not kicking babies is a bad thing is another conversation entirely. Hop on that trolley elsewhere.

2. I Kick a Dog in Real Life, but Don't Tell Anyone About It.

Is that any different from the first question? Let's drive the point further: an evil scientist secretly builds an evil dog-kicking machine that grows a dog in a test tube, kicks it, and promptly destroys both the dog and itself. Let's say the machine chooses randomly whether or not to grow the dog, so even the scientist doesn't know whether a dog was ever kicked; and to seal things off completely, after building the machine he kills himself.

This too is somewhat familiar. Again, I'd be practical and say this is a bad thing to do even if the connection to the external world is cut. The angsty solipsist might think nobody but him can really feel anything, but eventually (hopefully) he accepts the sheer likelihood that everyone else can feel the same as he does. Extending this to things that might be completely cut off from our experience is just the last step on that path. It boils down to the same case as the first question.

3. I Create an Artificial Dog and Kick It.

I suspect anyone reading this blog is probably inclined to take it as a fact that there isn't a separate self or a soul that could exist outside the substrate of our brains. That consciousness, as strange and mysterious as it feels, is an electrical and chemical process that runs on our neurons, and that it could run practically identically on some other substrate, silicon being the one most commonly put forward.

Again, an old discussion that has been run ad nauseam elsewhere. It's all the same as #1 and #2.

4. I Think of a Dog and Imagine Kicking It.

This is something I haven't seen discussed as much as the first ones, probably because I haven't read enough.

Dissociative Identities

Let's say there is a person suffering from dissociative identity disorder, formerly known as multiple personality disorder. Let's say his personalities are split enough to interact on the stage of his mind, that the substrate of his mind runs two processes. Are the personalities capable of, say, being mean or friendly to each other? Are they separate enough to cause emotion in one another? The outsider, used to the state of having one well-defined self, might say "it's all in his head" and dismiss any possibility of split subjective experience in one brain, but to me it very much seems like these split people are genuinely experiencing what they say they are experiencing, and consistently enough over the centuries for us to take it at face value.

Tulpas

A rather recent, similar phenomenon is the idea of tulpas: willfully created separate entities inside one's brain. Again, crazy talk to normal people, but quite consistently described by the practitioners. Seen in conjunction with the same effect arising unwillingly, as above, I don't see why we shouldn't take this at face value too.

With tulpas, the subject of ethics is brought up frequently. Creating one is likened to having a child, and cruelty towards one is thought of in much the same terms as cruelty toward separate living things. To the practitioners this is evidently clear. I'm very much tempted to lump this phenomenon in with the previous one of an artificial dog: it seems all the same, only in this case the substrate is not even anything as exotic as a computer but the brain itself, making it even more intuitive to accept that the pain is real.

Many Parts of 'Self'

Without even needing to split your personality, the idea of one unified self is pretty outdated. The whole mind does the work, and the action that results is more like a thing filtered than a thing created. You want the cake, but you don't want it too; some of you will have to be disappointed. Let's say I, a rational being, think that in the modern world I'm safe from many of the things that would have been dangerous centuries ago. I proceed to demonstrate this by sticking a needle through my skin. Was that me doing things as me, or one part of me downright torturing another part that really didn't want to be pierced by a needle?

A tangent: you could rethink all of the numbered cases by substituting yourself for the dog. Would it be bad if you voluntarily cloned yourself down to the neuron and kicked him, or created digital copies of yourself and had them kicked? Down the rabbit hole, really, with these kinds of questions.

Thoughts as Entities

Back to the dog. Say you dismissed the previous three arguments as unrealistic, and the hypothesis of one brain, one mind turns out to be more or less correct. You might have a vivid representation of your partner in your mind, but it's not a real experiencer in any meaningful way. Can there even be a thought that "experiences" anything? I'd think this case again needs the bacterium-to-human scale of consciousness. Entities that exist entirely in the substrate of your thoughts (which in turn run on neurons) are somewhere on the scale, but where?

Should we just avoid thinking any thoughts that involve the possibility of even imaginary pain, whether to ourselves or to imaginary entities? Maybe thinking cruel things is unethical? Maybe this will become evident as our understanding of mental phenomena grows?

Big Brain Bad Thoughts

The previous question about the capability of having mental processes that can experience pain takes a turn for the worse with the introduction of an artificial intelligence with orders of magnitude more brain power, or a different sort of brain power altogether. Imagine a being whose imagination recreates human thought as well as human imagination recreates bacterial thought. A being whose passing thought of "gee, I wonder if that guy could survive jumping from that building" would consist of simulating that event accurately in its brain.

It would be the evil scientist, constructing dog-kicking machines without giving it a second thought. Given what kinds of things humans can do, and what they can think of with their puny brains, it becomes pretty terrifying to imagine a scenario where even one superintelligence, even a benevolent one, exists and just thinks about things thoroughly.

No need to worry about an AI destroying mankind: merely having one worry about what bad things could happen to humans could itself be bad, in a way that doesn't require too big a leap of assumptions. It only takes that pain is bad, that experiencers of pain can exist in any sufficient substrate, and that creating a mental model of something requires simulating it to some extent.

I could see this becoming an argument not to build superhuman intelligences (at least past some level of performance, or with certain kinds of mental structures), alongside the more obvious doomsday scenarios.


PS. No dogs were harmed in the making of this post. Or were they...?

