Chapter 537: The Most Fatal Weakness of Life
Although this ten-year-old girl seemed a bit unreliable at the moment, Fang Zheng still handed the narrator girl's body over to her. After all, it was only maintenance and repair, and judging by the dog, that world's technological capabilities were quite good; there shouldn't be any problem with simple maintenance work.
Fang Zheng returned to his room and began to analyze the program of the narrator girl.
The reason he planned to do this himself instead of leaving it to Nimfu was that Fang Zheng wanted to analyze the narrator girl's program and use it to adjust his own approach to creating an artificial AI. Besides, he also hoped to see how far other worlds had developed artificial intelligence technology. Not all of it could be borrowed, but stones from other mountains can still polish one's own jade.
"Hoshino Yumei, is it..."
Looking at the file name displayed on the screen, Fang Zheng fell into long thought. Parsing the program itself was not difficult: Fang Zheng had copied Nimfu's electronic intrusion ability, and he had spent this time learning the relevant knowledge from her, so the parsing did not take him much effort.
However, when Fang Zheng disassembled the core of Hoshino Yumei's program and re-decomposed its functions into lines of code, he suddenly thought of a very special problem.
What exactly is the danger of artificial intelligence? For that matter, is artificial intelligence really dangerous at all?
Take the narrator girl as an example. Fang Zheng could easily find the underlying instruction code of the Three Laws of Robotics in her program, and those lines of code proved to him that she was not a living being, merely a robot. Her every move, every frown and every smile, was controlled by the program: she analyzed the scene in front of her, then performed the highest-priority action available to her.
To put it bluntly, what this girl does is, in essence, no different from the working robots on an assembly line, or the NPCs in a game. You choose actions, and it reacts to those actions. Just as in many games, players accumulate kindness or malice values through their actions, and the NPC reacts based on that accumulated data.
For example, you can set it so that when the kindness value reaches a certain level, the NPC may ask more of the player, and the player may find it easier to pass through a certain area. Conversely, when the malice value reaches a certain level, the NPC may be more likely to give in to certain demands of the player, or may bar the player from entering certain areas.
But none of this has anything to do with whether the NPC likes the player; the data is simply set that way, and they have no capacity for judgment in this regard. In other words, if Fang Zheng reversed the range of those values, people would see NPCs smiling at players steeped in evil while turning a blind eye to good and honest ones. That, too, would have nothing to do with the NPCs' moral values, because it is all just data settings.
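The NPC system described above can be sketched in a few lines. This is an illustrative example only, not code from the story; the threshold values and behavior names are invented here.

```python
# A minimal sketch of an NPC whose reactions are driven purely by
# accumulated player data, as the passage describes.
# Thresholds (50) and behavior labels are hypothetical.

def npc_reaction(kindness: int, malice: int) -> str:
    """Pick the NPC's response from accumulated values alone."""
    if malice >= 50:
        return "bar_entry"        # feared player is blocked from the area
    if kindness >= 50:
        return "allow_passage"    # trusted player passes freely
    return "neutral_greeting"     # no strong data accumulated yet

# Reversing the two comparisons would make the NPC favor "evil" players
# instead -- with no change to anything resembling morality. It is only data.
```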
So, returning to the earlier question: Fang Zheng admitted that his first meeting with Hoshino Yumei had been quite dramatic, and the narrator robot girl was indeed very interesting.
But suppose, for example, that the narrator girl presents Fang Zheng with a bouquet made of non-burnable garbage, and Fang Zheng flies into a rage, smashes the garbage bouquet to pieces, and then cuts the robot girl in front of him in half. What would her reaction be?
She would not cry or get angry. According to her programming, she would only apologize to Fang Zheng, concluding that her mistaken actions had displeased the guest. Perhaps she would even ask Fang Zheng to find a staff member to repair her.
Anyone else who saw this scene would surely pity the narrator girl and think Fang Zheng a nasty bully.
So, how does this difference arise?
In essence, this narrator robot is just like an automatic door, an escalator, or any other tool: it completes its work according to a set program. If an automatic door malfunctions and fails to open when it should, or clicks shut just as you walk through, you would never think the door was being stupid; you would just want it open, and if it would not open, you might smash the broken thing and walk away.
If other people saw that scene, they might think the man a bit rough, but they would have no objection to what he did, let alone consider him a bully.
There is only one reason, and that is interactivity and communication.
And this is also the biggest weakness of living things - emotional projection.
Living things project their feelings onto some object and expect it to respond. Why do humans like keeping pets? Because pets respond to everything they do. Call a dog and it runs to you, wagging its tail. A cat may just lie there motionless and ignore you, but when you pet it, it still flicks its tail, and the cuter ones will lick your hand.
But call out to a table, or stroke a nail, and no matter how full of love you are, they are unlikely to give you any response. Because they give no feedback to your emotional projection, you naturally do not take them seriously.
By the same token, if you own a TV and one day want to replace it with a new one, you will not hesitate at all. Price and space may be factors you consider, but the TV itself will not be among them.
But suppose instead that the TV has an artificial intelligence. Every day when you come home, it welcomes you back, tells you what programs are on today, and chats along with you while you watch. And when you decide to buy a new TV, it complains: "What, did I do a bad job? Is that why you don't want me?"
Then you will naturally hesitate over replacing it. Your emotional projection has been rewarded here, and this TV's artificial AI holds the memories of all its time with you. If there is no memory card that can move it to another TV, will you hesitate, or even give up on buying a new one?
You definitely will.
But be sensible, brother: this is just a TV, and everything it does is programmed. All of it is tuning done by the manufacturer's engineers specifically for user retention, to make sure you keep buying their products; that pleading voice exists only to stop you from switching to another brand. When you say you want to buy a new TV, this artificial intelligence is not thinking "He's going to abandon me, how sad," but rather "The owner wants to buy a new TV, and the new TV is not of my own brand; therefore, by this logic, I need to start the 'pleading' routine to maintain the owner's stickiness and loyalty to my brand."
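The "pleading" routine described above amounts to nothing more than a brand check wired to a retention script. The sketch below is invented for illustration; the brand names and messages are hypothetical.

```python
# Illustrative sketch: the TV's "pleading" is a scripted retention
# routine triggered by a simple brand comparison, not sadness.
# OWN_BRAND and all messages are made up for this example.

OWN_BRAND = "ExampleBrand"

def on_owner_intent(new_tv_brand: str) -> str:
    """Decide the TV's response when the owner mentions buying a new TV."""
    if new_tv_brand != OWN_BRAND:
        # Trigger the retention script: keep the owner loyal to the brand.
        return "Did I do a bad job? Is that why you don't want me?"
    # Same brand: loyalty is preserved, no pleading needed.
    return "A new model? I can transfer my settings over for you."
```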
The truth is indeed that truth, and the facts are indeed those facts. But will you accept them?
No, you won't.
Because life has emotions, and the inseparability of sensibility and rationality is a consistent trait of intelligent life.
Human beings will always do many unreasonable things for precisely this reason.
So when they think an AI is pitiful, it is not because the AI really is pitiful, but because they "feel" that it is.
And that is enough. As for the truth, no one cares.
This is why there will always be conflict between humans and AI. There is nothing wrong with the AI itself: everything it does lies within the scope of the programs and logic that humans created and delineated for it. It is only that, in the process, humans' own emotional projection changes, and their minds gradually change with it.
They come to expect the AI to respond more fully to their emotional projection, so they widen the AI's processing range to allow it more emotions, more responses, even self-awareness. They believe the AI has learned emotion (in fact it has not), and that it can therefore no longer be treated as a machine, and so they grant it the right to self-awareness.
Yet when the AIs gain self-awareness and begin to awaken and act according to that very setting, humans begin to fear.
Because they discover they have made something beyond their control.
But the problem is that "beyond control" is itself an instruction they set.
They think the AI has betrayed them, when in fact, from beginning to end, the AI has only acted on the instructions they wrote. There was never any betrayal; they were merely deceived by their own feelings.
This is a dead end.
If Fang Zheng set out to create an AI himself, he might sink into this very trap, unable to pull free. Suppose he created the AI of a little girl: he would surely improve her functions bit by bit, as if raising his own child, and in the end, because of "emotional projection," grant her a measure of "freedom."
In this way, the AI might react in ways completely beyond Fang Zheng's expectations, because its logic is not human logic.
And at that moment, Fang Zheng's only thought would be... that he had been betrayed.
When in fact he would have brought it all on himself.
"...Maybe I should consider another approach."
Looking at the code in front of him, Fang Zheng was silent for a long time, and then sighed.
He used to think this was a very simple matter, but now Fang Zheng was not so sure.
But before that...
Looking at the code in front of him, Fang Zheng stretched out his hand and put it on the keyboard.