Artificial intelligence is a black box
Loosely but understandably put, deep learning is the process of labeling large amounts of existing data and letting the system itself work out the relationship between the data and the results (that is, the labels), so that when it faces new data it can make judgments based on the rules it has summarized on its own. For Go, whether from historical games or self-play, AlphaGo knows the board position and knows the outcome (also a label). The system summarizes the rules and, facing a new position, judges the probability of winning. But which characteristics of the data the AI system actually latches onto, and how they relate to the results, not even the engineers who built the AI know.
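The label-then-generalize loop described above can be sketched with a toy supervised-learning example. The "model" here is a trivial 1-nearest-neighbor classifier rather than a neural network, and all data points and labels are invented, but the loop is the same: tag existing data, let the system fit a rule, then judge new data.

```python
# A minimal sketch of supervised learning: label existing data, fit a
# rule, then judge new data by that rule. The model is a trivial
# 1-nearest-neighbor classifier; deep learning swaps in a neural network,
# but the tag-then-generalize loop is the same. Data are invented.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(labeled_data, new_point):
    """Return the label of the closest labeled example."""
    nearest = min(labeled_data, key=lambda item: distance(item[0], new_point))
    return nearest[1]

# Step 1: tag existing data (features -> label).
labeled = [
    ((1.0, 1.0), "win"),
    ((0.9, 1.2), "win"),
    ((5.0, 5.0), "lose"),
    ((5.2, 4.8), "lose"),
]

# Step 2: face new data and judge it by the learned association.
print(predict(labeled, (1.1, 0.9)))  # win
print(predict(labeled, (4.9, 5.1)))  # lose
```

Note that even in this tiny model, the "rule" is implicit in the data rather than written down anywhere, which is exactly the black-box property the article is about.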
So the current artificial intelligence systems are black boxes. We know the AI's judgments are mostly correct, but we don't know why they are correct, or how the judgments are made.
The same is true of AI in search algorithms. Statements from Baidu search engineers are rarely seen; we only know that Baidu is now "All in AI". Google engineers have made it clear that they are not entirely sure how RankBrain works. Under these circumstances, using artificial intelligence extensively in the algorithm is risky: once an abnormal result appears, no one knows the reason, and the system cannot be debugged.
I wrote this post because I saw a New York Times article a few days ago, "Can A.I. Be Taught to Explain Itself?", which is very interesting. A psychologist, Michal Kosinski, fed photos and profile information (including a lot of content, such as sexual orientation) from 200,000 social network accounts (from dating sites) into a facial recognition AI system, and found that the AI could judge sexual orientation with high accuracy from the photos alone. Humans judging whether a person is gay from a photo are right about 60% of the time, only somewhat better than a coin flip, but the AI's accuracy was as high as 91% for men, and lower for women, at 83%.
From photos you can't see tone of voice, posture, daily behaviour, or interpersonal relationships, the kinds of information that help with such a judgment. Is homosexuality purely a feature of appearance? My personal experience is that judging by appearance is not reliable. I once knew a gay male couple; both were very masculine, worked out year-round, and were polite but not at all feminine. You couldn't tell from the outside. Perhaps the AI relies on some characteristic of clothing? Expression? Background? From the photos, the AI sees features that we humans are likely to overlook, or simply cannot see, and achieves 91% accuracy. I don't know; all I know is that the AI gets it right.
An AI that can’t explain itself can’t be trusted
This black-box characteristic sometimes doesn't matter much, as with judging sexual orientation. Sometimes it can't be taken so lightly, as with seeing a doctor. Although AI systems now diagnose certain cancers at the level of human doctors, the final conclusion still has to come from a doctor, especially when the AI can't tell us the reasons for its diagnosis. Unless AI can explain why it reached a diagnosis, trusting it 100% will remain a big psychological hurdle for humans.
Just a few days ago, the Singapore government began testing driverless buses. This is obviously the right direction, and I believe it will become reality in the near future. The accident rate of self-driving cars is lower than that of human drivers, and rationally we all know they are safer. But when I'm crossing the road and the bus stopped beside me has no driver, will I be a little worried, afraid it will suddenly start moving? When I'm driving and the bus in the next lane has no driver, will I be scared and subconsciously steer away from it? At least in the early days, probably. Talking this over with a few friends, we were all rationally convinced but emotionally uneasy.
Earlier programs relied on determinism and causality. For example, which page features in a search algorithm are ranking factors, and how much weight each carries, are picked out and set by engineers. Even if the initial values are somewhat arbitrary, after monitoring results and adjusting parameters, a satisfactory balance can be reached. Artificial intelligence systems do not rely on cause and effect given by engineers; they are better at finding connections through probabilities and correlations. For people, judgments based on probability and correlation are often hard to explain, like deciding by mood, or a vague feeling that something looks right or doesn't.
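The "old-style" deterministic ranking described above can be sketched as a hand-weighted sum: engineers choose the factors and the weights, so every score can be decomposed into explicit, explainable contributions. The factor names and weights below are entirely hypothetical, not any real search engine's formula.

```python
# A sketch of deterministic, engineer-tuned ranking: hand-picked factors,
# hand-assigned weights. Every score decomposes into explainable parts.
# Factor names and weights are hypothetical.

WEIGHTS = {
    "content_quality": 0.4,
    "backlinks": 0.3,
    "page_speed": 0.2,
    "title_match": 0.1,
}

def rank_score(page: dict) -> float:
    """Score a page as a weighted sum of engineer-chosen factors (each 0-1)."""
    return sum(WEIGHTS[f] * page.get(f, 0.0) for f in WEIGHTS)

def explain(page: dict) -> dict:
    """Unlike a black-box model, each factor's contribution is explicit."""
    return {f: WEIGHTS[f] * page.get(f, 0.0) for f in WEIGHTS}

page = {"content_quality": 0.9, "backlinks": 0.5,
        "page_speed": 0.8, "title_match": 1.0}
print(round(rank_score(page), 2))  # 0.77
print(explain(page))
```

When a ranking looks wrong under this scheme, an engineer can read off `explain()` and adjust a weight; a learned model offers no such handle, which is exactly the debugging problem the article describes.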
Asking AI systems to explain their own judgments is not only a psychological issue; it may become an ethical and legal one in the future, as with seeing a doctor. Consider decisions that affect users' interests, such as loans: if an AI decides to reject a loan based on large amounts of data, and the bank cannot explain why, how does it answer to the user? The EU may enact regulations this year requiring that decisions made by machines be explainable, which puts pressure on global companies such as Google and Facebook. In many fields, such as the military, law, and finance, every decision has someone who takes responsibility for it. If a decision cannot be explained, no one will dare to take that responsibility.
Another reason AI needs to explain itself is that, as mentioned earlier, artificial intelligence sees probabilities and correlations, and relying on correlation alone can sometimes lead to serious errors. The New York Times article gives an example. An AI system trained on hospital data assisted emergency room triage, and overall it worked well, but the researchers still didn't dare to actually deploy it, because correlations in the data could mislead the AI into wrong judgments. For example, the data showed that pneumonia patients who also have asthma end up in better condition than average, and this correlation is real. But if the AI system assigned a lower treatment priority to an asthmatic pneumonia patient because of this, the result could be a disaster: those patients did well precisely because they were treated as the highest priority and received the best and fastest care. Sometimes the real cause cannot be seen from the correlation.
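The asthma/pneumonia trap can be illustrated with a toy simulation. The numbers below are invented; the point is only that when a hidden intervention (aggressive treatment) is triggered by a risk factor (asthma), a model reading raw correlations gets the causal story exactly backwards.

```python
# A toy illustration of the correlation trap described above.
# All probabilities are invented for illustration.
import random

random.seed(0)

def simulate_patient(has_asthma: bool) -> bool:
    """Return True if the patient recovers well.

    Asthma itself RAISES the underlying risk, but asthmatic patients are
    fast-tracked to intensive care, which lowers their risk even more.
    """
    risk = 0.30                 # baseline risk of a bad outcome
    if has_asthma:
        risk += 0.20            # asthma makes pneumonia more dangerous...
        risk -= 0.35            # ...but triggers the fastest treatment
    return random.random() > risk

def recovery_rate(has_asthma: bool, n: int = 10_000) -> float:
    return sum(simulate_patient(has_asthma) for _ in range(n)) / n

# Asthmatics recover MORE often in the observed data, so a
# correlation-only model would wrongly rank them as lower risk
# and de-prioritize exactly the patients who need priority care.
print(recovery_rate(True), recovery_rate(False))
```

The correlation in the printed data is real, just as the article says, yet acting on it would remove the very treatment that produced it.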
Explainable artificial intelligence
XAI (Explainable AI) is a field that has only just emerged; its purpose is to make AI explain its own judgments, decisions, and processes. Last year, the US Defense Advanced Research Projects Agency (DARPA) launched an XAI program led by David Gunning. Google is also a leader in this field, and Deep Dream appears to be a by-product of this research.
Back to search algorithms and SEO: one of the reasons search engines cannot yet fully apply artificial intelligence may be that AI judgments cannot be explained or understood. If the algorithm used current artificial intelligence, then once rankings went wrong, the engineers would have no way of knowing why, let alone how to adjust.
I think autonomous driving is one of the first areas where AI has been put to practical use, and that has something to do with explainability. Most of a self-driving car's decisions don't need much explanation, or the explanation is obvious at a glance: if you're too close to the car ahead, slow down or brake. That kind of judgment requires no further explanation.
SEOs probably have the same doubts. A competitor's pages look nothing special: the content isn't great, the visual design is ordinary, the external links are unremarkable, the on-page optimization is much the same, so why do they rank so well? With current search algorithms the reasons can still be investigated, and search engineers presumably have internal tools for checking whether rankings are reasonable. If a search engineer sees a bad page ranking at the top but doesn't know why and can't find out, they would probably be anxious too.
XAI research has only just begun, which gives SEOs a final buffer period. Judging from how AI systems have crushed humans in other fields, once AI is applied to search at scale, cheating and black-hat SEO may become things of the past, and much of today's routine SEO work may become insignificant. SEOs will need to return to the essence of a website: providing useful information or products. There is no other way.