Research Report: Huaan Securities: Artificial Intelligence Industry Event Commentary: AlphaGo Defeats Ke Jie, AI Develops Further

AlphaGo claims another victory; DeepMind is committed to building a general-purpose learning model.

On May 27, 2017, the third and final game of the man-machine match concluded: Ke Jie lost to AlphaGo after 209 moves, fixing the outcome of this second round of the man-versus-machine series at 0:3. DeepMind, the company behind AlphaGo, was founded in London in 2010 by Demis Hassabis, acquired by Google in 2014, and rose to fame in 2016 when AlphaGo defeated world champion Lee Sedol.

DeepMind is currently working to create the world's first general-purpose learning machine, aiming to develop a universal learning model on the foundation of reinforcement learning. Go, the game AlphaGo has now conquered, originated in China more than 3,000 years ago and is played with round black and white stones on a square grid board. Nineteen lines in each direction divide the board into 361 intersections; stones are placed on the intersections, and the two players alternate moves, winning by surrounding territory. The rules of Go are very simple, but its complexity is beyond imagination: the number of possible board configurations is bounded by 3^361, a number with 173 digits.
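The 3^361 figure quoted above is simple to check directly, since each of the 361 intersections can be black, white, or empty. A minimal sketch:

```python
# Upper bound on Go board configurations: each of the 361 intersections
# is in one of 3 states (black, white, or empty), giving 3^361 in total.
positions_upper_bound = 3 ** 361

# Python integers have arbitrary precision, so this value is exact.
print(len(str(positions_upper_bound)))  # prints 173, the number of decimal digits
```

Most of these configurations are not legal positions, so 3^361 is only an upper bound, but it already conveys why exhaustive search of Go is infeasible.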

The theory of artificial intelligence developed through the last century's accumulation of symbolism and connectionism.

Machine learning is the inevitable result of the development of artificial intelligence. From the 1950s to the early 1970s, artificial intelligence was in its 'reasoning period', when it was believed that machines could become intelligent if only they were given the ability to perform logical reasoning. From the mid-1970s, AI research entered the 'knowledge period'; a large number of expert systems emerged during this time and achieved important results in many applications. Turing's 1950 paper introducing the Turing test had already mentioned the possibility of machine learning. In the late 1950s, 'connectionist' learning based on neural networks began to emerge, with representative work including the perceptron and Adaline. In the 1960s and 1970s, 'symbolist' learning techniques began to flourish, with representative work including structural learning systems, logic-based inductive learning systems, and concept-based learning systems.
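The perceptron named above as representative connectionist work can be sketched in a few lines: weights are nudged toward each misclassified example until all training examples are classified correctly. This is a minimal toy illustration (the AND function as made-up training data), not a reconstruction of any system in the report:

```python
# Minimal perceptron learning rule: for each misclassified example,
# move the weights and bias toward the target by a small step.
def train_perceptron(samples, lr=0.1, epochs=50):
    w = [0.0, 0.0]  # weights for the two inputs
    b = 0.0         # bias term
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            delta = target - pred
            if delta != 0:
                errors += 1
                w[0] += lr * delta * x1
                w[1] += lr * delta * x2
                b += lr * delta
        if errors == 0:  # converged: every example classified correctly
            break
    return w, b

# Logical AND is linearly separable, so the rule is guaranteed to converge.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # prints [0, 0, 0, 1]
```

The perceptron's limitation to linearly separable problems (it cannot learn XOR) is precisely what later motivated multi-layer networks.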

In the mid-1990s, 'statistical learning' appeared and quickly became mainstream; its representative techniques are the support vector machine (SVM) and, more generally, 'kernel methods'. Deep learning has since made breakthrough progress in several major areas, including speech recognition, image recognition, and natural language processing.
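The 'kernel method' mentioned above can be illustrated without any library: a kernel function computes an inner product in a higher-dimensional feature space without ever constructing that space explicitly, which is what lets an SVM learn non-linear boundaries cheaply. A minimal sketch with made-up vectors, assuming a degree-2 polynomial kernel:

```python
import math

# Degree-2 polynomial kernel for 2-D inputs: k(x, y) = (x . y)^2.
def poly_kernel(x, y):
    dot = x[0] * y[0] + x[1] * y[1]
    return dot ** 2

# The equivalent explicit feature map into 3-D space:
# phi(x) = (x1^2, x2^2, sqrt(2) * x1 * x2).
def phi(x):
    return (x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1])

x, y = (1.0, 2.0), (3.0, 4.0)
explicit = sum(a * b for a, b in zip(phi(x), phi(y)))
print(poly_kernel(x, y), explicit)  # both equal (1*3 + 2*4)^2 = 121, up to rounding
```

The kernel evaluates one dot product and squares it; the explicit route builds the feature vectors first. For high-degree kernels the feature space grows combinatorially, so the kernel shortcut is what makes the method practical.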

In 2006, Geoffrey Hinton, a professor at the University of Toronto, published an article in Science proposing that an unsupervised layer-by-layer training algorithm based on deep belief networks could offer hope for training deep neural networks. Although it lacked a rigorous theoretical basis, it significantly lowered the threshold for applying machine learning. The deep learning boom has two main causes: first, big data has become easier to accumulate; second, hardware computing power has improved dramatically. The big-data era goes a long way toward solving the model 'over-fitting' problem, while huge models and massive data require high-speed computing equipment to handle. However, as Dr. Qin Tao of Microsoft Research put it, 'big computation' is easy to talk about but hard to do. Baidu trained its neural machine translation system on 32 K40 GPUs for ten days; Google's machine translation system used even more, training on 96 K80 GPUs for six days. AlphaGo's entire training process took roughly 50 CPUs running for four weeks, almost a month. (Source: Huaan Securities; Editor: China Electronic Commerce Research Center)

