Deep Learning
Thanks to Dr. Chan for his talk on deep learning (DL) today.
Personally, I summarized deep learning as: 1. huge input from big data; 2. better computing hardware, i.e., GPUs or ASICs, making deeper and deeper neural networks possible; and 3. improved initialization points for training the model. (Of course, I asked whether or not my summary was correct.)
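As an illustrative sketch of point 3 (my own example, not from the talk): one concrete sense of "improved initialization points" is a scheme such as He initialization, which scales initial weights so that activation magnitudes neither vanish nor explode as the network gets deeper. The naive small-scale baseline below is hypothetical, just for contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 512))  # a batch of 256 inputs, 512 features

def forward(x, n_layers, scale_fn):
    """Push a batch through n_layers random ReLU layers,
    with weights scaled by scale_fn(fan_in)."""
    h = x
    for _ in range(n_layers):
        fan_in = h.shape[1]
        W = rng.standard_normal((fan_in, fan_in)) * scale_fn(fan_in)
        h = np.maximum(h @ W, 0.0)  # ReLU activation
    return h

naive = forward(x, 20, lambda n: 0.01)             # small fixed scale
he    = forward(x, 20, lambda n: np.sqrt(2.0 / n)) # He initialization

# With the naive scale, activations shrink layer by layer and
# effectively vanish; He initialization keeps them at a usable scale.
print(f"naive init: activation std = {naive.std():.3g}")
print(f"He init:    activation std = {he.std():.3g}")
```

Run with 20 layers, the naive initialization's activations collapse toward zero while the He-initialized ones stay near unit scale, which is one reason deeper networks became trainable.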
Deep learning may not be a new idea but rather an inevitable outcome of this trend. However, one can find negative comments on this research direction. Michael Jordan, a master of the machine learning field, argued that this style of model training lacks a solid theoretical foundation.
It is like genetic algorithms: the model is trained and tuned but is hard to explain well. However, applications of deep learning, such as image recognition (used by Google Images, Baidu, and Flickr) and speech recognition (Siri, Google Now, Baidu), have proved DL's usability. Research on DL will undoubtedly continue.