by Joseph Rickert

Last week, I had the opportunity to participate in the Second Academy of Science and Engineering (ASE) Conference on Big Data Science and Computing at Stanford University. Since the conference was held simultaneously with two other conferences, one on Social Computing and the other on Cyber Security, it was definitely not an R crowd, and not even a typical Big Data crowd. Talks from the three programs were intermixed throughout the day, so at any given moment you could find yourself looking for common ground in a conversation with mostly R-aware, but language-impartial, fellow attendees. I don't know whether this method of organization was the desperate result of necessity or a stroke of genius, but I thought it worked out very well and made for a stimulating interaction dynamic. The ASE conference must have been a difficult program to set up; the organizers, however, did a wonderful job of mashing talks and themes together to make for an excellent experience.

There were several very good talks at the conference, but the tutorial on Deep Learning and Natural Language Processing given by Richard Socher was truly outstanding. Richard is a PhD student in Stanford's Computer Science Department studying under Chris Manning and Andrew Ng. Very rarely do you come across such a polished speaker with complete and casual command of complex material. And while the delivery was impressive, the content was jaw-dropping. Richard walked through the Deep Learning methodology and tools being developed in Stanford's AI lab and showed a number of areas where Deep Learning techniques are yielding notable results; for example, a system for single-sentence sentiment detection that improved positive/negative sentence classification by 5.4%. Have a look at Andrew Ng's or Christopher Manning's lists of publications to get a good idea of the outstanding work being done in this area.

A key concept covered in the tutorial is the ability to represent natural language structures, parse trees for example, in a finite-dimensional vector space, and to build the theoretical and software tools in such a way that the same method can be used to deconstruct and represent other hierarchies (a small sketch of this recursive composition idea appears below). The following slide indicates how structures built for Natural Language Processing (NLP) can also be used to represent images. This ability to bring a powerful, integrated set of tools to many different areas seems to be a key reason why neural nets and Deep Learning are suddenly getting so much attention.

In a tutorial similar to the one Richard gave on Saturday, Richard and Chris Manning attribute the recent resurgence of Deep Learning to three factors:

- New methods for unsupervised pre-training: Restricted Boltzmann Machines (RBMs), autoencoders and contrastive estimation
- More efficient parameter estimation methods
- Better understanding of model regularization

The software used in the NLP and Deep Learning work at Stanford seems to be mostly based on Python and C. (See Theano and SENNA, for example.) So far, it does not appear that much Deep Learning work is being done with R. However, things are looking up. 0xdata's H2O Deep Learning implementation is showing impressive results, and this algorithm is available through the h2o R package. Also, the R package darch and the very recent deepnet package, both of which offer implementations of Restricted Boltzmann Machines, indicate that Deep Learning researchers are working in R.
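To make the recursive idea above concrete, here is a minimal R sketch of the core composition step behind recursive neural networks: two child vectors (words or phrases) are merged into a parent vector of the same dimension, so the same operation can be applied all the way up a parse tree. The function name `compose` and the toy dimensions and weights here are my own illustrative choices, not code from the tutorial.

```r
# Merge two child vectors into a parent vector of the same length,
# so composition can be applied recursively up a tree.
compose <- function(left, right, W, b) {
  as.vector(tanh(W %*% c(left, right) + b))
}

d <- 4                                             # toy embedding dimension
W <- matrix(rnorm(d * 2 * d, sd = 0.1), nrow = d)  # d x 2d weight matrix
b <- rep(0, d)

vec_the <- rnorm(d)                                # toy word vectors
vec_cat <- rnorm(d)
phrase  <- compose(vec_the, vec_cat, W, b)         # vector for "the cat"
```

To try H2O's Deep Learning from R, something along the following lines should work. This is only a sketch against the h2o package's h2o.deeplearning function; argument names have shifted across package versions, so check the documentation for the release you install.

```r
library(h2o)
h2o.init()                       # start a local H2O instance

iris_hex <- as.h2o(iris)         # copy a familiar data set into H2O
model <- h2o.deeplearning(
  x = 1:4, y = 5,                # predictor and response columns
  training_frame = iris_hex,
  hidden = c(50, 50),            # two hidden layers of 50 units
  epochs = 10
)
h2o.performance(model)           # training-set metrics
```

Similarly, deepnet exposes a Restricted Boltzmann Machine trainer directly. A minimal sketch, assuming deepnet's rbm.train and rbm.up functions; the binary toy data is, of course, made up.

```r
library(deepnet)

# RBMs with binary visible units expect inputs in [0, 1]
x <- matrix(rbinom(1000 * 10, 1, 0.5), nrow = 1000)

r <- rbm.train(x, hidden = 5, numepochs = 5, batchsize = 100)
h <- rbm.up(r, x)                # hidden-layer activations for x
```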
Finally, to get a quick overview of the area, have a look at the book Deep Learning: Methods and Applications by Li Deng and Dong Yu of Microsoft Research, which is available online.


More Stories By David Smith

David Smith is Vice President of Marketing and Community at Revolution Analytics. He has a long history with the R and statistics communities. After graduating with a degree in Statistics from the University of Adelaide, South Australia, he spent four years researching statistical methodology at Lancaster University in the United Kingdom, where he also developed a number of packages for the S-PLUS statistical modeling environment. He continued his association with S-PLUS at Insightful (now TIBCO Spotfire), overseeing the product management of S-PLUS and other statistical and data mining products.

David Smith is the co-author (with Bill Venables) of the popular tutorial manual An Introduction to R, and one of the originating developers of the ESS (Emacs Speaks Statistics) project. Today, he leads marketing for Revolution R, supports R communities worldwide, and is responsible for the Revolutions blog. Prior to joining Revolution Analytics, he served as vice president of product management at Zynchros, Inc. Follow him on Twitter at @RevoDavid.