source:
https://www.sciencedirect.com/science/article/pii/S1359644617303598#!
Highlights and Excerpts

• Deep learning technology has gained remarkable success.
• We highlight the recent applications of deep learning in drug discovery research.
• Some popular deep learning architectures are introduced in the current study.
• Future development of deep learning in drug discovery is discussed.
A simple illustration of neural networks (NNs). (a) An NN is composed of input, hidden and output layers. (b) The output value of a hidden unit is calculated from the input values via an activation function.
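The hidden-unit computation described in panel (b) can be sketched in a few lines of numpy; the weights, bias and the choice of sigmoid activation below are illustrative assumptions, not values from the article:

```python
import numpy as np

def sigmoid(z):
    # A common choice of activation function
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values: three input units feeding one hidden unit
x = np.array([0.5, -1.2, 3.0])   # input values
w = np.array([0.4, 0.1, -0.2])   # connection weights (assumed)
b = 0.1                          # bias term (assumed)

# Hidden-unit output: weighted sum of inputs passed through the activation
h = sigmoid(np.dot(w, x) + b)
print(round(h, 4))               # 0.3965
```

The same pattern, applied to whole layers at once with matrix multiplication, gives the forward pass of a full network.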
Over the past decade, deep learning has achieved remarkable success in various artificial intelligence research areas. Evolved from earlier research on artificial neural networks, this technology has shown performance superior to that of other machine learning algorithms in areas such as image and voice recognition and natural language processing. The first wave of applications of deep learning in pharmaceutical research has emerged in recent years, and its utility has gone beyond bioactivity predictions, showing promise in addressing diverse problems in drug discovery. Examples will be discussed covering bioactivity prediction, de novo molecular design, synthesis prediction and biological image analysis.
Machine learning has been used in drug discovery since the late 1990s and has established itself as a useful tool there. A recent extension of the machine learning toolbox is DL. In comparison with other methods, DL has a much more flexible architecture, so it is possible to create a NN architecture tailor-made for a specific problem. A disadvantage is that DL in general needs very large training sets. A relevant question is: is DL superior to other machine learning methods? We believe it is still too early to draw any firm conclusion; the results so far indicate that DL is superior for certain tasks, such as image analysis, and very useful for de novo molecular design and reaction prediction.
Architecture of several popular neural networks: (a) fully connected deep neural network (DNN), (b) convolutional neural network (CNN), (c) recurrent neural network (RNN) and (d) autoencoder (AE).
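As a concrete illustration of architecture (a), a fully connected DNN simply stacks affine layers with nonlinearities. A minimal forward pass in numpy follows; the layer sizes, random weights and ReLU choice are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def dense(x, w, b):
    # One fully connected layer: affine transform plus nonlinearity
    return relu(x @ w + b)

# Toy DNN: 4 inputs -> two hidden layers of 8 units -> 2 outputs
sizes = [4, 8, 8, 2]
params = [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=4)            # a single input vector
for w, b in params[:-1]:
    x = dense(x, w, b)            # hidden layers
w, b = params[-1]
out = x @ w + b                   # linear output layer
print(out.shape)                  # (2,)
```

CNNs, RNNs and autoencoders differ from this sketch in how layers are wired (weight sharing across space, recurrence across steps, an encode/decode bottleneck), not in the per-unit computation.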
Structure generation from recurrent neural networks (RNNs). The upper plot shows how the RNN model thinks when generating the structure on the bottom right. The y axis lists all possible tokens that can be chosen at each step, the color represents the conditional probability for each character to be chosen at the current step given the previously chosen characters, and the x axis shows the character that, in this instance, was sampled. The bottom left figure demonstrates how the RNN actually works in structure-generation mode: at each step a character is sampled from the conditional probability distribution calculated by the RNN model, and the generated character is then used as the input for generating the next character.
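The sampling loop the caption describes can be sketched as follows. The toy token set and the hand-made conditional distribution below are invented stand-ins; in the real setting a trained RNN would output the probabilities over SMILES tokens at each step:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy token set standing in for SMILES characters; '$' ends a string
tokens = ['C', 'O', 'N', '(', ')', '$']

def next_token_probs(history):
    # Stand-in for the RNN: a conditional distribution over the next
    # token given the characters generated so far (hand-made here)
    p = np.full(len(tokens), 0.1)
    p[tokens.index('C')] = 0.4        # favour carbon, arbitrarily
    if len(history) > 8:
        p[tokens.index('$')] = 2.0    # push towards termination
    return p / p.sum()

def sample_structure(max_len=20):
    history = []
    for _ in range(max_len):
        # Sample one token from the conditional distribution, then
        # feed it back as input for the next step (autoregression)
        t = rng.choice(tokens, p=next_token_probs(history))
        if t == '$':
            break
        history.append(t)
    return ''.join(history)

s = sample_structure()
print(s)
```

Because each step samples rather than taking the most probable token, repeated calls yield different strings, which is what makes such models useful for generating diverse candidate structures.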