20210401-DeepPurpose

A Deep Learning Library for Compound and Protein Modeling: DTI, Drug Property, PPI, DDI, and Protein Function Prediction

Paper link

GitHub: DeepPurpose

A general-purpose framework developed by the authors for drug and protein function prediction.

I. Data

Drug: ['Cc1ccc(CNS(=O)(=O)c2ccc(s2)S(N)(=O)=O)cc1', ...]
Protein: ['MSHHWGYGKHNGPEHWHKDFPIAKGERQSPVDIDTH...', ...]
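
Drugs are represented as SMILES strings and proteins as raw amino acid sequences. Below is a minimal sketch of feeding such lists into the library's preprocessing step, following the usage pattern of the DeepPurpose README (the function name, encoder strings, and keyword arguments are recalled from that README and should be checked against the installed version):

```python
from DeepPurpose import utils

# Paired toy inputs: drug SMILES, protein sequences, interaction labels.
# A real dataset would contain thousands of such pairs.
X_drugs = ['Cc1ccc(CNS(=O)(=O)c2ccc(s2)S(N)(=O)=O)cc1']
X_targets = ['MSHHWGYGKHNGPEHWHKDFPIAKGERQSPVDIDTH']
y = [1]

# Encode each drug/protein with the chosen encoders and split the data
# into train/validation/test frames.
train, val, test = utils.data_process(
    X_drugs, X_targets, y,
    drug_encoding='Morgan', target_encoding='AAC',
    split_method='random', frac=[0.7, 0.1, 0.2],
)
```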

II. Biological problems addressed:
  1. DTI (Drug-Target Interaction): predict whether a drug and a protein target interact.
  2. Drug Property Prediction: predict a drug's properties; this can be framed as either a regression or a classification problem.
  3. DDI (Drug-Drug Interaction Prediction)
  4. Protein-Protein Interaction Prediction
  5. Protein Function Prediction
  6. Drug repurposing, for example:
    Antiviral Drugs Repurposing for SARS-CoV2 3CLPro: given a new target sequence (e.g. the SARS-CoV2 3CL protease), retrieve a list of repurposing candidates from a curated library of 81 antiviral drugs.
    Given a new target sequence (e.g. the SARS-CoV 3CL protease), train on new data (the AID1706 bioassay), and then retrieve a list of repurposing candidates from a proprietary library (e.g. antiviral drugs). The model can be trained from scratch or fine-tuned from a pretraining checkpoint.
    The authors provide demo data and usage examples for each of the problems above; a minimal repurposing call is sketched below.
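
A minimal sketch of that one-liner repurposing demo, adapted from the DeepPurpose README (the loader names and the no_cid flag are recalled from the README and may differ across versions):

```python
from DeepPurpose import oneliner
from DeepPurpose.dataset import load_SARS_CoV2_Protease_3CL, load_antiviral_drugs

# Score the 81 curated antiviral drugs against the SARS-CoV2 3CL protease
# target with pretrained DTI models and produce a ranked candidate list.
oneliner.repurpose(*load_SARS_CoV2_Protease_3CL(),
                   *load_antiviral_drugs(no_cid=True))
```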
III. Model
[Figure 1 of the paper (m_btaa1005f1.png): overview of the DeepPurpose framework.]
(1) Data encoding:
  • Drug: eight encoders
  1. Multi-Layer Perceptron (MLP) on Morgan fingerprint;

Morgan Fingerprint [1] is a 1024-length bit vector that encodes circular radius-2 substructures. A multi-layer perceptron is then applied to the binary fingerprint vector (see the standalone sketch after this list).

  2. PubChem;

PubChem [2] is an 881-length bit vector, where each bit corresponds to a hand-crafted important substructure. A multi-layer perceptron is then applied on top of the vector.

  3. Daylight;

Daylight is a 2048-length vector that encodes path-based substructures. A multi-layer perceptron is then applied on top of the vector.

  4. RDKit 2D Fingerprint;

RDKit-2D is a 200-length vector of global pharmacophore descriptors. The features are normalized to a common scale using a cumulative density function fit on a sample of molecules.

  5. Convolutional Neural Network (CNN) on SMILES strings;

CNN [3] is a multi-layer 1D convolutional neural network. The SMILES characters are first encoded with an embedding layer and then fed into the CNN convolutions. A global max pooling layer is then attached, and a latent vector describing the compound is generated.

  6. Recurrent Neural Network (RNN) on top of CNN;

CNN+RNN [4,5] attaches a bidirectional recurrent neural network (GRU or LSTM) on top of the 1D CNN output to capture the more global, sequential structure of the compound. The input is again the SMILES character embedding.

  7. Transformer encoder on substructure fingerprints;

Transformer [6] uses a self-attention-based transformer encoder that operates on the substructure partition fingerprint.

  8. Message-passing graph neural network on the molecular graph.

MPNN [8] is a message-passing graph neural network that operates on the compound's molecular graph. It transmits latent information among the atoms and edges, where the input features incorporate atom/edge-level chemical descriptors and the connection message. After obtaining an embedding vector for each atom and edge, a readout function (mean/sum) is used to obtain a graph-level (molecular) embedding vector.
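
To make the fingerprint encoders above concrete, here is a standalone sketch using plain RDKit (an illustration of the idea, not DeepPurpose's internal featurizer) that computes the 1024-bit radius-2 Morgan fingerprint from item 1; the MLP encoder consumes exactly this kind of binary vector:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = 'Cc1ccc(CNS(=O)(=O)c2ccc(s2)S(N)(=O)=O)cc1'
mol = Chem.MolFromSmiles(smiles)

# 1024-bit Morgan fingerprint with radius 2: each set bit flags the
# presence of a circular substructure centered on some atom.
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
bits = list(fp)  # 0/1 vector of length 1024, the MLP's input
print(sum(bits), 'bits set out of', len(bits))
```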

  • Protein: seven encoders
  1. AAC

is an 8,420-length vector where each position corresponds to an amino acid k-mer, with k up to 3 (20 + 20² + 20³ = 8,420 over the 20 standard amino acids; see the counting sketch after this list).

  2. PseAAC

includes protein hydrophobicity and hydrophilicity pattern information in addition to the amino acid composition.

  3. Conjoint Triad

uses the frequency distribution of contiguous three-amino-acid triads over a hand-crafted 7-letter alphabet (the 20 amino acids grouped into seven classes).

  4. Quasi Sequence

accounts for the sequence-order effect using a set of sequence-order-coupling numbers.

  5. CNN

is a multi-layer 1D convolutional neural network. The target amino acid sequence is decomposed into individual characters, encoded with an embedding layer, and then fed into the CNN convolutions, followed by a global max pooling layer.

  6. CNN+RNN

attaches a bidirectional recurrent neural network (GRU or LSTM) on top of the 1D CNN output to leverage sequence-order information.

  7. Transformer

uses a self-attention-based transformer encoder that operates on the substructure partition fingerprint [7] of proteins. Since a transformer's computation time and memory are quadratic in the input length, it is computationally infeasible to treat each amino acid symbol as a token. The partition fingerprint instead decomposes the amino acid sequence into moderately sized protein substructures such as motifs; each partition is then treated as a token and fed into the model.
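
As a from-scratch illustration of the AAC counting in item 1 (an assumption-level sketch, not DeepPurpose's implementation), enumerating all 1-, 2-, and 3-mers over the 20-letter amino acid alphabet gives exactly 20 + 400 + 8,000 = 8,420 dimensions:

```python
from itertools import product

AMINO_ACIDS = 'ACDEFGHIKLMNPQRSTVWY'  # 20 standard residues

def aac_kmer_vector(seq: str, max_k: int = 3) -> list[float]:
    """Normalized counts of every k-mer for k = 1..max_k (8,420 dims for max_k=3)."""
    vec = []
    for k in range(1, max_k + 1):
        # One slot per possible k-mer, in a fixed (sorted) order.
        counts = {''.join(p): 0 for p in product(AMINO_ACIDS, repeat=k)}
        n = max(len(seq) - k + 1, 1)  # number of k-mer windows in the sequence
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in counts:
                counts[kmer] += 1
        vec.extend(counts[km] / n for km in sorted(counts))
    return vec

v = aac_kmer_vector('MSHHWGYGKHNGPEHWHKDFPIAKGERQSPVDIDTH')
print(len(v))  # 8420
```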

(2) Loss

For binding affinity score prediction (a regression task), the library uses a mean squared error (MSE) loss.
For binary interaction prediction (a classification task), it uses a binary cross-entropy (BCE) loss.
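
Putting the encoders and losses together, here is a minimal end-to-end DTI training sketch following the README's usage pattern (the DAVIS loader, config keys, and hyperparameter values are assumptions recalled from that README):

```python
from DeepPurpose import utils, dataset
from DeepPurpose import DTI as models

# Load a benchmark DTI dataset; binary=False keeps continuous binding
# affinities, so training uses the MSE loss described above.
X_drugs, X_targets, y = dataset.load_process_DAVIS(path='./data', binary=False)

# Pick one of the eight drug encoders and one of the seven protein encoders.
drug_encoding, target_encoding = 'MPNN', 'CNN'
train, val, test = utils.data_process(
    X_drugs, X_targets, y,
    drug_encoding, target_encoding,
    split_method='random', frac=[0.7, 0.1, 0.2],
)

# One config object wires the two encoders to the prediction head.
config = utils.generate_config(
    drug_encoding=drug_encoding,
    target_encoding=target_encoding,
    train_epoch=5,
    LR=1e-3,
    batch_size=128,
)
model = models.model_initialize(**config)
model.train(train, val, test)
```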
