DeeProtein
Deep learning for protein sequences
Introduction
The idea of applying a stack of layers composed of computational nodes to estimate complex functions originates in the 1960s.
Background
Artificial neural networks are powerful function approximators, able to untangle complex relations in the input space.
Protein representation learning
The protein space is extremely complex. The amino acid alphabet comprises 20 canonical letters and an average protein has a length of around 500 residues, making the combinatorial complexity of the space tremendous. Comparable to images, however, functional protein sequences reside on a thin manifold within the total sequence space. Learning the properties of the protein distribution for a certain functionality would enable not only a decent classification of sequences into functions, but also unlimited sampling from this distribution, resulting in de novo protein sequence generation. Attempts at protein sequence classification have been made with CNNs, and handcrafted feature extractors for protein sequences also exist.
Network Architecture
To harness the strengths of convolutional networks in representation learning and feature extraction, we implemented a fully convolutional architecture, based on the ResNet architecture, to classify protein sequences by function. We define a residual block as a set of two convolutional layers followed by a convolutional layer with kernel size 1 to squeeze the channels.
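A minimal sketch of such a residual block in TensorFlow/Keras could look as follows; the use of 1D convolutions over the sequence axis, the filter count, the kernel size and the identity shortcut are assumptions for illustration rather than the exact DeeProtein block:

import tensorflow as tf

def residual_block(x, filters=64, kernel_size=3):
    # Two convolutional layers, followed by a kernel-size-1 convolution that
    # squeezes the channels back to the input depth (placeholder hyperparameters).
    y = tf.keras.layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    y = tf.keras.layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(y)
    y = tf.keras.layers.Conv1D(x.shape[-1], 1, padding="same")(y)  # channel squeeze
    # Assumed ResNet-style identity shortcut around the block.
    return tf.keras.layers.Activation("relu")(tf.keras.layers.Add()([x, y]))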
Label selection
Function labels were defined by the gene ontology (GO) annotation <x-ref>gene2004gene</x-ref>. The gene ontology is hierarchical and best described as a directed acyclic graph (DAG). It contains labels providing information on the cellular location, pathway and molecular function of a particular protein. As we were interested solely in protein function classification, we only considered GO-labels in the molecular function sub-DAG, which has up to 12 levels and 11135 GO-terms. The population of the terms varies greatly and strongly depends on a term's level in the DAG, with terms towards the root being more densely populated than leaf terms. We therefore thresholded the considered labels by their minimum population, ending up with a set of 1509 GO-terms that each have at least 50 samples in the manually annotated SwissProt database <x-ref>apweiler2004uniprot</x-ref>. Because the hierarchy in the DAG is fully inferable towards the root term, classification by leaf terms is sufficient for a comprehensive classification, so we further reduced the set of considered labels to all leaf nodes of the 1509-term sub-DAG. Our final set of classes comprised 886 GO-terms from all levels of the DAG.
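This selection step can be sketched as follows, assuming a hypothetical dictionary term_counts that maps each molecular function GO-term to its number of annotated SwissProt sequences and a dictionary parents that encodes the DAG:

def select_leaf_terms(term_counts, parents, min_population=50):
    # Keep only terms with at least `min_population` annotated sequences.
    populated = {t for t, n in term_counts.items() if n >= min_population}
    # Any retained term that is a parent of another retained term is not a leaf.
    internal = {p for t in populated for p in parents.get(t, ()) if p in populated}
    return populated - internal  # leaf terms of the thresholded sub-DAG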
Data preprocessing
In order to convert the protein sequences into a machine-readable format, we preprocessed the whole UniProt database (release 08/17) as well as the SwissProt database (release 08/17) <x-ref>apweiler2004uniprot</x-ref>. For the classification task over 886 GO-labels we generated datasets containing 180774 sequences from SwissProt and ~7 million sequences from UniProt, respectively. During data preprocessing the full GO-annotation was inferred through the DAG from the annotated GO-terms, to facilitate the subsequent preprocessing steps. Sequences were then filtered for a minimum length of 175 amino acids, and sequences containing non-canonical amino acids were excluded.
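These two steps could be sketched as below, reusing the hypothetical parents dictionary from above; the helper names and the canonical alphabet constant are assumptions:

CANONICAL_AA = set("ACDEFGHIKLMNPQRSTVWY")

def infer_full_annotation(annotated_terms, parents):
    # Propagate every annotated GO-term up the DAG and collect all of its ancestors.
    full, stack = set(), list(annotated_terms)
    while stack:
        term = stack.pop()
        if term not in full:
            full.add(term)
            stack.extend(parents.get(term, ()))
    return full

def passes_filter(sequence, min_length=175):
    # Keep sequences of at least 175 residues containing only canonical amino acids.
    return len(sequence) >= min_length and set(sequence) <= CANONICAL_AA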
To ensure a random distribution of sequences over the validation and training sets for each label, while accounting for the extreme class imbalance among the GO-terms, the validation set was created on the fly during generation of the training set. This was done by randomly sampling sequences from the preprocessing streams, ensuring that the validation set contained at least 5 sequences per GO-term. Training and validation sets were mutually exclusive.
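A rough sketch of such an on-the-fly split is given below; the record format, sampling rate and seed are placeholders and not the exact pipeline settings:

import random
from collections import defaultdict

def split_stream(records, min_per_term=5, val_rate=0.02, seed=42):
    # records: iterable of (sequence, go_terms) pairs from the preprocessing stream.
    rng = random.Random(seed)
    val_counts = defaultdict(int)
    train, valid = [], []
    for sequence, terms in records:
        underrepresented = any(val_counts[t] < min_per_term for t in terms)
        if underrepresented or rng.random() < val_rate:
            valid.append((sequence, terms))
            for t in terms:
                val_counts[t] += 1
        else:
            train.append((sequence, terms))
    return train, valid  # mutually exclusive by construction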
Prior to training, the generated datasets were converted to a binary file format to speed up the input streams feeding the GPUs. Sequences were one-hot encoded and clipped or zero-padded to a window of 1000 residues. The labels were also one-hot encoded. In the UniProt dataset the average sequence had 1.3 labels assigned.
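For illustration, the encoding step could look like this; the alphabet ordering and the label_index mapping are assumptions:

import numpy as np

AA_ORDER = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AA_ORDER)}

def encode_sequence(sequence, window=1000):
    # One-hot encode and clip or zero-pad the sequence to a fixed 1000-residue window.
    x = np.zeros((window, len(AA_ORDER)), dtype=np.float32)
    for pos, aa in enumerate(sequence[:window]):
        x[pos, AA_INDEX[aa]] = 1.0
    return x

def encode_labels(go_terms, label_index):
    # Multi-hot label vector over the 886 selected GO-terms (1.3 labels per sequence on average).
    y = np.zeros(len(label_index), dtype=np.float32)
    for term in go_terms:
        if term in label_index:
            y[label_index[term]] = 1.0
    return y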
Results
The performance of the network was assessed on an exclusive validation set of 4425 sequences. For each GO-label the validation set contained at least 5 distinct samples. Our model achieved an area under the curve (AUC) for the receiver operating characteristic (ROC) of 99.8% with an average F1 score of 78% (Figure 3).
Wet Lab Validation
To assess the value of DeeProtein in a sequence-activity evaluation context, we validated the correlation between the DeeProtein classification score and enzyme activity in the wet lab. First we predicted a set of 25 single and double mutant beta-lactamase variants with scores both higher and lower than the wildtype, and subsequently assessed their activity in the wet lab. To derive a measure for enzyme activity, we investigated the minimum inhibitory concentration (MIC) of Carbenicillin for all predicted mutants. The MIC was determined by OD600 measurement in Carbenicillin-containing media. As the OD was measured in a 96-well plate, the values are not absolute. From these measurements the MIC-score was calculated as the first Carbenicillin concentration at which the OD fell below a threshold of 0.08. The classification scores were then averaged for each MIC-score and plotted against the Carbenicillin concentration (Figure 2).
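The MIC-score calculation reduces to a simple threshold scan over the dilution series; a sketch, assuming the concentrations are listed in ascending order:

def mic_score(concentrations, od600_readings, threshold=0.08):
    # The MIC-score is the first Carbenicillin concentration at which the OD600
    # reading falls below the 0.08 threshold.
    for concentration, od in zip(concentrations, od600_readings):
        if od < threshold:
            return concentration
    return None  # growth was not inhibited within the tested range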
Protein Sequence Embedding
A protein representation first described by Asgari et al. is prot2vec. We intended to find an optimized word2vec approach for fast, reproducible and simple protein sequence embedding. Therefore we applied a word2vec model to the k-mers of protein sequences.
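A minimal sketch of this kind of k-mer embedding with gensim is shown below; the toy sequences, the overlapping 3-mers and all hyperparameters are placeholders rather than the exact settings used here:

from gensim.models import Word2Vec

def kmerize(sequence, k=3):
    # Split a protein sequence into overlapping k-mers, the "words" for word2vec.
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

sequences = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MSTNPKPQRKTKRNTNRRPQDVKFPGG"]  # toy input
corpus = [kmerize(s) for s in sequences]
model = Word2Vec(corpus, vector_size=100, window=25, min_count=1, sg=1, workers=4)
embedding = model.wv["MKT"]  # 100-dimensional vector for one 3-mer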
Results
We visualized our 100-dimensional embedding through PCA dimensionality reduction as shown in Fig 5. Highlighted in turn are all k-mers containing a certain amino acid. Clear clusters can be observed for the amino acids Cysteine (top right corner), Lysine (top left corner), Tryptophan (center right), Glutamate (center left), Proline (bottom) and Arginine (center), even after dimensionality reduction. In contrast, amino acids like Glycine, Serine and Valine are distributed over the whole k-mer space.
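The visualization itself can be reproduced along these lines, assuming the word2vec model from the sketch above; highlighting the cysteine-containing k-mers is just one example:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

kmers = list(model.wv.index_to_key)                  # all k-mers in the vocabulary
vectors = np.array([model.wv[k] for k in kmers])     # their 100-dimensional embeddings
coords = PCA(n_components=2).fit_transform(vectors)  # project onto two components
contains_cys = np.array(["C" in k for k in kmers])
plt.scatter(coords[~contains_cys, 0], coords[~contains_cys, 1], s=3, c="lightgrey")
plt.scatter(coords[contains_cys, 0], coords[contains_cys, 1], s=3, c="crimson")
plt.show()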
Application
Our k-mer embedding provides a solid basis for future research exploring the protein space. The embedding may also be applied in classification, as demonstrated in previous work.
Neural Networks 101
1. Neural Network Basics
The idea of neural networks originates in the late 1950s, when Rosenblatt first described the perceptron as the functional unit of neural networks. In general, a neural network can be seen as a function approximator mapping an input to an output; in the case of a classifier, for instance:
$$y = f(x)$$ In contrast to a "hand-designed" classification algorithm, a neural network learns the parameters \(\theta\) required for a successful classification: $$y = f(x, \theta)$$ Considering the chain-like structure of neural networks: $$y = f_3(f_2(f_1(x, \theta_1), \theta_2), \theta_3)$$ A single neuron consists of a linear mapping followed by a non-linear activation function: $$z = w x + b, \qquad y = a(z)$$ In the context of a layer with multiple neurons and an input vector \(x\): $$z_j = \sum_i w_{ij} x_i + b_j, \qquad y_j = a(z_j)$$ The weights \(w\) and biases \(b\) are the trainable parameters of the layer.
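In code, the layer equations above amount to a matrix multiplication followed by an element-wise non-linearity; a minimal numpy sketch with arbitrary dimensions and activation:

import numpy as np

def dense_layer(x, W, b, activation=np.tanh):
    # z_j = sum_i w_ij * x_i + b_j, followed by y_j = a(z_j)
    return activation(x @ W + b)

x = np.random.randn(4)     # input vector
W = np.random.randn(4, 3)  # trainable weights w_ij
b = np.zeros(3)            # trainable biases b_j
y = dense_layer(x, W, b)   # output of a layer with three neurons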
2. Training - Backpropagation
Like other machine learning models, neural networks are trained with gradient-based methods.
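As a toy illustration of the idea, the loop below fits a single linear neuron to one target value by gradient descent on a squared loss; the numbers are arbitrary:

# Fit y = w*x + b to a single target by gradient descent.
w, b, lr = 0.0, 0.0, 0.1
x, target = 2.0, 3.0
for step in range(100):
    y = w * x + b               # forward pass
    dL_dy = 2.0 * (y - target)  # gradient of the squared loss (y - target)**2
    w -= lr * dL_dy * x         # chain rule: dL/dw = dL/dy * dy/dw
    b -= lr * dL_dy             # dL/db = dL/dy * 1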
Normal vs. Convolutional Nets
A convolutional neural network (CNN) is a special neural network architecture first described by Yann LeCun. A convolution is defined as: $$ s(t)=\int x(a)w(t-a) da $$ where the input function \(x\) is smoothed by the weighting function (or kernel) \(w\), leading to the output \(s\). Typically a convolution is denoted with an asterisk: $$ s(t)=(x \ast w)(t) $$ Applied to a two-dimensional discrete input, a convolution is described as: $$ S(i,j) = \sum_m \sum_n I(i-m,j-n)K(m,n) $$
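The two-dimensional discrete convolution above can be written out directly in numpy; this naive sketch computes only the valid output region:

import numpy as np

def conv2d(I, K):
    # S(i,j) = sum_m sum_n I(i-m, j-n) * K(m, n), restricted to the valid region;
    # flipping the kernel turns the patch-wise product into a true convolution.
    kh, kw = K.shape
    out_h, out_w = I.shape[0] - kh + 1, I.shape[1] - kw + 1
    S = np.zeros((out_h, out_w))
    K_flipped = K[::-1, ::-1]
    for i in range(out_h):
        for j in range(out_w):
            S[i, j] = np.sum(I[i:i + kh, j:j + kw] * K_flipped)
    return S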
A ConvNet in abstract representation could look like this:
$$\text{Input} \rightarrow \text{Conv1} \rightarrow \text{Conv2} \rightarrow \text{Conv3} \rightarrow \text{Fully Connected} \rightarrow \text{Fully Connected} \rightarrow \text{Output}$$
The information is propagated from the input through the convolutional layers and fully connected layers to generate an output. By repeated application of convolutions with different kernel sizes, the spatial size of the image is reduced as the information proceeds through the network. The kernels are modular and optimized during the training process. The result is a trainable feature extractor (Conv1-Conv3) whose output is then evaluated by a small fully connected neural network. Another advantage is that CNNs rely on parameter sharing: as a kernel is usually much smaller than the image it is applied to, and the same kernel is applied across the whole image, the number of parameters that need to be stored is greatly reduced compared to conventional neural networks that rely on general matrix multiplication.
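For completeness, the abstract network above could be written down in Keras roughly as follows; the input shape, filter counts and number of output classes are placeholders:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 5, activation="relu", input_shape=(64, 64, 1)),  # Conv1
    tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu"),                # Conv2
    tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),                # Conv3
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),    # Fully Connected
    tf.keras.layers.Dense(10, activation="softmax"),  # Fully Connected -> Output
])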
More Resources
We provide a brief overview of what a neural network does in our neuralnetworks101. For a comprehensive explanation please rely on these resources:
- colah's Blog provides decent explanations of virtually all neural network architectures
- Adit Deshpande's Blog is a very good resource for introductory explanations and overviews on applications
- The Deep Learning Book is a comprehensive work by three of the fields most prominent figures
Practical tips are given in these resources:
- Andrew Ng's tips on practical implementation
- Comprehensive TensorFlow tutorials by Hvass Labs