Difference between revisions of "Team:Heidelberg/Software/DeeProtein"

 
      {{Heidelberg/templateus/Imagesection|

      https://static.igem.org/mediawiki/2017/7/7f/T--Heidelberg--2017_DeeProtein_NERUALNET.svg |

Figure 6: A forward and backward pass through a neural network. |

A forward pass through a neural network with one hidden layer (left). First, the input \(z\) of each neuron is computed as a weighted sum of the outputs \(y\) of the previous layer (or, in the case of the first hidden layer, of the inputs \(x_{i}\)), with an added trainable bias term. Next, \(z\) is passed through a non-linear function (the activation), yielding the output \(y\) of the layer. The output of the last layer is then compared to the target in order to estimate the error of the model in the loss function \(L\). This initializes the backward pass (right). Here the gradients of the loss function are backpropagated through the network by application of the chain rule. To do so, the error derivative in each unit is calculated with respect to that unit's output, expressed as a weighted sum of the error derivatives with respect to the inputs \(z_{l}\) of the layer above. Multiplication with the gradient of the activation function \(\frac{\partial y}{\partial z}\) converts the gradient with respect to a layer's output into a gradient with respect to that layer's input \(z_{l-1}\). Once the loss is known, the error derivative for each weight can be computed as \(y_{l} \frac{\partial L}{\partial z_{l+1}}\). Subsequently, all weights are updated by their gradient value multiplied with a learning rate, and the next forward pass is performed.}}
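The pass described in the caption can be sketched in NumPy. This is an illustrative toy example with made-up dimensions and a sigmoid activation, not the DeeProtein implementation:

```python
# Minimal sketch of one forward and backward pass through a network with a
# single hidden layer (illustrative only; dimensions and activation are
# assumptions, not taken from DeeProtein).
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 inputs, 3 hidden units, 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

x = rng.normal(size=4)       # input x_i
t = np.array([1.0, 0.0])     # target

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: weighted sum z plus bias, then non-linear activation y.
z1 = x @ W1 + b1
y1 = sigmoid(z1)
z2 = y1 @ W2 + b2
y2 = sigmoid(z2)

L = 0.5 * np.sum((y2 - t) ** 2)   # MSE loss

# Backward pass: the chain rule converts dL/dy into dL/dz via the
# activation gradient dy/dz; the weight gradient is y_l * dL/dz_{l+1}.
dL_dy2 = y2 - t
dL_dz2 = dL_dy2 * y2 * (1 - y2)   # sigmoid'(z) = y(1 - y)
dL_dW2 = np.outer(y1, dL_dz2)
dL_dy1 = W2 @ dL_dz2              # weighted sum of upstream derivatives
dL_dz1 = dL_dy1 * y1 * (1 - y1)
dL_dW1 = np.outer(x, dL_dz1)

# Update every weight by its gradient multiplied with a learning rate.
eta = 0.05
W2 -= eta * dL_dW2; b2 -= eta * dL_dz2
W1 -= eta * dL_dW1; b1 -= eta * dL_dz1
```

After the update, a second forward pass yields a slightly smaller loss, which is exactly the behaviour gradient descent relies on.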
 
    The most common cost or loss functions for classification tasks include the mean squared error (MSE) and the cross entropy (CE):

$$ MSE = \frac{1}{2}(y - \hat{y})^2 $$
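Both losses are easily computed by hand. A small sketch in NumPy, using the MSE formula above and the standard cross-entropy definition \(-\sum_i y_i \log \hat{y}_i\) for one-hot targets (the CE form is the textbook definition, not quoted from this page):

```python
# Evaluating MSE and cross entropy for a single prediction.
# Example values are made up for illustration.
import numpy as np

y_hat = np.array([0.7, 0.2, 0.1])   # predicted class probabilities
y = np.array([1.0, 0.0, 0.0])       # one-hot target

mse = 0.5 * np.sum((y - y_hat) ** 2)   # MSE = 1/2 (y - y_hat)^2, summed
ce = -np.sum(y * np.log(y_hat))        # CE = -sum_i y_i log y_hat_i

print(mse, ce)  # 0.07 and -log(0.7) ≈ 0.357
```

Note that only the probability assigned to the true class enters the cross entropy, whereas the MSE penalizes the error on every output unit.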

Revision as of 00:44, 2 November 2017


DeeProtein
Deep learning for protein sequences
}} }}

References