Notebook
Weekly reports on all project parts
In a nutshell
With Interactive Modelling, iGEM Heidelberg provides a comprehensive set of tools that not only facilitate the implementation of PACE but also give an intuitive understanding of the underlying mechanisms. Controlling highly complex processes such as PACE or PALE in a near-ideal way makes it possible to exploit as much of their potential as possible. The most important parameters were determined and examined with ODE systems, solved analytically or numerically, and with [stochastic and distributional] models. As far as possible, the models are available online to make them accessible to anyone interested. Where useful, a [tool for comparison of experimental data and the model] is available. In addition, Interactive Modelling helps to monitor parameters that cannot easily be interpreted from raw data, such as [], and combines different parameters into useful statements about an experiment.
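To picture the ODE-based part of this toolbox, the sketch below models the phage titer in a continuously diluted PACE lagoon as replication minus washout and solves it numerically with SciPy. This is a minimal, hypothetical illustration: the rate constants `r` and `phi` are made up for the example and it is not one of the actual project models.

```python
# Minimal sketch (hypothetical parameters): phage titer P in a PACE lagoon,
# modelled as dP/dt = (replication - washout) * P, solved with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

r = 2.3    # assumed effective phage replication rate [1/h]
phi = 1.6  # assumed lagoon flow (dilution) rate [1/h]

def lagoon(t, p):
    # net change: phage replicate at rate r and are washed out at rate phi
    return (r - phi) * p

sol = solve_ivp(lagoon, t_span=(0.0, 12.0), y0=[1e6],
                t_eval=np.linspace(0.0, 12.0, 50))
print(sol.y[0][-1])  # titer after 12 h; the population persists iff r > phi
```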
Optopace
No entry for this subproject this week.

Software
KW34
====

Word2Vec Embeddings on Protein Sequences
----------------------------------------

We rewrote a word2vec implementation from the TensorFlow tutorials that implements "Efficient Estimation of Word Representations in Vector Space", ICLR 2013 (Mikolov et al.). The model is a skip-gram model with negative sampling that uses custom ops written in C. The code was adapted to our needs, mainly by changing data types in the C kernels and by writing a different evaluation function that predicts the nearest words to the most frequent words instead of using analogies. Two new datasets were generated, based on SwissProt and UniProt respectively. Training of 4-mer embeddings in 50, 100 and 200 dimensions was started but has not finished yet. The first checkpoints can be visualised via TensorBoard: [Visualisation of an example embedding via TensorBoard](170820ai-vistestemb).
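The custom TensorFlow/C implementation is not reproduced here, but the underlying idea can be sketched with gensim's off-the-shelf Word2Vec: sequences are split into overlapping k-mers that are treated as "words" and embedded with a skip-gram model with negative sampling. The sequences and parameters below are illustrative only (gensim >= 4 API).

```python
# A minimal sketch of k-mer skip-gram embeddings with gensim,
# standing in for the custom TensorFlow/C implementation described above.
from gensim.models import Word2Vec

def kmerize(seq, k=4):
    # "MKTAYIA..." -> ["MKTA", "KTAY", "TAYI", ...]
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# toy corpus; in practice this would be the SwissProt/UniProt datasets
sequences = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
             "MSDNGPQNQRNAPRITFGGPSDSTGSNQNGERS"]
corpus = [kmerize(s, k=4) for s in sequences]

model = Word2Vec(corpus, vector_size=100, window=5,
                 sg=1, negative=5, min_count=1)  # sg=1: skip-gram w/ negative sampling
print(model.wv.most_similar("MKTA", topn=3))     # nearest k-mers in embedding space
```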
Implementation of the SqueezeNet Architecture
---------------------------------------------

With the implementation of a new architecture based on SqueezeNet (Iandola et al., 2017), relying on 1x1 convolutions, we were able to handle the 299-classes as well as the 637-classes dataset. The new model architecture looks as follows:

- InputLayer model_valid/input_layer_valid: (64, 20, 1000, 1)
- PadLayer model_valid/block1/pad_layer_valid: paddings:[[0, 0], [0, 0], [3, 3], [0, 0]] mode:CONSTANT
- Conv2dLayer model_valid/block1/cnn_layer_valid: shape:[20, 7, 1, 128] strides:[1, 5, 1, 1] pad:VALID act:prelu
- Conv1dLayer model_valid/block2/cnn_layer_valid: shape:[6, 128, 128] stride:1 pad:SAME act:prelu
- Conv1dLayer model_valid/1x1_I/1x1_valid: shape:[1, 128, 64] stride:1 pad:SAME act:prelu
- BatchNormLayer model_valid/1x1_I/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/block3/cnn_layer_valid: shape:[5, 64, 256] stride:1 pad:SAME act:prelu
- PoolLayer model_valid/block3/pool_layer_valid: ksize:[2] strides:[2] padding:VALID pool:pool
- BatchNormLayer model_valid/block3/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/block4/cnn_layer_valid: shape:[5, 256, 256] stride:1 pad:SAME act:prelu
- PoolLayer model_valid/block4/pool_layer_valid: ksize:[2] strides:[2] padding:VALID pool:pool
- BatchNormLayer model_valid/block4/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/1x1_II/1x1_valid: shape:[1, 256, 128] stride:1 pad:SAME act:prelu
- BatchNormLayer model_valid/1x1_II/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/block5/cnn_layer_valid: shape:[5, 128, 256] stride:1 pad:SAME act:prelu
- PoolLayer model_valid/block5/pool_layer_valid: ksize:[2] strides:[2] padding:VALID pool:pool
- BatchNormLayer model_valid/block5/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/block6/cnn_layer_valid: shape:[5, 256, 512] stride:1 pad:SAME act:prelu
- PoolLayer model_valid/block6/pool_layer_valid: ksize:[2] strides:[2] padding:VALID pool:pool
- BatchNormLayer model_valid/block6/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/1x1_III/1x1_valid: shape:[1, 512, 256] stride:1 pad:SAME act:prelu
- BatchNormLayer model_valid/1x1_III/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/block7/cnn_layer_valid: shape:[5, 256, 516] stride:1 pad:SAME act:prelu
- PoolLayer model_valid/block7/pool_layer_valid: ksize:[2] strides:[2] padding:VALID pool:pool
- BatchNormLayer model_valid/block7/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/block8/cnn_layer_valid: shape:[5, 516, 1024] stride:1 pad:SAME act:prelu
- PoolLayer model_valid/block8/pool_layer_valid: ksize:[2] strides:[2] padding:VALID pool:pool
- BatchNormLayer model_valid/block8/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/1x1_IV/cnn_layer_valid: shape:[1, 1024, 512] stride:1 pad:SAME act:prelu
- BatchNormLayer model_valid/1x1_IV/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/block9/cnn_layer_valid: shape:[5, 512, 1024] stride:1 pad:SAME act:prelu
- PoolLayer model_valid/block9/pool_layer_valid: ksize:[2] strides:[2] padding:VALID pool:pool
- BatchNormLayer model_valid/block9/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- Conv1dLayer model_valid/outlayer/cnn_layer_valid: shape:[1, 1024, 637] stride:1 pad:SAME act:prelu
- BatchNormLayer model_valid/outlayer/batchnorm_layer_valid: decay:0.900000 epsilon:0.000010 act:identity is_train:False
- MeanPool1d global_avg_pool: filter_size:[7] strides:1 padding:valid

The architecture is fully convolutional, ending in an average pooling layer as output layer, with the channels dimension corresponding to the number of classes. All inputs were one-hot encoded and zero-padded to a box size of 1000 positions.
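The listing above is the TensorLayer network printout. As a rough illustration of the same pattern (a strided first convolution over one-hot sequence input, 5-wide convolutions interleaved with 1x1 "squeeze" layers and batch normalization, and a fully convolutional head reduced by global average pooling), here is a condensed Keras sketch. It uses ReLU instead of PReLU and far fewer blocks, and is not the team's implementation:

```python
# Condensed Keras sketch of the fully convolutional 1x1-squeeze pattern above.
import tensorflow as tf
from tensorflow.keras import layers

N_CLASSES = 637  # single-label EC dataset from above
BOX = 1000       # sequences one-hot encoded over 20 amino acids, zero-padded

inputs = layers.Input(shape=(BOX, 20))
# strided first convolution, analogous to the 20x7 Conv2d with stride 5
x = layers.Conv1D(128, 7, strides=5, padding="valid", activation="relu")(inputs)
x = layers.Conv1D(128, 6, padding="same", activation="relu")(x)
x = layers.Conv1D(64, 1, padding="same", activation="relu")(x)   # 1x1 squeeze
x = layers.BatchNormalization()(x)
for filters in (256, 256):  # two conv/pool/norm blocks; the real net has nine
    x = layers.Conv1D(filters, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.BatchNormalization()(x)
x = layers.Conv1D(128, 1, padding="same", activation="relu")(x)  # 1x1 squeeze
x = layers.BatchNormalization()(x)
# fully convolutional head: one output channel per class, then average pooling
x = layers.Conv1D(N_CLASSES, 1, padding="same")(x)
outputs = layers.GlobalAveragePooling1D()(x)  # per-class logits
model = tf.keras.Model(inputs, outputs)
model.summary()
```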
Optopace

No entry for this subproject this week.

Software
KW35
====

Performance of the SqueezeNet Architecture - Single-Label, 599 Classes
----------------------------------------------------------------------

The model was run successfully on the old 599-classes dataset. Parameters: lr = 1e-2, batch size = 64, epsilon = 0.1.

[ROC](DeeProtein_TFRECORDS_PURECONV_1x1tuned_750k_restored750kfull_sce_adam_1dconv637_1000_one_hot_padded_64_0.001_0.1.roc_16.svg)
[Precision](DeeProtein_TFRECORDS_PURECONV_1x1tuned_750k_restored750kfull_sce_adam_1dconv637_1000_one_hot_padded_64_0.001_0.1.precision_16.svg)

Performance of the SqueezeNet Architecture - Single-Label, 679 Classes
----------------------------------------------------------------------

The model was run successfully on the 679-classes dataset. Parameters: lr = 1e-5, batch size = 64, epsilon = 0.1.

[ROC](DeeProtein_TFRECORDS_PURECONV_1x1tuned_restored679_sce_adam_1dconv_EC_679_1000_one_hot_padded_64_0.001_0.1.roc_9.svg)
[Precision](DeeProtein_TFRECORDS_PURECONV_1x1tuned_restored679_sce_adam_1dconv_EC_679_1000_one_hot_padded_64_0.001_0.1.precision_9.svg)

Performance of the SqueezeNet Architecture - Multi-Label, 1084 Classes
----------------------------------------------------------------------

The model was run successfully on the 1084 GO-classes dataset. Parameters: lr = 1e-3, batch size = 64, epsilon = 0.1.

[ROC](DeeProtein_TFRECORDS_PURECONV_1x1LARGE_MULTI_restored1084_sce_adam_1dconv_EC_1084_1000_one_hot_padded_64_0.0001_0.1.roc_39.svg)
[Precision](DeeProtein_TFRECORDS_PURECONV_1x1LARGE_MULTI_restored1084_sce_adam_1dconv_EC_1084_1000_one_hot_padded_64_0.0001_0.1.precision_39.svg)

Corrected datasets for missing classes, rewrote ```eval()``` to include the whole validation set
------------------------------------------------------------------------------------------------

- The 637-classes dataset was missing 138 classes due to the minimum length requirement in the ```DatasetGenerator``` class. The requirement was lowered to 175 AA. Furthermore, the ```DatasetGenerator``` class was rewritten to ensure that the validation set contains 5 samples from every class.
- The ```eval()``` function of ```DeeProtein``` was rewritten to perform the validation on the _whole_ validation set at given steps.

Performance on 679 classes with minimum length 175:

lr = 0.01, epsilon = 0.1, batch size = 64:
[ROC](DeeProtein_TFRECORDS_PURECONV_1x1tuned_restored637750kfull_sce_adam_1dconv679_1000_one_hot_padded_64_0.01_0.1.roc_32.svg)
[Precision](DeeProtein_TFRECORDS_PURECONV_1x1tuned_restored637750kfull_sce_adam_1dconv679_1000_one_hot_padded_64_0.01_0.1.precision_32.svg)

lr = 0.001, epsilon = 0.1, batch size = 64:
[ROC](DeeProtein_TFRECORDS_PURECONV_1x1tuned_restored637750kfull_sce_adam_1dconv679_1000_one_hot_padded_64_0.01_0.1.roc_32.svg)
[Precision](DeeProtein_TFRECORDS_PURECONV_1x1tuned_restored637750kfull_sce_adam_1dconv679_1000_one_hot_padded_64_0.01_0.1.precision_32.svg)

Reinitialization with pretrained parameters and a lower learning rate allowed fine-tuning of the classifier. Especially since the validation set is uniformly distributed (in contrast to the training set), the classifier can be considered trained.

ROC/ACC/AUC metrics
-------------------

ROC and AUC are now calculated on the fly, after validation on the whole validation set.
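As a minimal sketch of such an evaluation (the metric, not necessarily our implementation), per-class ROC curves and AUC values can be computed with scikit-learn after a full pass over the validation set; the toy arrays below stand in for the true labels and network scores:

```python
# Per-class ROC/AUC after a full validation pass, using scikit-learn.
import numpy as np
from sklearn.metrics import roc_curve, auc

# y_true: one-hot labels, y_score: network outputs, both (n_samples, n_classes)
y_true = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
y_score = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])

for c in range(y_true.shape[1]):
    fpr, tpr, _ = roc_curve(y_true[:, c], y_score[:, c])
    print(f"class {c}: AUC = {auc(fpr, tpr):.3f}")
```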
Training models on the embedded sequences
-----------------------------------------

We generated batches from the word embeddings (dim = 100, k-mer length = 3) for the 679-classes (EC) and the 1084-classes multilabel network. However, training proceeds much more slowly, as the parameter size is five times that of the one-hot network.

Multilabel classification
-------------------------

In order to be able to perform multilabel classification, we rewrote the input pipeline (```DatasetGenerator```, ```BatchGenerator```, ```TFrecordsgenerator```) and generated two datasets with 339 and 1084 classes respectively. The considered labels were chosen solely based on their population. As the GO-term hierarchy follows a directed acyclic graph (DAG), we looked up all parent nodes for each leaf node and included the total set of annotations for each sequence (see the sketch at the end of this entry). The first models were run after extending the network by 2 convolutional and 2 1x1 layers on the 1084-classes dataset. The results were disappointing.

Comparison of datasets
----------------------

Total sequences after filtering (EC): 220488
Total sequences after filtering (GO): 235767
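The parent-node lookup described under "Multilabel classification" above can be sketched as follows: starting from each annotated leaf term, all ancestors in the DAG are added to the label set. The parent table here is a hypothetical excerpt; the real mapping would come from the GO ontology file.

```python
# Close a set of GO annotations under the is-a hierarchy (DAG walk-up).
GO_PARENTS = {  # hypothetical excerpt of child -> parents edges
    "GO:0004672": ["GO:0016773"],
    "GO:0016773": ["GO:0016772"],
    "GO:0016772": ["GO:0003824"],
    "GO:0003824": [],
}

def propagate(terms):
    """Return the annotation set including all DAG ancestors."""
    closed, stack = set(), list(terms)
    while stack:
        term = stack.pop()
        if term not in closed:
            closed.add(term)
            stack.extend(GO_PARENTS.get(term, []))
    return closed

print(sorted(propagate({"GO:0004672"})))  # leaf term plus all its ancestors
```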