Difference between revisions of "Team:ZJU-China/Model"

 
 
                 <h2 id="vocclassification" class="H2Head">VOC Classification</h2>
 
 
                     <h3 id="overview" class="H3Head">Overview</h3>
 
                         <p class="PP">The VOC device is designed to tell whether the tobacco is healthy or infected. Since this is an exploratory experiment, data-analysis algorithms are used extensively in our modeling. We performed data preprocessing, data analysis, and algorithm optimization on the data collected by the VOC device. Finally, we used logistic regression and detected infected tobacco with 91% confidence.</p>
  
 
                     <h3 id="datapreprocessing" class="H3Head">Data preprocessing</h3>
 
                         <p class="PP">First we defragmented the raw input data and reorganized them into a matrix. The 10 VOC factors serve as features, and the status (healthy or infected) serves as the tag to be predicted.</p>
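                         <p class="PP">A minimal sketch of this reorganization step, using pandas. The column names A&ndash;J and the example values are illustrative stand-ins, not the team's actual readings:</p>

```python
# Reorganize raw records into a feature matrix X (10 VOC readings per
# sample) and a tag vector y (0 = healthy, 1 = infected).
# Column names A-J and the values below are illustrative only.
import pandas as pd

records = [
    {"A": 0.18, "B": 0.39, "C": 0.01, "D": 0.40, "E": 0.18,
     "F": 0.42, "G": 0.58, "H": 0.24, "I": 0.01, "J": 0.46,
     "status": "healthy"},
    {"A": 0.52, "B": 0.11, "C": 0.33, "D": 0.09, "E": 0.77,
     "F": 0.21, "G": 0.05, "H": 0.91, "I": 0.44, "J": 0.12,
     "status": "infected"},
]
df = pd.DataFrame(records)

features = list("ABCDEFGHIJ")                 # the 10 VOC factors
X = df[features].to_numpy()                   # shape (n_samples, 10)
y = (df["status"] == "infected").astype(int).to_numpy()  # tag to predict
```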
 
                         <div class="imgdiv"><img class="textimg" src='https://static.igem.org/mediawiki/2017/4/49/ZJU_China_VOC_1.png' alt=''/></div>
 
                         <p class="PP">Then we analyzed the data using box plots and found that most data were normal, but some records contained singular values; their box plots are shown below:</p>
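                         <p class="PP">The box-plot screening behind this step can be sketched with Tukey's rule: a value is singular if it falls outside the whiskers at 1.5&times;IQR from the quartiles. This is a generic sketch of the idea, not the team's exact procedure; the data are synthetic:</p>

```python
# Tukey's box-plot rule for flagging singular records: a value is an
# outlier if it lies outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
import numpy as np

def iqr_outlier_mask(col):
    """True where a value lies outside the box-plot whiskers."""
    q1, q3 = np.percentile(col, [25, 75])
    iqr = q3 - q1
    return (col < q1 - 1.5 * iqr) | (col > q3 + 1.5 * iqr)

data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0])  # 25.0 is singular
mask = iqr_outlier_mask(data)
cleaned = data[~mask]                                # singular record dropped
```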
 
                         <div class="imgdiv"><img class="textimg" src='https://static.igem.org/mediawiki/2017/9/97/ZJU_China_VOC_2.png' alt=''/></div>
 
                         <p class="PP">We removed the records with singular values; the remaining data follow a normal distribution:</p>
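                         <p class="PP">The normality claim can be checked with a Shapiro&ndash;Wilk test. This is a sketch on synthetic stand-in data, not the team's actual cleaned readings:</p>

```python
# Shapiro-Wilk test: a large p-value means normality cannot be rejected
# at the usual 5% level. Synthetic stand-in data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
cleaned = rng.normal(loc=10.0, scale=0.5, size=200)  # stand-in readings

stat, p_value = stats.shapiro(cleaned)
is_plausibly_normal = p_value > 0.05
```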
 
                         <div class="imgdiv col-md-6 col-sm-6"><img class="textimg" style="height: 230px !important; width:auto !important;" src='https://static.igem.org/mediawiki/2017/3/32/ZJU_China_VOC_3.png' alt=''/></div>
 
 
                         <div class="imgdiv col-md-6 col-sm-6"><img class="textimg" style="height: 230px !important; width:auto !important;" src='https://static.igem.org/mediawiki/2017/e/e0/ZJU_China_VOC_4.png' alt=''/></div>
 
  
 
                     <h3 id="dataanalysis" class="H3Head">Data analysis</h3>
 
                         <p class="PP">Our goal was to create a model that predicts the tobacco's status from the 10 input features. This is a classic binary classification problem, and several algorithms can solve it. The sampling method is cross validation, and the scoring policy we applied is the ridit test.</p>
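                         <p class="PP">The cross-validation setup can be sketched as follows. scikit-learn has no built-in ridit scorer, so plain accuracy stands in for it here, and the data are synthetic:</p>

```python
# 5-fold cross validation for the two-class problem. Accuracy stands in
# for the ridit test, which is not built into scikit-learn. Synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=0)
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)  # one score per fold
mean_score = scores.mean()
```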
 
                         <p class="PP"><strong>Decision Tree</strong></p>
 
                         <p class="PP">First we used decision trees, which are based on information theory. The ID3 decision tree splits on maximal information gain, while the CART tree minimizes the Gini index. The performance of the two algorithms is almost the same. <strong>R = 0.83</strong></p>
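                         <p class="PP">In scikit-learn the two splitting rules correspond to <code>criterion="entropy"</code> (information gain, as in ID3) and <code>criterion="gini"</code> (as in CART). A sketch on synthetic data; the R = 0.83 figure above came from the team's own VOC dataset:</p>

```python
# ID3-style vs CART splitting: "entropy" maximizes information gain,
# "gini" minimizes Gini impurity. Synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

acc = {}
for criterion in ("entropy", "gini"):
    tree = DecisionTreeClassifier(criterion=criterion, random_state=1)
    acc[criterion] = tree.fit(X_tr, y_tr).score(X_te, y_te)
```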
  
  
  
 
                     <p class="PP"><strong>MLP</strong></p>
 
                         <p class="PP">The second algorithm we applied is the Multi-Layer Perceptron (MLP), also called a neural network. In this model, we used more than 100 neurons in each layer, and the activation function is ReLU.</p>
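                         <p class="PP">A sketch of such a network with scikit-learn. The exact layer count is an assumption; two hidden layers of 100 neurons match the description above, and the data are synthetic:</p>

```python
# MLP with two hidden layers of 100 ReLU neurons each. Layer sizes are
# an assumption about the exact architecture; data are synthetic.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=2)
mlp = MLPClassifier(hidden_layer_sizes=(100, 100), activation="relu",
                    max_iter=1000, random_state=2)
mlp.fit(X, y)
train_acc = mlp.score(X, y)
```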
 
                         <p class="PP">The result of the MLP is much better than that of the decision tree. <strong>R = 0.89</strong></p>
  
 
                         </p>
 
 
                         <p class="PP"><strong>Linear Model</strong></p>
                         <p class="PP">Although the performance of the MLP is good enough, it is difficult to extract the knowledge learnt by the algorithm; its interpretability is weak. Why not try a simple model with high interpretability? First we tried the LDA algorithm to compress the 10-dimensional data into 2 dimensions.</p>
 
                         <p class="PP">We define <script type="math/tex">S_w</script> as the within-class scatter matrix and <script type="math/tex">S_b</script> as the between-class scatter matrix. LDA seeks the projection <script type="math/tex">w</script> that maximizes</p>

                         <p class="PP" style="text-align: center !important;"><script type="math/tex" id="MathJax-Element-9">J = \frac{w^{T} S_b w}{w^{T} S_w w}</script></p>
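                         <p class="PP">The projection can be sketched with scikit-learn's <code>LinearDiscriminantAnalysis</code>. Note that for a two-class problem LDA admits at most one discriminant component (<code>n_classes - 1</code>), so this sketch, on synthetic data, projects the 10 features onto a single axis:</p>

```python
# LDA projection: with two classes the data collapse onto one
# discriminant axis (n_components <= n_classes - 1). Synthetic data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=3)
lda = LinearDiscriminantAnalysis(n_components=1)
Z = lda.fit_transform(X, y)  # projected coordinates, one per sample
```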
                         <p class="PP">The result of the LDA algorithm is as follows:</p>
 
                         <div class="imgdiv"><img class="textimg" src='https://static.igem.org/mediawiki/2017/6/61/ZJU_China_VOC_8.png' alt=''/></div>
 
  
                         <p class="PP">This result proves the data are linearly separable, which allowed us to choose the logistic regression algorithm.</p>
 
                         <p class="PP">We define the logistic (sigmoid) hypothesis <script type="math/tex" id="MathJax-Element-11">h(x) = \frac{1}{1 + e^{-(w^{T}x + b)}}</script>, which maps the weighted sum of the features to the probability that the tobacco is infected.</p>
 
                         <p class="PP">Then we can apply the maximum likelihood method to estimate the parameters.</p>
                         <p class="PP">The result is as follows:</p>
 
                         <figure class="codes"><pre>
 
 
                             Weight:
                             [[ 0.1819504   0.38788225  0.01350023  0.39594948  0.17799418  0.42087034
                               -0.57733395 -0.23876003 -0.00532918 -0.46174515]]
                             Intercept:
                             [ 0.00937812]
                             Effect:
                             D    35.300735
                             B    22.596339
                             F    18.289277
                             E    10.265025
                             C     0.393225
                             I    -1.575564
                             A   -10.679026
                             H   -14.398440
                             G   -26.211964
                             J   -39.130542
                             dtype: float64
                             Score:
                             0.894333333333
                         </pre></figure>
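                         <p class="PP">A fit of this kind can be sketched with scikit-learn, whose <code>LogisticRegression</code> estimates the weights by (penalized) maximum likelihood and exposes them as <code>coef_</code> and <code>intercept_</code>, matching the Weight/Intercept layout above. Synthetic data:</p>

```python
# Logistic regression on 10 features; coef_ holds one weight per factor
# and intercept_ the bias, both estimated by maximum likelihood.
# Synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=4)
clf = LogisticRegression(max_iter=1000).fit(X, y)

weights = clf.coef_         # shape (1, 10)
intercept = clf.intercept_  # shape (1,)
score = clf.score(X, y)
```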
 
                 <h2 id="algorithmoptimization" class="H2Head">Algorithm optimization</h2>
 
                 <p class="PP">From the result of logistic regression, factors such as C and I have weights of little importance; these factors may disturb the classification. We tried to remove the insignificant factors to simplify the model.</p>

                 <p class="PP">Finally, we kept 4 factors, with which we can predict the tobacco's status with 91% confidence, and correspondingly simplified the VOC device.</p>
 
             <figure class="codes"><pre>
 
 
                     Weight:
                     [[ 0.53196697  0.3404023  -0.53555988 -0.45588715]]
                     Intercept:
                     [-0.01204088]
                     Effect:
                     D    33.217011
                     F    15.492680
                     G   -17.319760
                     J   -33.967849
                     dtype: float64
                     Score:
                     0.912444444444
                 </pre></figure>
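                 <p class="PP">The reduction step can be sketched as: rank the factors by absolute weight in the full 10-factor fit, keep the top 4, and refit. On synthetic data the retained columns naturally differ from the team's D/F/G/J:</p>

```python
# Keep the 4 factors with the largest |weight| from a full fit, then
# refit the simplified model. Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10,
                           n_informative=4, random_state=5)
full = LogisticRegression(max_iter=1000).fit(X, y)

top4 = np.argsort(np.abs(full.coef_[0]))[-4:]  # indices of 4 largest |w|
reduced = LogisticRegression(max_iter=1000).fit(X[:, top4], y)
reduced_score = reduced.score(X[:, top4], y)
```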
  
 
                 <h2 id="summary" class="H2Head">Summary</h2>
 
                     <p class="PP">In this model, we tried different algorithms to obtain a robust, interpretable, and accurate solution that predicts whether the tobacco is infected from only 4 features with 91% confidence. Since 6 VOC sensors are left unused in this model, the device can be simplified in the future by removing them; alternatively, the remaining sensors could be used to add more functions to the device.</p>
 
             <br><br><br>
 
 
             <div style="text-align: center">
 

Latest revision as of 15:58, 3 December 2017
