Modeling

The home of real integration

We developed several models to help design our biosensor. Our aims were to maximize output variation (i.e. the difference in output between a person with cancer and a healthy person) and to maximize overall output, so the fluorescence can be detected more easily with cheaper equipment, driving down the net cost of using our sensor in a medical setting.

These include:

  • NUPACK modeling: Used NUPACK to model the secondary structures of the toehold switches in their closed and open states to maximize dynamic range.
  • Mass action kinetics modeling: Built a mass action kinetics model to identify the most important parameters to optimize in our design. We ported this model to JavaScript to make it easier for our design team to use and to explain the model to others.
  • Stochastic modeling: We collaborated with Oxford to validate our mass action kinetics model by stochastically modeling the uncertainties in our parameters.

We also built a cost model to provide insight into using our device as a mass-screening tool, which we tied into our human practices work.

NUPACK model


NUPACK is a software tool that can model the secondary structures of RNA systems.[1] We used NUPACK to optimize the design of our toehold switches.

We ran our simulations at 37°C, with 100 picomolar miRNA, 20 nanomolar DNA and 500 picomolar anti-miRNA. We chose this temperature as it is the optimum for E. coli-derived enzymes.

Using NUPACK, we checked that our switches had no base pairing in the ribosome binding site in either the off or on state, and minimal secondary structure around the start codon in the on state. We also modelled the miRNA-bound state, ensuring the structure around the RBS and start codon was correct.

Having demonstrated that our design would work, we minimized |ΔG_RBS-Linker| via site-directed mutagenesis (varying the sequence at specific sites). Making |ΔG_RBS-Linker| as close to zero as possible maximizes the toehold switch output. This is discussed in more depth, along with the other important parameters for toehold switch design, on the design page.

However, NUPACK struggled to predict the specificity of the second series of switches as they had multi-step reactions. Additionally, we found that NUPACK was unable to match experimental data for molecular beacon specificity in our new form of riboregulators. Despite this, NUPACK was suitable for accurately designing our first series of switches due to their simpler and more predictable nature.

View structures of toehold switch for miRNA:


[NUPACK minimum free energy (MFE) structure diagrams: each switch in its closed (off) state and bound to its trigger miRNA (open) state, with the RBS, start codon and trigger binding site highlighted and the free energy of each secondary structure given. Structures were computed at 37.0°C, and at 35.0°C for the designs including anti-miRNA.]

Base diagrams created in NUPACK; regions of interest highlighted by us.

Mass action kinetics


Mass action kinetics is the assumption that the rate of a reaction is proportional to the product of the reactant concentrations. We chose to create a mass action kinetics model as it uses relatively few parameters. Reducing the number of parameters, whilst keeping the model representative of the system, made it easier to understand the effect of varying each parameter. We built the model in MATLAB first, and later ported it to JavaScript to make it a better design tool for the biologists and to better communicate how the model works.
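For example (a generic illustration, not one of our actual reactions), for a reversible binding reaction A + B ⇌ C with forward and reverse rate constants k₊ and k₋, the law of mass action gives:

$$\frac{\mathrm{d}[C]}{\mathrm{d}t} = k_{+}[A][B] - k_{-}[C], \qquad \frac{\mathrm{d}[A]}{\mathrm{d}t} = \frac{\mathrm{d}[B]}{\mathrm{d}t} = -\frac{\mathrm{d}[C]}{\mathrm{d}t}$$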

Graph interactivity

We spent a long time reviewing previous teams' models. A problem we found was that most models weren't communicated in a way that was engaging and easy to understand. We've tried to explain our model in a way that someone with no experience in modeling could understand.

We’ve used interactive graphs on this page - when you get to them you can drag the sliders to change parameters and see how they affect the system.

We initially ported our model to a webpage so the design team could try out new stuff when they were designing our toehold switches. This turned out to be very useful for both of us. We decided to make it look a lot nicer and improve its compatibility with different browsers so we could put it on the wiki.

As a webpage it's much more accessible - anyone (even on a phone) can use it without having to install any custom programs, which would take time to install and might require a license.

As far as we're aware, we're the first iGEM team to ever put up something like this. The closest was Pretoria UP 2016, but our graphs contain our entire mass action kinetics model, rather than just a visualization.

It was built in JavaScript and therefore not limited by proprietary license agreements, and doesn’t require Java applets to be enabled (which modern browsers no longer support due to stability and security issues).[2]

JavaScript follows ECMA's open standard so it can run on purely open-source tools. This helps achieve "iGEM's goal of making everything in the competition open source".[3] There are also many more open-source libraries available for JavaScript, which means other teams can build upon our work more easily. These include Chart.js, the library we are using to draw the graphs.

Finally, our JavaScript code ran faster than our MATLAB code, which we attribute to both our greater experience in JavaScript and the aggressive optimisation that’s been done by vendors trying to compete for the fastest browsers.[4]

Initial mass action kinetics model

We created our first model following a meeting with Thomas Ouldridge at Imperial, who had recently coauthored a paper about mathematically modeling toehold-mediated strand displacement. In the meeting we agreed that we'd need to model the different states of the toehold switch. We also discussed modeling the miRNA binding and unbinding to and from the toehold base by base. We later decided against this, as, whilst it may have been useful for learning about the kinetics of toehold switches, it would not have given us information we could feed into our design process and would have overcomplicated our model.

We first modelled a simple system of 4 substances with reversible reactions as follows:



Where CTS, PBTS and OTS represent the toehold switch in its closed, partially bound and open states respectively. At this point we didn't know the values of the rate constants, so we used placeholders.

This was useful as it gave us experience programming in MATLAB, which we hadn't done before, and forced us to learn the foundations of the maths we'd need later, mainly ODEs (ordinary differential equations). Khan Academy was very helpful with this.

With all three reactions being reversible, MATLAB's dsolve function was not able to solve our differential equations. We tried them by hand but were not able to solve them analytically. We had to solve numerically, and chose the Euler method. This involves iterating over short time intervals (here we used dt ≈ 1 s), updating the concentrations at each step in a loop to keep a record of how the substances changed over time. We then graphed these values as follows:




We’ve recreated this model so it’s actually running in the browser - try dragging the sliders above the graph to modify the rate constants.
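As a rough illustration of the approach, here is a minimal sketch of a forward-Euler loop for a simplified reversible scheme. The rate constants and the two-reaction scheme below are placeholders for illustration, not our fitted values or our full three-reaction system:

```javascript
// Minimal sketch of forward-Euler integration for a simplified reversible scheme:
// closed switch (CTS) + miRNA <-> partially bound (PBTS) <-> open (OTS).
// All rate constants and starting concentrations are illustrative placeholders.
const k = { bind: 1e5, unbind: 1e-3, open: 1e-2, close: 1e-3 };
let state = { miRNA: 1e-7, CTS: 2e-8, PBTS: 0, OTS: 0 }; // concentrations / M

const dt = 1;       // time step / s
const tEnd = 4e4;   // total simulated time / s
const history = [];

for (let t = 0; t <= tEnd; t += dt) {
  // Net reaction rates from the law of mass action
  const rBind = k.bind * state.CTS * state.miRNA - k.unbind * state.PBTS;
  const rOpen = k.open * state.PBTS - k.close * state.OTS;

  // Euler update: change in concentration = rate * dt
  state = {
    miRNA: state.miRNA - rBind * dt,
    CTS:   state.CTS   - rBind * dt,
    PBTS:  state.PBTS  + (rBind - rOpen) * dt,
    OTS:   state.OTS   + rOpen * dt,
  };
  history.push({ t, ...state }); // record how the substances change over time
}

console.log(history[history.length - 1]); // final concentrations
```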

Mass action kinetics model with transcription and translation

The first simple model neglected transcription and translation completely, which made it an excessive simplification and consequently not that useful. To address these shortcomings, we created a second model, derived from parts of Tobias Stögbauer's PhD thesis, with many more species.[5]



This model was very slow to run (so slow we're not going to try to run it in the browser), as there were many species and thus more calculations per loop. However, it was primarily slow because we needed a small dt value to keep the model accurate, so there were more loops in total.

We had to use a very small dt value or errors could easily add up with so many species. It ended up taking about a minute to run each simulation, which wasn't acceptable for us, as we wanted to be able to quickly change parameters and ideally test a range of parameters. Speeding up the model would make it more useful to the wetlab team: they could then vary each of our parameters and see how they affected GFP output, for example seeing the effects of different switch designs and experimental setups.

Having lots of species also made it harder to find accurate rate constants for all the steps in the reaction. Tobias Stögbauer's PhD thesis was very useful for this, although many of his values weren't that applicable to us as we used a different cell-free system, leading to inaccuracies - which were amplified when dt was increased.

It was also difficult to get useful data from the model as there were so many parameters that changing just one was unlikely to have a significant effect on the final result. This made it harder to draw conclusions from the model. For similar reasons it would also have been hard to adjust the parameters to fit our lab data once we had it. More parameters also increased the total uncertainty, decreasing the precision of our model.

Streamlined mass action model

We met with Thomas Ouldridge again after building the previous model, and he showed us how we could combine many steps of the reaction to slim our model down to just 4 parameters and 5 species, as well as giving us more accurate parameters for our system.

With fewer parameters it was much easier to see the effect that varying each parameter has, and thus draw meaningful conclusions from the relationship between the parameters and GFP output.

Given the knowledge of the system that we had, it was reasonable to incorporate:

  • the GFP maturation into ktranslation
  • the opening of the toehold switch into the binding of the miRNA

We used mass action kinetics to model the decay of the RNAs and the opening of the toehold switch; however, we modelled transcription and translation such that the DNA and the open switches weren't used up. We assumed the pool of resources was constant, as our construct would not significantly deplete the cell-free extract's resources.[6]

We also made the assumption that our inducible promoter could be modelled as a constitutive one. Given that the arabinose concentration is constant, and assuming the time taken to induce the promoter was negligible, no changes to the way transcription was modelled were necessary, other than neglecting resource degradation. This was confirmed experimentally when we characterized part BBa_K808000.

In this model we included a term for miRNA decay, assuming that it decayed at the same rate as the other RNAs in the system. This was on the recommendation of Thomas Ouldridge as the degradation of miRNA is not well documented in our cell free system.

As we were not confident in this assumption, we tried running the model with different miRNA degradation rates and found that between 5×10⁻⁴ and 5×10⁻³ s⁻¹ the amount of GFP produced in the positive case increased by a factor of 3. Outside of this range the difference was small. As our value is within this range, the uncertainty introduced by the miRNA degradation rate was high. However, this did not affect the model's usefulness in understanding the qualitative effect of varying the parameters, as the miRNA degradation rate stretches the GFP-against-time graph horizontally. This changes the time our test takes, but not the final difference in GFP output.

As with our previous model we assumed protein degradation was negligible as our cell free system does not contain any proteases.
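A minimal sketch of how these assumptions could be written as rate equations, using the parameter names and values from the table below (our actual MATLAB/JavaScript implementation may differ in detail; the initial concentrations at the end are the ones used for the NUPACK simulations):

```javascript
// Sketch of the streamlined model under the assumptions above:
// - DNA is not consumed by transcription; open switches are not consumed by translation
// - all RNAs (closed switch, open switch, miRNA) decay at kdecay
// - GFP maturation is folded into ktranslation; no protein degradation
const p = {
  ktranscription: 1.1e-3, // s^-1
  ktranslation:   1.7e-2, // s^-1
  kopen:          6e5,    // M^-1 s^-1
  kdecay:         1.28e-3 // s^-1
};

// Time derivatives of each species' concentration (mol dm^-3)
function derivatives(s) {
  const opening = p.kopen * s.closedSwitch * s.miRNA; // switch opening consumes miRNA
  return {
    DNA:          0,                                                          // not used up
    closedSwitch: p.ktranscription * s.DNA - opening - p.kdecay * s.closedSwitch,
    openSwitch:   opening - p.kdecay * s.openSwitch,
    miRNA:        -opening - p.kdecay * s.miRNA,   // assumed to decay like the other RNAs
    GFP:          p.ktranslation * s.openSwitch    // no protein degradation
  };
}

// Euler integration as before
function simulate(initial, dt = 0.1, tEnd = 4e4) {
  let s = { ...initial };
  for (let t = 0; t < tEnd; t += dt) {
    const d = derivatives(s);
    for (const key of Object.keys(s)) s[key] += d[key] * dt;
  }
  return s;
}

console.log(simulate({ DNA: 2e-8, closedSwitch: 0, openSwitch: 0, miRNA: 1e-10, GFP: 0 }));
```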



Following correspondence with Oxford's modeling team, we changed the values of some of our parameters. Oxford recommended we use Karzbrun et al.'s 2011 paper on the coarse-grained dynamics of protein synthesis in a cell-free system for the translation constant. They also pointed out that we had forgotten to convert some of the parameters' units, so our model was not dimensionally consistent, and we updated the model to fix this.


Parameter name Where we got it Value
ktranscription Used Tobias Stögbauer's PhD thesis for this value, as it is equivalent to his kts value of 2.2 NTPs s⁻¹. 1.1×10⁻³ s⁻¹
ktranslation Karzbrun et al.'s 2011 paper gave 4 amino acids s⁻¹.[7] As the GFP sequence we used was 239 amino acids long,[8] we simply did 4 aa s⁻¹ ÷ 239 aa. 1.7×10⁻² s⁻¹
kopen When we met Thomas Ouldridge at Imperial he suggested a range of 10⁵ to 10⁶ M⁻¹ s⁻¹ for this value, given his experience modeling toehold switches. 6×10⁵ M⁻¹ s⁻¹
kdecay Estimate of messenger RNA decay rate in our cell-free extract.[9] 1.28×10⁻³ s⁻¹


One of the biggest potential problems we identified was that if enough translation of closed switches occurred, it would be difficult to see any meaningful results. The kleakage parameter represents the ratio of the translation rate on the closed switch to that on the open switch. We concluded that we would not get any observable difference if translation on the closed switch was above a ten-thousandth of the rate on the open switch, something we would never have been able to quantify easily without the model.

Without the model we wouldn’t have realized quite how crucial reducing leakage was. We spent a lot of time designing our switches to minimize leakage.
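In a sketch like the one above, leakage is one extra term in the GFP production rate. Here kleakage is the ratio just described, and the 10⁻⁴ default is the threshold we identified, not a measured constant:

```javascript
// Leaky translation: GFP production rate including translation from closed switches.
// kleakage is the ratio of the closed-switch to open-switch translation rate;
// above roughly 1e-4 the model predicted no observable difference in output.
function gfpProductionRate(openSwitch, closedSwitch, ktranslation = 1.7e-2, kleakage = 1e-4) {
  return ktranslation * (openSwitch + kleakage * closedSwitch);
}
```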

The lab team also asked us to find the optimal concentration of DNA for both output variation and overall output. By varying ktranslation and the concentration of DNA, we found that a higher concentration was better in general; however, the output would eventually plateau as miRNA consumption became the limiting factor. In the specific case of the miRNA degradation rate being 2 orders of magnitude lower than the degradation rate of the other RNAs, the GFP concentration plateaued around 5×10⁻⁹ mol dm⁻³.

Try out the model for yourself - change the parameters and see whether you can reproduce our conclusions from the final GFP output:




Results fitting and analysis

Having collected our experimental results, we reevaluated our model. We scaled the results to fit them on the graph, as they were in arbitrary units of fluorescence. We can plot them this way because fluorescence is proportional to GFP concentration.[10]


[Graph: initial model and results - GFP concentration / M (×10⁻¹¹) against time / s (×10⁴) for miRNA = 9×10⁻⁹ M, 9×10⁻⁸ M and 9×10⁻⁷ M.]


The major differences between the results and our theoretical values were the later maximum in GFP concentration and the difference between the measured maximum and the theoretical maximum for 9×10⁻⁹ mol dm⁻³ of miRNA. In order to delay the peak in GFP concentration, the rate of RNA decay had to be decreased. We also introduced a term for GFP degradation, to account for the peak in fluorescence (sketched below). We then tried to fit the data by changing the parameters with these ideas in mind. The new values for these parameters were:

  • kdecay ≈ 3×10⁻⁴ s⁻¹
  • GFPdecay ≈ 1×10⁻⁵ s⁻¹
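A sketch of the revised GFP equation; GFPdecay is the first-order degradation term we introduced to reproduce the observed peak, and the s⁻¹ units are our assumption:

```javascript
// Revised GFP dynamics used when fitting the experimental data:
// production from open switches minus a first-order GFP decay term,
// which produces the peak and fall-off seen in the measurements.
function gfpDerivative(openSwitch, GFP, ktranslation = 1.7e-2, GFPdecay = 1e-5) {
  return ktranslation * openSwitch - GFPdecay * GFP;
}
```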


[Graph: revised model and results - GFP concentration / M (×10⁻¹⁰) against time / s (×10⁴) for miRNA = 9×10⁻⁹ M, 9×10⁻⁸ M and 9×10⁻⁷ M.]


We varied the other parameters within our uncertainties. To make the model more accurate, especially for the lower miRNA concentrations, we decreased kopen to 10⁵ M⁻¹ s⁻¹, the lowest value in our range.

Another way to improve the fit at lower miRNA concentrations was to increase the decay rate of just the miRNA (we had assumed it decayed at the same rate as the other RNAs in our system). Increasing just the miRNA decay rate had a similar effect to decreasing kopen.

With our initial parameters our model was a poor representation of our system. Once we updated our parameters, it was able to represent our results relatively well.

Stochastic model

The Oxford iGEM team kindly ran stochastic simulations of our system for us. They ran both intrinsic and extrinsic simulations and provided us with the final uncertainties.

Intrinsic Simulation

Intrinsic stochastic models find the uncertainty caused by the changes in reaction rate due to the fact that reactants collide randomly. This is more accurate than mass action kinetics at low reactant concentrations.

In the stochastic simulation, the number of reaction events in each time step was determined randomly using the Poisson distribution. As this is random, multiple stochastic simulations have to be run to get a large enough sample size. A time step of 0.01 seconds was used.
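We did not write these simulations ourselves, but a minimal sketch of the idea for a single illustrative reaction (our own illustration, not Oxford's code) might look like:

```javascript
// Sketch of an intrinsic (stochastic) update for one illustrative reaction:
// at each time step, the number of reaction events is drawn from a Poisson
// distribution with mean = rate * dt.

// Simple Poisson sampler (Knuth's algorithm; fine for small means)
function samplePoisson(mean) {
  const L = Math.exp(-mean);
  let k = 0, p = 1;
  do { k++; p *= Math.random(); } while (p > L);
  return k - 1;
}

// Example: stochastic decay of N mRNA molecules with rate constant kdecay
let N = 1000;            // number of molecules
const kdecay = 1.28e-3;  // s^-1
const dt = 0.01;         // s, as in the simulations described above

for (let t = 0; t < 1000; t += dt) {
  const expectedEvents = kdecay * N * dt;      // mean number of decays this step
  N -= Math.min(N, samplePoisson(expectedEvents));
}
console.log(N);
```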

The intrinsic simulations gave an uncertainty of only 0.0096%. This validated our use of mass action kinetics to model our system. As the intrinsic uncertainty is negligible, we could assume the rate of reaction was proportional to the concentration of the reactants, i.e. the law of mass action.

While we later updated the parameters with our experimental data, the concentrations of the reactants did not decrease enough to invalidate this result.

Extrinsic Simulation

Extrinsic stochastic models find the uncertainty caused by the uncertainty in our parameters. The extrinsic uncertainty is by far the more significant of the two uncertainties in our model.

The extrinsic simulation assumed our parameters were normally distributed and ran multiple simulations, randomly sampling parameter values.
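Again as our own illustration rather than Oxford's code, the extrinsic approach amounts to something like the following; the means, standard deviations and the runModel stand-in are placeholders:

```javascript
// Sketch of extrinsic uncertainty: sample each parameter from a normal
// distribution and look at the spread of the model output.
function sampleNormal(mean, sd) {
  // Box-Muller transform
  const u1 = 1 - Math.random(), u2 = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

function runModel(params) {
  // Stand-in for the mass action kinetics simulation returning final [GFP]
  return params.ktranslation / (params.kdecay + 1e-9);
}

const outputs = [];
for (let i = 0; i < 1000; i++) {
  const params = {
    ktranslation: sampleNormal(1.7e-2, 0.2e-2), // placeholder uncertainties
    kdecay:       sampleNormal(1.28e-3, 0.1e-3),
  };
  outputs.push(runModel(params));
}
// The spread of `outputs` gives the extrinsic uncertainty of the prediction.
```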

The specific conclusions of our extrinsic simulations, such as the test being complete within 37 minutes in 90% of cases, are unfortunately no longer useful after our experimental results showed some of our starting parameters were significantly off. However, the extrinsic simulation demonstrated that the biggest weakness with our model was our parameters' uncertainties.

Cost model

There are two parts to our cost modeling: the cost analysis, which works out the cost of the test, and the much more impressive screening cost-effectiveness model, which works out the value of rolling out a screening programme.

Our screening programme's cost per quality-adjusted year of life gained would be just £8300 at the current lowest threshold for lung cancer screening, taking into account cancer recurrence. Treatments under £20,000 per quality-adjusted year are deemed cost-effective by NICE, the UK public body that publishes healthcare guidance.

Cost analysis

Our test is much cheaper than existing solutions, especially when you consider the final product will be a multiplexing assay which will test for many diseases simultaneously.

Qiagen PAXgene blood RNA tube: 1 @ £752 / 100 kits[11] £7.52
Qiagen miRNeasy Mini Kit: 1 @ £1115 / 250 kits[12] £4.46
Taking a blood sample inc. labour[13] £3.42
Medical lab technician’s time: 15 minutes @ £8.45 / hour[14] £2.11
Paper-based toehold switch biosensor: 50 @ 1.7p / 1 μl switch[15] £0.85
Other lab equipment e.g. sterilisation, tubes, gloves (approx)[13] £0.30
Total £18.66

We estimated 15 minutes of lab technician's time based on the advice of Professor Lovat's research team and Dr Pregibon. The processing of miRNA takes approximately 2 hours for 8 samples. However, if this process were further automated, this cost could be significantly reduced.

The total figure is likely to be an overestimate, as the two major components - the PAXgene blood RNA tube and the miRNeasy Mini Kit, which together make up around two thirds of the total - aren't for clinical use. These products are targeted at researchers and small-volume applications, and have many extra features to match which would not be required for our sensor. Economies of scale and stripping the products down to their core functions could bring further significant savings.

We believe our test would cost less than £15 ($20) once this is taken into account, making it extremely cost effective as it can test for several diseases simultaneously.

Large cost savings could come from testing saliva instead of blood. Looking into the financials:

Qiagen miRNeasy Mini Kit: 1 @ £1115 / 250 kits[12] £4.46
Medical lab technician’s time: 15 minutes @ £8.45 / hour[14] £2.11
Paper-based toehold switch biosensor: 50 @ 1.7p / 1 μl switch[15] £0.85
Other lab equipment e.g. sterilisation, tubes, gloves (approx)[13] £0.60
Total £8.02

However, the miRNAs present in saliva are less well documented. Saliva may also be less accurate, as its composition can change depending on what was recently eaten, so we didn't look much further into this - but it could be a good area for further research.

Another cost involved in the test is the upfront equipment costs - see how we made a combined densitometer and fluorometer for less than £4 on our hardware page!

Screening cost-effectiveness model

To demonstrate that our test would be cost effective as part of a screening programme, we built this cost-effectiveness model. It combines cancer prevalence data, the sensitivity and specificity values of our test, our cost analysis data and treatment cost data.

We used Tammemägi et al.'s model to calculate the probability of someone getting lung cancer in the next 6 years. It is more sensitive than the simplistic criteria used by the National Lung Screening Trial, and thus a better way of selecting individuals for screening. Tammemägi et al. showed their model would be a more cost-effective method, as it missed fewer cases of cancer and so detected more cancers per person screened.[16]

Although Tammemägi’s model was for lung cancer as a whole, we assume the risk factors put you at equal risk of non-small cell and small-cell lung cancer. As the miRNAs on which our test is based do not indicate small cell lung cancer, we have built the model around NSCLC. All other figures are NSCLC specific.

A quality-adjusted life year (QALY) represents one year in perfect health. The 'quality-adjusted' part accounts for the quality of life for that year; for example, 2 years with a 0.6 quality of life score would be 1.2 QALYs. QALYs can be used to measure the benefit a treatment provides (if any). Treatments costing less than £20,000 per QALY are deemed cost effective by NICE. Treatments between £20,000 and £30,000 per QALY may be deemed cost effective by NICE if there is a high degree of certainty in the cost per QALY or when there are substantial benefits not captured in the cost per QALY figure.[17]

We assumed 63% of the cancers we detected would be in stage 1, as in the National Lung Screening Trial.[18] We then assumed that the detection of the other cancers would be evenly distributed through stages 2-4. This is unrealistic for a mass-screening programme, as cancers not picked up in stage 1 would likely be picked up in stage 2. This would decrease the cost per QALY, as the probability of cure in stage 2 is higher than the average of stages 2-4 and treatment in stage 2 is cheaper than the average cost of stages 2-4.[19] Therefore our model would underestimate the cost effectiveness of our sensor.

In terms of a screening tool, sensitivity relates to the false negative rate, while specificity relates to the false positive rate.

  • A 100% sensitive test will always pick up the disease (no false negatives) but may also flag patients who do not have the disease.
  • A 100% specific test will never flag healthy patients (no false positives) but may miss patients who do have the disease.

[Interactive diagram of test results for healthy and diseased patients: a highly sensitive test has few false negatives, while a test with low specificity has lots of false positives; the diagram can be switched between high sensitivity and high specificity.]


Sensitivity was taken as 100%, as with appropriately set cutoffs our tool would identify everyone with NSCLC. This is likely to prove untrue were our test to be validated against miRNA levels in a large patient cohort, though sensitivity would still be very close to 100%. Therefore, P(false negative) = 0. However, we included it in the calculations anyway, as we show later that even if sensitivity were significantly less than 100% our test would still be cost effective. We assumed as a lower bound that false negatives would not survive; in reality, some would be detected at a later stage and survive, but had we included that, it would have added QALYs gained that were not due to our sensor, so we chose to exclude them.

The specificity of our test would be 84% when sensitivity is 100%.[20]

P(cancer) is the figure generated by Tammemägi’s model, weighted by the probability of stage 1 detection (63%) and other stage detection (37%).

  • P(cancer) = P(true positive) + P(false negative)
  • As sensitivity ≈ 100%, P(false negative) ≈ 0
  • P(cancer) ≈ P(true positive) = specificity * P(positive test)
  • P(positive test) ≈ P(cancer) / specificity
  • P(false positive) = P(positive test) * (1 - specificity) ≈ P(cancer) * (1 - specificity) / specificity

Therefore the probability of a false positive is approximately proportional to the chance of having cancer. P(false positive) always amounted to less than 1% of the total cost per QALY, so the error of this approximation is negligible.

To recap (see also the sketch after this list):

  • P(true positive) = P(cancer)
  • P(false positive) = P(cancer) * (1 - specificity) / specificity
  • P(true negative) = 1 - (P(true positive) + P(false positive))
  • P(false negative) = 0
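A short sketch of these probabilities as a function of P(cancer), mirroring the recap above (the 0.84 specificity is the Hennessey et al. figure; the example input at the end is illustrative):

```javascript
// Probabilities of each test outcome, following the recap above.
// pCancer comes from Tammemägi et al.'s 6-year risk model; specificity is 84%
// at 100% sensitivity (Hennessey et al.).
function outcomeProbabilities(pCancer, specificity = 0.84) {
  const truePositive  = pCancer;                                   // sensitivity assumed ~100%
  const falsePositive = pCancer * (1 - specificity) / specificity; // approximation described above
  const falseNegative = 0;
  const trueNegative  = 1 - (truePositive + falsePositive);
  return { truePositive, falsePositive, trueNegative, falseNegative };
}

console.log(outcomeProbabilities(0.02)); // e.g. a 2% 6-year risk of NSCLC
```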

The probability of each outcome was calculated by multiplying P(true positive) by P(treatment). In the table, P(treatment) represents the chance of the treatment outcome described in the second column.

Detection Treatment P(treatment) Cost
Stage 1 detection Successful treatment 0.48 £8100
Stage 1 detection Unsuccessful treatment 0.52 £21000
Stages 2-4 detection Successful treatment 0.16 £13000
Stages 2-4 detection Unsuccessful treatment 0.84 £13000
False positive No treatment required NA £470
True negative No treatment required NA £110
False negative Unsuccessful Treatment 1 £13,000

All figures are rounded to 2 significant figures to avoid over-stating precision

P(successful treatment) was taken as 0.48 for stage 1, the 5 year survival rate for stage one NSCLC.[21] P(unsuccessful treatment) is simply 1 - P(successful treatment).

P(successful treatment) was taken as 0.16 for later stages, the weighted average of 5 year survival rates for stages 2-4 and for cases where the stage was unknown.[21]

The cost of a false positive was taken as the cost of a contrast-enhanced chest CT scan (£359[19]) plus the cost of the test times 6 (£18.66 × 6 ≈ £110), as the test would be repeated for 6 years as the follow-up to a positive result. We assumed that all patients with a false positive would subsequently have cancer ruled out by a CT. Previous screening trials have shown that other procedures are sometimes needed, such as bronchoscopy, or even a false positive proceeding all the way to surgery and treatment. However, we believe a CT in combination with our test would rule out nearly all false positives (especially if several miRNAs were used in tandem to look at the properties of the 'tumor' in a subsequent test), and the probability of a patient having further measures taken would likely be negligible. There is also no data available to quantify this additional cost.

The cost of treatment in stage 1 is £8000 and in stage 4 is £10,050.[19] We used the cost at stage 4 as the upper bound of the cost for stages 2-4. As before, this is likely to decrease were a large-scale screening trial to take place and figures for individual stages used, as the cost of treatment in stage 2 is far lower than in stage 4.[19] We assumed the cost of unsuccessful treatment in stage 1 was the cost of successful treatment in stage 1 plus the average cost of treatment in the other stages, as if treatment is unsuccessful in stage 1 a patient is likely to have their tumor progress to later stages. The cost of treatment in stage 4 does not depend on success, as the treatment is just palliative care.

We ignored the cost of recurrence of a cancer: recurrences in cancer are complex and cannot be easily modelled. Furthermore, the earlier the cancer is detected, the lower the chance of recurrence. This means our sensor, which can detect NSCLC at stage 1, would drastically reduce this cost.

As Tammemägi's model gave the probability of cancer in the next 6 years, we calculated the cost effectiveness of screening each year for the next 6 years. Hence the cost of a true negative is the cost of our test times 6 (£18.66 × 6 ≈ £110).

Expected QALYs gained were calculated using 70 as a lower bound for life expectancy. The national life expectancy at birth in the UK is 81,[22] but for those who have NSCLC, and thus are likely to have smoked, life expectancy is at least 10 years lower.[23] The quality of life score is assumed to be 1, i.e. when someone is cured of NSCLC, the years of life gained are assumed to be in perfect health, or at least any factors reducing their quality of life were not considered to be due to NSCLC. Thus QALYs gained = 70 - current age. If an individual is over 66, QALYs gained are taken to be 5, as the statistics used were 5 year survival rates.

Cost per QALY was calculated by dividing expected cost by expected QALYs. If this is less than £30000, screening for the next 6 years is cost effective for that patient.
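A simplified sketch of that calculation, using the probabilities and costs from the table above (the example inputs at the end are illustrative, not one of our quoted results):

```javascript
// Expected cost per QALY of screening one individual over 6 years,
// using the probabilities and costs from the table above.
// pCancer would come from Tammemägi et al.'s model; 63% of detected cancers
// are assumed to be stage 1, the rest stages 2-4.
function costPerQALY(pCancer, age, specificity = 0.84) {
  const pTruePositive  = pCancer;
  const pFalsePositive = pCancer * (1 - specificity) / specificity;
  const pTrueNegative  = 1 - pTruePositive - pFalsePositive;

  const pStage1 = 0.63, pLater = 0.37;          // stage at detection
  const pCureStage1 = 0.48, pCureLater = 0.16;  // 5 year survival rates

  // Expected cost (£), weighting each outcome in the table by its probability
  const expectedCost =
    pTruePositive * pStage1 * (pCureStage1 * 8100 + (1 - pCureStage1) * 21000) +
    pTruePositive * pLater  * 13000 +            // same cost whether successful or not
    pFalsePositive * 470 +
    pTrueNegative  * 110;

  // Expected QALYs gained: only cured patients gain years, up to age 70
  const yearsGained = age > 66 ? 5 : 70 - age;
  const expectedQALYs =
    pTruePositive * (pStage1 * pCureStage1 + pLater * pCureLater) * yearsGained;

  return expectedCost / expectedQALYs;
}

console.log(costPerQALY(0.02, 55)); // illustrative inputs only
```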

When building this model we tried to use reasonable estimates to ensure we did not overestimate the cost effectiveness of our test.

You can download our spreadsheet here.

The model showed that it is cost effective to screen most people who have smoked.

The American Cancer Society recommends screening for people:[24]

  • between 55 and 77 years old,
  • who currently smoke or have quit smoking in the last 15 years
  • who have at least a 30 pack-year history of smoking

We entered the details for someone at the lowest thresholds: aged 55, quit smoking 15 years ago, with a 30 pack-year history of smoking, along with the average American's profile: male, white, no history of cancer or any lung diseases, college educated and a BMI of 27.[25] Screening them would cost £6900 per QALY, making our tool extremely cost effective against the NICE £20,000 per QALY threshold. It would therefore be cost effective, by our analysis, to screen people with even lower risks of developing NSCLC. This is mainly due to the low cost of our test and the significantly improved survival rates when NSCLC is diagnosed at earlier stages.

The one significant assumption we may have overestimated is sensitivity, as it is unlikely to be 100% in reality. However, for the same individual it would remain cost effective to screen even if sensitivity were reduced to 34% (£19,800 per QALY). This would never be ethically acceptable, given the number of cases of cancer that would be missed, but it demonstrates the incredible cost effectiveness of our sensor.

If the costs of recurrence, as estimated by Cancer Research UK, were taken into account - £16000 for stage 1 and £17000 as the average cost of other stages[19] - the cost per QALY would still only be £8300, and at 39% sensitivity would be £19,900 per QALY.

Our modeling therefore shows that our sensor would be cost effective in reducing mortality from NSCLC. The same principle on which this model is based could be applied to any cancer, screening those at sufficiently high risk using miRNA biomarkers.


References

  1. Zadeh, J. N., Steenberg, C. D., Bois, J. S., Wolfe, B. R., Pierce, M. B., Khan, A. R., ... & Pierce, N. A. (2011). NUPACK: analysis and design of nucleic acid systems. Journal of computational chemistry, 32(1), 170-173.
  2. Chrome Developers (2013, September 23). Saying Goodbye to Our Old Friend NPAPI. Chromium Blog. Retrieved October 9, 2017, from https://blog.chromium.org/2013/09/saying-goodbye-to-our-old-friend-npapi.html
  3. (2017). Competition/Deliverables/Wiki - 2017.igem.org. Retrieved October 1, 2017, from https://2017.igem.org/Competition/Deliverables/Wiki
  4. CreativeJS (2013, June 3). The race for speed part 1: The JavaScript engine family tree. Retrieved October 8, 2017, from http://creativejs.com/2013/06/the-race-for-speed-part-1-the-javascript-engine-family-tree/index.html
  5. Stögbauer, T. (2012). Experiment and quantitative modeling of cell-free gene expression dynamics. PhD thesis, Ludwig Maximilian University of Munich, Germany.
  6. Shin, J., & Noireaux, V. (2012). An E. coli cell-free expression toolbox: application to synthetic gene circuits and artificial cells. ACS synthetic biology, 1(1), 29-41.
  7. Karzbrun, E., Shin, J., Bar-Ziv, R. H., & Noireaux, V. (2011). Coarse-grained dynamics of protein synthesis in a cell-free system. Physical review letters, 106(4), 048104.
  8. SnapGene (n.d.). eGFP Sequence and Map. Retrieved October 8, 2017, from http://www.snapgene.com/resources/plasmid_files/fluorescent_protein_genes_and_plasmids/EGFP/
  9. Shin, J., & Noireaux, V. (2010). Study of messenger RNA inactivation and protein degradation in an Escherichia coli cell-free expression system. Journal of biological engineering, 4(1), 9.
  10. Furtado, A., & Henry, R. (2002). Measurement of green fluorescent protein concentration in single cells by image analysis. Analytical biochemistry, 310(1), 84-92.
  11. Qiagen PAXgene blood RNA tube price from a quote via email
  12. QIAGEN (n.d.). RNA Isolation Kit: RNeasy Mini Kit. Retrieved October 8, 2017, from https://www.qiagen.com/gb/shop/sample-technologies/rna/total-rna/rneasy-mini-kit/
  13. GOV.UK (2014, November 27). NHS reference costs 2013 to 2014. Retrieved October 8, 2017, from https://www.gov.uk/government/publications/nhs-reference-costs-2013-to-2014
  14. NHS Employers (2017). Agenda for Change: NHS Terms and Conditions of Service Handbook.
  15. Clarmyra Hayes (2012, August 12). Cell-Free Circuit Breadboard Cost Estimate. OpenWetWare. Retrieved October 8, 2017, from https://openwetware.org/wiki/Biomolecular_Breadboards:Protocols:cost_estimate
  16. Tammemägi, M. C., Katki, H. A., Hocking, W. G., Church, T. R., Caporaso, N., Kvale, P. A., ... & Berg, C. D. (2013). Selection criteria for lung-cancer screening. New England Journal of Medicine, 368(8), 728-736.
  17. National Institute for Clinical Excellence. (2012). Methods for the development of NICE public health guidance (third edition), 6.4.1
  18. National Lung Screening Trial Research Team. (2011). Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med, 2011(365), 395-409.
  19. Cancer Research UK (n.d.). Saving lives, averting costs. Retrieved October 6, 2017, from http://www.cancerresearchuk.org/sites/default/files/saving_lives_averting_costs.pdf
  20. Hennessey, P. T., Sanford, T., Choudhary, A., Mydlarz, W. W., Brown, D., Adai, A. T., ... & Califano, J. A. (2012). Serum microRNA biomarkers for detection of non-small cell lung cancer. PloS one, 7(2), e32307.
  21. American Cancer Society (2015). Cancer Treatment & Survivorship Facts & Figures 2014-2015.
  22. Office for National Statistics (2015). Life Expectancy at Birth and at Age 65 by Local Areas in England and Wales: 2012 to 2014. ONS, 3.
  23. (2016). CDC Fact Sheet: Tobacco-Related Mortality. Retrieved October 7, 2017, from http://www.cdc.gov/tobacco/data_statistics/fact_sheets/health_effects/tobacco_related_mortality/index.htm
  24. (2016). Who Should Be Screened for Lung Cancer? American Cancer Society. Retrieved October 7, 2017, from https://www.cancer.org/latest-news/who-should-be-screened-for-lung-cancer.html
  25. Centers for Disease Control and Prevention. (2003). National Health and Nutrition Examination Survey.