Team:UNIFI/Software

Our software

1. Introduction

Bachteria isn’t a single piece of code, but an entire ensemble of tools working together to bring to light the hidden music in bioluminescence.

Keeping in line with the ensemble analogy, these are its sections:
1. Reception
2. Validation
3. Elaboration
4. Rendering

For each section of this ensemble, a chapter describes the instruments involved.

2. Reception

After bioluminescence measurements are taken with a TECAN Infinite Pro 200 apparatus, an XLS file is generated.

As the ensemble resides within the Google Cloud theatre, the instrument’s output must be conveyed to the system proper: without yielding an inch on security, we found a web interface to be a far more convenient option for the general user than SSH-based transfers, especially in light of the open nature of this project.

Therefore, two instruments are brought in to feed every other section: nginx (1.10.3) and PHP (7.0). Due to the extremely lightweight nature of this task, no PHP framework was employed to accomplish this step.
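
To give a flavour of how small this reception step is, here is a rough Python analogue of such an upload endpoint (the real one is a handful of plain PHP behind nginx); it assumes the client POSTs the raw XLS bytes, and the port and destination path are invented for illustration:

    # Illustration only: a minimal HTTP endpoint that stores a POSTed .xls file
    # where the rest of the pipeline can pick it up (path and port are made up).
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class UploadHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)            # raw XLS bytes from the client
            with open("/tmp/upload.xls", "wb") as f:  # hypothetical drop folder
                f.write(body)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"received")

    HTTPServer(("", 8080), UploadHandler).serve_forever()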

3. Validation

Once the input has reached the system, an entire section makes sure every part keeps the rhythm: this is accomplished once more through PHP, specifically through PHPExcel, a library licensed under the LGPL.
While more recent libraries were available, such as PhpSpreadsheet, they broke backward compatibility in order to ensure higher code quality, and many are far from being as stable or complete.
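
The validation itself lives in PHP, but to sketch the kind of sanity checks this section performs, here is an illustrative Python version using xlrd (the same library the elaboration section relies on); the single-sheet layout is an assumption:

    # Illustration only: the real checks run in PHP via PHPExcel. Accept the
    # upload only if it opens as an XLS workbook with a non-empty first sheet.
    import xlrd

    def is_valid_upload(path):
        try:
            book = xlrd.open_workbook(path)
        except xlrd.XLRDError:
            return False                          # not a readable Excel file at all
        if book.nsheets == 0:
            return False                          # no sheets: nothing to validate
        return book.sheet_by_index(0).nrows > 0   # at least one row of readings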

4. Elaboration

Now that we’ve made sure every instrument is in tune, it is time to rehearse!

This is the Python (3.5.3) section, with python-midi playing first cello: released under the MIT license, it allows us to actually write Musical Instrument Digital Interface (MIDI) files!
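
As a minimal sketch of what writing such a file looks like (not the project’s actual code, and assuming python-midi is importable as midi), a single chord can be laid down like this:

    # Minimal sketch: write one C major chord to a .mid file with python-midi.
    # Pitches, velocity and tick values are arbitrary illustration choices.
    import midi

    pattern = midi.Pattern()                  # a MIDI file is a pattern of tracks
    track = midi.Track()
    pattern.append(track)

    chord = (midi.C_4, midi.E_4, midi.G_4)
    for pitch in chord:                       # strike the three notes together
        track.append(midi.NoteOnEvent(tick=0, velocity=90, pitch=pitch))
    for i, pitch in enumerate(chord):         # release them after 100 ticks
        track.append(midi.NoteOffEvent(tick=100 if i == 0 else 0, pitch=pitch))

    track.append(midi.EndOfTrackEvent(tick=1))
    midi.write_midifile("example.mid", pattern)   # hypothetical output name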

However, to write these files we first need to parse the data that was previously validated: it’s still in Excel format! Thankfully, this is the jazz xlrd excels at. Released under a BSD-style license, this library allows us to load into memory lists containing every reading made by the TECAN apparatus we started with.
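
A sketch of that loading step might look like the following (file name and sheet layout are assumptions); every numeric cell of the first worksheet ends up in plain Python lists:

    # Sketch of the loading step with xlrd: collect the numeric cells of the
    # first worksheet into lists, skipping header and label rows.
    import xlrd

    book = xlrd.open_workbook("tecan_readings.xls")   # hypothetical file name
    sheet = book.sheet_by_index(0)

    readings = []
    for row in range(sheet.nrows):
        values = [
            sheet.cell_value(row, col)
            for col in range(sheet.ncols)
            if sheet.cell_type(row, col) == xlrd.XL_CELL_NUMBER
        ]
        if values:                            # keep only rows that contain readings
            readings.append(values)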

To actually explore this data, we employ pandas and its powerful DataFrames to reduce the reading granularity, from roughly 1/40,000 down to 1/10.
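
The exact grouping is a design choice of the pipeline; as a rough sketch of the idea, a long series of readings can be collapsed into ten averaged bins like so (the dummy data only stands in for the values loaded above):

    # Rough sketch of the granularity reduction with pandas: bin ~40,000
    # readings into 10 groups and keep the mean of each group.
    import numpy as np
    import pandas as pd

    values = np.random.rand(40000)                  # dummy stand-in for the real readings
    series = pd.Series(values)
    bins = pd.cut(series.index, 10, labels=False)   # 10 equal-width bins over time
    reduced = series.groupby(bins).mean()           # one averaged reading per bin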

This allows us to group together similar readings and run the MIDI part on a much leaner musical set: once the data is ready, each value is mapped to one of 10 chords, which is then written to the output file (future versions will allow the user to choose the chords, either ad libitum or from different music theory constructs).
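
The chord table itself is up to the composer; the mapping step, though, boils down to scaling each reduced value onto an index between 0 and 9, roughly as in this sketch (the chords listed here are purely illustrative):

    # Illustrative mapping: scale each averaged reading onto one of 10 chords.
    # The chord table (MIDI pitch triples) is a made-up placeholder.
    CHORDS = [
        (60, 64, 67), (62, 65, 69), (64, 67, 71), (65, 69, 72), (67, 71, 74),
        (69, 72, 76), (71, 74, 77), (72, 76, 79), (74, 77, 81), (76, 79, 83),
    ]

    def value_to_chord(value, low, high):
        index = int((value - low) / (high - low) * (len(CHORDS) - 1))
        return CHORDS[index]

    reduced = [0.12, 0.34, 0.56, 0.78, 0.91]  # stand-in for the averaged readings
    chords = [value_to_chord(v, min(reduced), max(reduced)) for v in reduced]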

5. Rendering

MIDI files, on their own, have no sound. While this may sound false, since most users have had the opportunity to play one in their favorite software, what actually happens is this: the software picks the instrument samples from a default soundbank, renders the MIDI file with them, and returns that processed output to the user.

In order to achieve sound, and not merely a representation of it, the rendering section takes care of the grand finale!
Through FluidSynth, which is released under the GNU Lesser General Public License v2.1, a WAV file is generated. Sound, at last!
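
In practice FluidSynth can be driven as a command-line synthesizer; a sketch of that call from Python (SoundFont path and file names are assumptions) looks like this:

    # Sketch of the rendering call: fast-render the MIDI file to WAV with
    # FluidSynth, using a SoundFont for the instrument samples.
    import subprocess

    subprocess.check_call([
        "fluidsynth", "-ni",      # no MIDI input, no interactive shell
        "soundfont.sf2",          # hypothetical SoundFont with the samples
        "song.mid",               # the MIDI file from the elaboration section
        "-F", "song.wav",         # render straight to this WAV file
        "-r", "44100",            # CD-quality sample rate
    ])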

However, WAV files are heavy: while they are a lossless representation of the sound played by the virtual instruments, for many users a high-quality lossy format, such as MP3, will be more than enough.
The LAME encoder, licensed under the GNU Lesser General Public License v2.1, does exactly this, providing a near-transparent listening experience thanks to its GPSYCHO psychoacoustic and noise-shaping model. Furthermore, it is probably the most actively developed MP3 encoder, if not the only one still in active development.
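
The encoding step is a single call as well; a sketch (file names assumed, -V2 being a common high-quality variable-bitrate setting):

    # Sketch of the MP3 encoding call: compress the rendered WAV with LAME
    # at a high-quality variable-bitrate setting.
    import subprocess

    subprocess.check_call(["lame", "-V2", "song.wav", "song.mp3"])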

All that is left is for you to enjoy!

Team Unifi

unifi.igem@gmail.com