RNA-Seq is a molecular biology method used to quantify the RNA present in a specific sample. It is based on Next-Generation Sequencing (NGS) methods and allows the study of gene expression.
When DNA sequencing emerged in the 1970s, two main methods were developed: one by Walter Gilbert (USA) and another by Frederick Sanger (UK), who shared the Nobel Prize in Chemistry in 1980. The two approaches are quite different, and we will quickly sum them up.
From the 1990s onwards, several new methods were developed to increase the performance and decrease the cost of sequencing. These so-called NGS methods reduced costs and increased sequencing speed, and thus opened new perspectives for biologists. This year, for the iGEM competition, we focused on one NGS-based method, RNA-Seq (Figure 1).
Figure 1: RNA-Seq analysis. The first step is to extract mRNA from a sample. Then a specific enzyme called reverse transcriptase carries out the reverse transcription into cDNA. Finally, these cDNAs are sequenced using NGS methods.
As said previously, RNA-Seq makes it possible to quantify the RNA present in a cell at a particular time. With the development of NGS, a huge amount of data became available to scientists, who needed people able to process it: this is how bioinformaticians came into the picture. Computers can process large amounts of data much faster than humans, so many tools were developed to handle NGS outputs. For the competition we used some of these tools to study splicing in the organism C. elegans. Let's see how we proceeded!
In bioinformatics, sequence alignment is a way of arranging nucleic acid sequences relative to each other to identify similarities in structure or function. The sequences are laid out as the rows of a matrix and compared column by column; gaps can be inserted so that identical or similar characters end up aligned in successive columns. The organism studied here is C. elegans, and the purpose was to align RNA-Seq reads to its reference genome using the HISAT algorithm. RNA is transcribed from DNA sequences that are composed of alternating coding exons and non-coding introns, producing a pre-RNA that contains both the transcribed exons and introns.
From this pre-RNA, only the coding exons must be kept and the introns removed; this process is called splicing. Different combinations of exons can be brought together to produce different variants of the future protein, in a process called alternative splicing. It is these spliced RNA sequences that are then sequenced: to do so, they are reverse-transcribed into their complementary DNA (cDNA), and this cDNA is sequenced using NGS.
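The gapped-alignment idea described above can be sketched with a minimal global alignment in the Needleman-Wunsch style. The scoring values and sequences are arbitrary illustration choices; real read mappers such as HISAT use far more sophisticated, splice-aware algorithms.

```python
# Minimal sketch of gapped global alignment (Needleman-Wunsch style),
# illustrating how gaps let similar characters line up in columns.
# Scores (MATCH, MISMATCH, GAP) are made-up illustration values.

MATCH, MISMATCH, GAP = 1, -1, -1

def align(a, b):
    """Return the optimal global alignment score of sequences a and b."""
    # dp[i][j] = best score aligning the first i chars of a with the first j of b
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = i * GAP          # a aligned entirely against gaps
    for j in range(1, len(b) + 1):
        dp[0][j] = j * GAP
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = dp[i - 1][j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
            dp[i][j] = max(diag, dp[i - 1][j] + GAP, dp[i][j - 1] + GAP)
    return dp[-1][-1]

print(align("GATTACA", "GATCA"))  # → 3 (5 matches, 2 gaps)
```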
Current sequencing technologies split the large DNA molecules to be sequenced into small chunks called reads. These reads are mapped to the reference genome using algorithms like Bowtie. Because reads are short, some sequences can be redundant, i.e. present at different locations in the genome, which makes them hard to map. To circumvent this, a technique called paired-end sequencing is used: a cDNA fragment is sequenced from both extremities, in both directions (3’ to 5’ and 5’ to 3’, on the reverse strand). Because these two reads originate from the same fragment, the distance between them is known, and it is easier to map them: if a read can map to several locations, only one of them will have its mate mapping at the correct distance.
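This paired-end disambiguation can be sketched in a few lines. The positions, expected insert size, and tolerance below are made-up illustration values, not parameters of any real mapper:

```python
# Hypothetical sketch: using the known paired-end distance to resolve an
# ambiguous mapping. All numbers are made-up illustration values.

EXPECTED_INSERT = 300   # known distance between the two mates of a pair
TOLERANCE = 50          # allowed deviation around that distance

def resolve_ambiguous(read1_candidates, read2_position):
    """Keep only the read-1 positions whose mate lies at the expected distance."""
    return [
        pos for pos in read1_candidates
        if abs((read2_position - pos) - EXPECTED_INSERT) <= TOLERANCE
    ]

# Read 1 maps equally well at two loci, but only one is consistent with its mate.
print(resolve_ambiguous([1000, 52000], read2_position=1310))  # → [1000]
```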
When many reads cover a common region, that region of the genome is highly expressed: many RNA molecules were produced from it, and many reads map back to it.
Reads come as fastq files, which are formatted text files stored in the SRA (Sequence Read Archive) at the NCBI. The fastq files are produced by the sequencing machines and combine the fasta information (the raw nucleic acid sequence) with a quality score for each sequenced base. They can be downloaded programmatically using the fastq-dump tool from the SRA toolkit, by specifying the name of the SRA archive as well as the sequencing method (single- or paired-end), which produces one or two fastq files.
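The fastq format itself is simple: each record is four lines (identifier, sequence, separator, quality string). A minimal parser, on a made-up record rather than real SRA data:

```python
# Minimal sketch of the FASTQ format: each record is four lines —
# identifier, sequence, '+' separator, and per-base quality string.
# The record below is a made-up example, not real SRA data.

record = (
    "@SRR000001.1\n"
    "GATTACA\n"
    "+\n"
    "IIIIHHG\n"
)

def parse_fastq(text):
    """Yield (id, sequence, phred_qualities) tuples from FASTQ-formatted text."""
    lines = text.strip().split("\n")
    for i in range(0, len(lines), 4):
        ident, seq, _, qual = lines[i:i + 4]
        # Phred quality: ASCII code minus 33 (Sanger encoding)
        phred = [ord(c) - 33 for c in qual]
        yield ident[1:], seq, phred

for ident, seq, phred in parse_fastq(record):
    print(ident, seq, phred)
```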
These fastq files are the input of the HISAT software, a mapper based on Bowtie that performs the mapping of the reads onto the genome. HISAT was used with the parameters previously described in the work of Denis Dupuy that produced the reference junctions file (ref). HISAT outputs BAM files, the binary version of SAM files, which contain the mapping information, such as the genomic location of each read.
While some sequenced reads fall within the boundaries of an exon, others correspond to a sequence overlapping an exon-exon boundary. When mapped to the genome, these reads have the left part of their sequence at the end of the first exon and the right part further away, after the intron, at the beginning of the following exon. This is how a junction can be detected. It is not quite that simple, but algorithms like Bowtie can infer whether it is a true junction or not. Since alternative splicing produces different exon combinations through different junctions, a junction actually represents a specific spliced form of RNA.
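In practice, a spliced alignment records the skipped intron as an 'N' operation in the read's CIGAR string (SAM/BAM convention). A sketch with made-up coordinates shows how a junction's boundaries can be recovered from it:

```python
import re

# Sketch with made-up coordinates: in SAM/BAM, the intron skipped by a
# spliced read appears as an 'N' operation in the CIGAR string. For example
# "30M200N40M" means 30 aligned bases, a 200-base gap (the intron), then
# 40 aligned bases on the next exon.

def junction_from_cigar(start, cigar):
    """Return (intron_start, intron_end) for the first 'N' gap, or None."""
    pos = start
    for length, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar):
        length = int(length)
        if op == "N":            # skipped reference region = intron
            return pos, pos + length
        if op in "MD=X":         # operations that consume the reference
            pos += length
    return None

print(junction_from_cigar(1000, "30M200N40M"))  # → (1030, 1230)
```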
After the alignment and mapping step, we went through the junction extraction process. To perform this, we used two well-known tools, samtools and regtools. Samtools allowed us to index the BAM file output by the alignment and mapping steps. This is necessary because regtools, which extracts the junctions and generates a BED file, needs an indexed BAM file as input. After the junction extraction we ended up with a BED file containing all the junctions (spliced forms) of the sample.
The more reads map to a junction, the more often the two corresponding exons are associated. This is how we score a junction: by the number of reads mapping to it. However, the score alone is not sufficient to reflect the expression levels of the different variants, because it depends on the overall expression level of the gene and therefore cannot be used as a comparison variable. This is why we had to implement a usage-ratio calculation.
This step is the core of our pipeline. The calculation method was developed by Denis Dupuy (IECB, Pessac). It relies on the START and STOP positions of each junction; we talk about the acceptor (for the start) and the donor (for the stop). The first thing we had to do was to group together all the junctions sharing an acceptor or a donor. We then computed the ratio by dividing a junction's own score by the sum of the scores of its pool. At the end, each junction therefore has one acceptor ratio and one donor ratio. Finally, we kept the minimum of these two ratios because it is more robust: the lower the ratio, the higher the score in the denominator, and the better represented the competing junctions.
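The steps above can be sketched as follows. The junction coordinates and scores are made-up illustration data, and the grouping by shared start/stop follows the acceptor/donor naming used in the text:

```python
from collections import defaultdict

# Sketch of the usage-ratio idea with made-up junction data.
# Each junction is (start, stop, score); junctions sharing a start form an
# acceptor pool, junctions sharing a stop form a donor pool (naming as in
# the text above). The final ratio is the minimum of the two.

junctions = {
    "j1": (100, 500, 90),   # start, stop, read count (score)
    "j2": (100, 700, 10),   # shares its start (acceptor pool) with j1
    "j3": (300, 700, 40),   # shares its stop (donor pool) with j2
}

def usage_ratios(juncs):
    acceptor_tot = defaultdict(int)   # summed score per shared start
    donor_tot = defaultdict(int)      # summed score per shared stop
    for start, stop, score in juncs.values():
        acceptor_tot[start] += score
        donor_tot[stop] += score
    # keep the minimum of the two ratios: the more conservative estimate
    return {
        name: min(score / acceptor_tot[start], score / donor_tot[stop])
        for name, (start, stop, score) in juncs.items()
    }

print(usage_ratios(junctions))  # → {'j1': 0.9, 'j2': 0.1, 'j3': 0.8}
```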
The final output of this step is a CSV file containing the usage ratio of each junction. From this, we had to clean the data to keep only the junctions, for each gene, with a common acceptor/donor and a ratio equal to one. It was an important step because the CSV file contained all the junctions, even the very rare ones, which could not be separated from the background noise due to the RNA-Seq method or to errors of the splicing machinery. These correspond to the rare junctions identified in Denis Dupuy's work, i.e. junctions with a ratio below 1%.
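The rare-junction cut-off can be sketched as a simple filter; the ratios below are made-up values:

```python
# Sketch of the cleaning step: discard "rare" junctions whose usage ratio
# falls below 1%, treating them as background noise. Ratios are made up.

RARE_THRESHOLD = 0.01

ratios = {"j1": 0.90, "j2": 0.095, "j3": 0.008, "j4": 1.0}

kept = {name: r for name, r in ratios.items() if r >= RARE_THRESHOLD}
print(kept)  # j3 is below 1% and is removed
```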
The next step of the workflow was data visualisation. There are several ways to visualise data and make comparisons: in our case, we could either compare data between the different conditions present in the dataset, or use a reference file for C. elegans. Is there such a reference file for C. elegans splicing? There actually is one: Denis Dupuy, working with the method described previously, generated the much-needed C. elegans reference file for splicing events. We therefore had a reference to compare our data with, and this is exactly what we did. This reference, which gives the “normal” splicing ratio of each form, allowed us to follow the evolution of splicing in our data. The reference file is really important because, with it, we can say whether a spliced form is over- or under-represented under specific conditions, and that is what makes the method so powerful!
The results of the ratio calculation were then processed to extract some extra information. At the beginning, we simply plotted f(reference_ratio) = sample_ratio. Denis had the idea of calculating the distance and slope between related points in order to generate a new type of plot. We were actually not really happy with our plots: there were a lot of points and no way to focus on a specific gene without digging into the code to make the selection manually. That was not an option, so we asked ourselves: how could we represent our data to make them easy to use? As often in computing, other people had already thought about it and developed a really nice library called `plotly`. Using it, we were able to generate beautiful, and moreover interactive, plots. The user can now easily navigate the data and visualise whatever they want. There is still one step left: the interpretation of our data.
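One plausible reading of the distance-and-slope idea is sketched below, with made-up ratios; this is our illustrative interpretation, not the exact formulas used in the pipeline. Each junction gives a point (reference_ratio, sample_ratio): points on the diagonal behave like the reference, and the distance to the diagonal together with the slope through the origin summarise how far a junction deviates.

```python
import math

# Illustrative sketch (made-up ratios, assumed formulas): quantify how far
# each junction's (reference_ratio, sample_ratio) point lies from the y = x
# diagonal, where splicing is unchanged relative to the reference.

points = {"j1": (0.90, 0.88), "j2": (0.10, 0.45)}

def deviation(ref, sample):
    """Return (distance to the diagonal, slope through the origin)."""
    dist = abs(sample - ref) / math.sqrt(2)      # perpendicular distance to y = x
    slope = sample / ref if ref else float("inf")
    return dist, slope

for name, (ref, sample) in points.items():
    print(name, deviation(ref, sample))
```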
Since the biology team had not produced any RNA-Seq results, we had to choose a training dataset from Mae et al, composed of stage- and muscle-specific RNA-Seq reads: a very useful asset for detecting tissue-specific splicing patterns.
To produce the different plots, we used RStudio (a GUI for R) in combination with the ggplot2 and plotly packages, which allow the generation of pretty plots. We obtained several graphs, some of which are presented below.
If the biology team had produced a modified C. elegans worm, we would have been interested in checking whether the splicing of other genes was impacted by the genetic construct. We therefore compared muscle and neuron alternative splicing patterns in order to identify specific genes that could be responsible for the differentiation of one of the tissues studied.
It would also have been possible to compare RNA-Seq samples from our worms to neuron- or muscle-specific wild-type (WT) patterns and detect modified junction usage.