Wednesday 19 June 2013

qPCR normalisation

This post will discuss the normalisation of qPCR results using both the Delta Delta CT (Livak) and Standard Curve (Pfaffl) methods as applied to qPCR of RNA and ChIP.  It will also discuss some of the common pitfalls when considering replicates of qPCR data.

While this isn't a cutting edge technique, the motivation for this post is the number of qPCR spreadsheets I see that get this wrong.

Delta Delta CT


The Livak method is more commonly known as the "Delta Delta CT" (ΔΔCT) method. It makes one important assumption about the PCR: the amplification efficiencies of the reference (control) gene and the target gene of interest must be approximately equal.  Specifically, Delta Delta CT assumes that each PCR cycle exactly doubles the amount of material in your sample (amplification efficiency = 100%).

$$$ ΔΔCT = ΔCT (untreated sample) - ΔCT (treated sample) $$$

where $$$ ΔCT(sample) = CT(target) - CT(ref) $$$, therefore

$$$ ΔΔCT = (CT(target,untreated) - CT(ref,untreated)) - (CT(target,treated) - CT(ref,treated)) $$$

where

$$$CT(target,untreated)$$$ = CT value of gene of interest in untreated sample
$$$CT(ref,untreated)$$$ = CT value of control gene in untreated sample
$$$CT(target,treated)$$$ = CT value of gene of interest in treated sample
$$$CT(ref,treated)$$$ =  CT value of control gene in treated sample

We can then calculate the ratio of our target gene in our treated sample relative to our untreated sample by taking $$$2^{ΔΔCT}$$$.

A quick worked example:

                 Untreated    Treated
    Ref Gene     16.17        15.895
    Target Gene  21.225       19.763


$$$$ΔΔCT = (CT(target,untreated) - CT(ref,untreated)) - (CT(target,treated) - CT(ref,treated))$$$$
$$$$ΔΔCT = (21.225 - 16.17) - (19.763 - 15.895)$$$$
$$$$ΔΔCT = (5.055) - (3.868)$$$$
$$$$ΔΔCT = 1.187$$$$
$$$$2^{ΔΔCT} = 2^{1.187} = 2.277$$$$

So our gene of interest is increased 2.277-fold in our treated sample relative to our untreated sample.
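If you'd rather sanity-check this in code than in a spreadsheet, here's a minimal sketch of the whole calculation in Python (the CT values are the ones from the table above):

    # Delta Delta CT using the CT values from the worked example
    ct_ref_untreated, ct_ref_treated = 16.17, 15.895
    ct_tgt_untreated, ct_tgt_treated = 21.225, 19.763

    # dCT within each sample: target normalised to the reference gene
    dct_untreated = ct_tgt_untreated - ct_ref_untreated   # 5.055
    dct_treated = ct_tgt_treated - ct_ref_treated         # 3.868

    ddct = dct_untreated - dct_treated                    # 1.187
    ratio = 2 ** ddct                                     # ~2.277, treated vs untreated
    print(ddct, ratio)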

The exact order in which you do the subtractions doesn't actually make a huge difference, and you'll probably see other people do it slightly differently.  Consider these possibilities:

ΔUntreated vs ΔTreated

$$$ΔΔCT = (CT(target,untreated) - CT(ref,untreated)) - (CT(target,treated) - CT(ref,treated))$$$
$$$ΔΔCT = (21.225 - 16.17) - (19.763 - 15.895)$$$
$$$ΔΔCT = 1.187$$$

$$$ΔΔCT = (CT(ref,treated) - CT(target,treated)) - (CT(ref,untreated) - CT(target,untreated))$$$
$$$ΔΔCT = (15.895 - 19.763) - (16.17 - 21.225)$$$
$$$ΔΔCT = 1.187$$$

$$$ΔΔCT = (CT(ref,untreated) - CT(target,untreated)) - (CT(ref,treated) - CT(target,treated))$$$
$$$ΔΔCT = (16.17 - 21.225) - (15.895 - 19.763)$$$
$$$ΔΔCT = -1.187$$$

$$$ΔΔCT = (CT(target,treated) - CT(ref,treated)) - (CT(target,untreated) - CT(ref,untreated))$$$
$$$ΔΔCT = (19.763 - 15.895) - (21.225 - 16.17)$$$
$$$ΔΔCT = -1.187$$$


ΔReference Gene vs ΔTarget Gene


$$$ΔΔCT = (CT(target,untreated) - CT(target,treated)) - (CT(ref,untreated) - CT(ref,treated))$$$
$$$ΔΔCT = (21.225 - 19.763) - (16.17 - 15.895)$$$
$$$ΔΔCT = 1.187$$$

$$$ΔΔCT = (CT(ref,treated) - CT(ref,untreated)) - (CT(target,treated) - CT(target,untreated))$$$
$$$ΔΔCT = (15.895 - 16.17) - (19.763 - 21.225)$$$
$$$ΔΔCT = 1.187$$$

$$$ΔΔCT = (CT(ref,untreated) - CT(ref,treated)) - (CT(target,untreated) - CT(target,treated))$$$
$$$ΔΔCT = (16.17 - 15.895) - (21.225 - 19.763)$$$
$$$ΔΔCT = -1.187$$$

$$$ΔΔCT = (CT(target,treated) - CT(target,untreated)) - (CT(ref,treated) - CT(ref,untreated))$$$
$$$ΔΔCT = (19.763 - 21.225) - (15.895 - 16.17)$$$
$$$ΔΔCT = -1.187$$$

As long as you stay consistent with the two ΔCT subtractions (i.e. always subtract treated from untreated, or reference gene from target gene, or vice versa) then the magnitude of the ΔΔCT part will always be the same.  The only thing that will change is the sign.

This is why you'll see some people will calculate the expression ratio as $$$2^{ΔΔCT}$$$ and others will do it as $$$2^{-ΔΔCT}$$$.  The difference is basically just how they set up their initial equation - effectively which direction they are comparing the samples.

To see why Delta Delta CT works we have to consider what's actually going on under the hood.

The amount of starting material in each sample for each primer pair is inversely proportional to $$$2^{CT}$$$ (each additional cycle needed to reach the detection threshold means half as much starting template).  We normalise our target gene to a reference gene within each sample to ensure that we don't have any systematic errors due to differences between the samples (an internal control).

So within each sample we form the quantity $$$2^{CT(target)} / 2^{CT(ref)}$$$.  (Strictly speaking this is inversely related to the target:reference abundance ratio, since higher CTs mean less material, but the same inversion occurs in both samples and cancels in the final quotient.)

However, because $$$\frac{b^c}{b^d} = b^{c-d}$$$ we can rewrite this as $$$2^{CT(target) - CT(ref)}$$$.

We then compare the two samples by taking the quotient of these within-sample quantities as follows:

$$$$Ratio = {2^{CT(target,untreated) - CT(ref,untreated)} \over 2^{CT(target,treated) - CT(ref,treated)}}$$$$

which by the same identity rule we applied before equals $$$2^{ΔΔCT}$$$, where

$$$ΔΔCT = (CT(target,untreated) - CT(ref,untreated)) - (CT(target,treated) - CT(ref,treated))$$$.
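As a quick numerical check of this identity with the worked-example CTs:

    # Quotient of the two normalised quantities vs 2^ddCT - both ~2.277
    numerator = 2 ** (21.225 - 16.17)     # untreated: 2^(CT(target) - CT(ref))
    denominator = 2 ** (19.763 - 15.895)  # treated
    print(numerator / denominator, 2 ** 1.187)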


Standard Curve

An improvement to the Delta Delta CT method was introduced by Pfaffl to account for amplification efficiencies deviating from the theoretical 100% efficient reaction.

To measure how efficient our PCR is for a given amplicon we run a template dilution series and see how closely the real reaction matches the idealised one.

If we run a dilution series of (0.25, 0.5, 1, 2) we would expect a one-CT difference between successive dilutions in the ideal 100% efficient reaction, as shown:

    CT Value    Concentration
    36          0.25
    35          0.5
    34          1
    33          2

However, if our actual measured CT values indicate a larger difference then our PCR reaction has been less efficient than we hoped.

    CT Value    Concentration
    36          0.25
    34.9        0.5
    33.8        1
    32.7        2

We can work out exactly how much less efficient by regressing the CT values against the log of the concentration.

This can be done on any log scale, although commonly it'll be either $$$log_2$$$ or $$$log_{10}$$$; the calculated efficiency comes out the same either way.  We will use $$$log_2$$$ from now on since it fits well with the idea of PCR doublings.



What we are interested in is the slope of the trend line.  In the case above the slope is -1.1.  We can then work out the efficiency of the reaction as

$$$Efficiency = 2^{-1/slope} = 2^{-1/-1.1} = 1.878$$$

Therefore, each PCR cycle generates 1.878 copies of our template rather than the theoretical 2 copies of the ideal case.  The efficiency is often also represented on a 0-1 scale, obtained by subtracting 1 from the value calculated above (here 0.878, i.e. 87.8%).
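As a sketch of this calculation, using the dilution series from the table above (numpy's polyfit does the straight-line fit):

    import numpy as np

    # Dilution series from the table above
    concentration = np.array([0.25, 0.5, 1, 2])
    ct = np.array([36, 34.9, 33.8, 32.7])

    # Regress CT against log2(concentration); the slope is the first coefficient
    slope, intercept = np.polyfit(np.log2(concentration), ct, 1)

    efficiency = 2 ** (-1 / slope)
    print(slope, efficiency, efficiency - 1)   # -1.1, ~1.878, ~0.878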

If we run a standard curve for each primer set we have (reference gene and target gene) then we can incorporate the measured efficiencies into our Delta Delta CT equation to get:

$$$$Ratio = {Efficiency(Target)^{CT(target,untreated) - CT(target,treated)} \over Efficiency(Ref)^{CT(ref,untreated) - CT(ref,treated)}}$$$$

Note that the order of subtraction matters more here: we must make sure that the exponent of each calculated efficiency contains only the CTs which were produced by that primer pair.
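As a sketch, here is the ratio computed with the CT values from the earlier worked example and two assumed efficiencies (the 1.878 from the standard curve for the target primers and a made-up 1.95 for the reference primers):

    # Per-primer-pair efficiencies; 1.95 is a hypothetical value for illustration
    eff_target, eff_ref = 1.878, 1.95

    # CT values from the ddCT worked example
    ratio = (eff_target ** (21.225 - 19.763)) / (eff_ref ** (16.17 - 15.895))
    print(ratio)   # ~2.1, vs the 2.277 given by the 100% efficiency assumption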

ChIP-PCR

Unlike RT-PCR for gene expression quantitation, ChIP-PCR uses a single primer pair per region of interest.  However, we will usually also include an Input sample against which the ChIP sample is compared.

Input usually comprises a huge amount of DNA, so it is generally necessary to take only a fraction of the Input as the actual sample to PCR; the amount actually used may be between 1% and 10%.

To account for this we should apply an input adjustment.  This involves calculating a dilution factor equal to $$$1 / {Fraction Input}$$$.  For example, if you have 1% input then your dilution factor (DF) is 1/0.01 = 100.  We then calculate our CT correction factor as $$$log_{Efficiency}(Dilution Factor)$$$.

Worked example:  5% input is a DF of 1/0.05 = 20.  For the standard curve described above this is a CT correction of $$$log_{1.878}(20) = 4.75$$$.  This is then subtracted from each of the Input CTs before continuing as per the Standard Curve approach described above.

Because we only have a single primer pair we can use the same efficiency for the PCR throughout which allows us to simplify the Standard Curve approach to $$$Efficiency^{ΔΔCT}$$$.

Often you'll want to represent your result as a percent of input and this can be calculated for each condition as $$$Efficiency^{ΔCT}$$$.
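Here's a sketch of the input adjustment and percent-input calculations, reusing the 5% input and the 1.878 efficiency from the worked example (the raw CT values themselves are made up for illustration):

    import math

    efficiency = 1.878
    fraction_input = 0.05                                  # 5% input
    dilution_factor = 1 / fraction_input                   # 20

    # CT correction: log base efficiency of the dilution factor (~4.75)
    ct_correction = math.log(dilution_factor, efficiency)

    # Hypothetical mean CTs for one region
    ct_input_raw, ct_chip = 30.0, 28.0
    ct_input = ct_input_raw - ct_correction                # adjusted input CT

    # Percent of input on a 0-1 scale: Efficiency^(input CT - ChIP CT)
    percent_input = efficiency ** (ct_input - ct_chip)
    print(ct_correction, percent_input)                    # ~4.75, ~0.18 (18%)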

Replicates & Error Propagation

One place that a number of mistakes seem to creep in is the treatment of replicates within PCR experiments.

The most frequent mistake I've seen is the use of the wrong type of mean when calculating the average ratio.  The key fact to remember is to always use the arithmetic mean on anything on a linear scale and always to use the geometric mean on anything on an exponential scale.

More concisely, if it's CT values (or differences thereof), then use an arithmetic mean.  If it's concentration values (anything that is $$$Efficiency^{CT}$$$) then use the geometric mean.
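For example, to average replicate fold changes you should average the ΔΔCT values (or, equivalently, take the geometric mean of the ratios) rather than taking the arithmetic mean of the ratios.  A sketch with made-up replicate values:

    import numpy as np

    ddcts = np.array([0.5, 1.2, 1.9])            # hypothetical replicate ddCT values
    ratios = 2.0 ** ddcts

    wrong = ratios.mean()                        # arithmetic mean of ratios: ~2.48
    right = ratios.prod() ** (1 / len(ratios))   # geometric mean: ~2.30
    print(wrong, right, 2 ** ddcts.mean())       # the last two are identical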

I also frequently encounter incorrect treatment of error bars.  Often people will take the standard deviation (or standard error) of concentration data, or ratios, and depict them directly in their bar charts.  However, these metrics assume normally distributed data, which is only the case for the CT values themselves, not the ratios or concentrations.  Briefly: if the y axis of your chart is on a linear ratio scale and your error bars are the same size on both sides, then something is wrong.

Finally we should deal with error propagation.  The error in your experiment combines the errors of $$$(Untreated,Ref) + (Untreated,Target) + (Treated,Ref) + (Treated,Target)$$$.  This will usually be larger than the error implied by simply taking the standard error of the ΔΔCT values across replicates.  However, for uncorrelated variables we can propagate additive errors using the formula:

$$$Error(a+b) = \sqrt{ Error(a)^2 + Error(b)^2 }$$$

This works well for Delta Delta CT as the total error is just the combination of multiple additive errors.  However, for Standard Curve approaches we also have error from the Efficiency values we calculated for our curve, and this can't be expressed as an additive factor.

Instead we need to use a Taylor series to propagate the error.  This allows all six sources of variance (4x CTs + 2x Efficiencies) to be included in the final error calculation.
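For the ΔΔCT case, a sketch of the additive propagation (the standard errors here are hypothetical, and the Taylor series machinery for the Standard Curve case isn't shown):

    import math

    # Hypothetical standard errors of the four mean CT values:
    # (untreated,ref), (untreated,target), (treated,ref), (treated,target)
    se_cts = [0.12, 0.20, 0.10, 0.15]

    # ddCT is a sum/difference of the four CTs, so the errors add in quadrature
    se_ddct = math.sqrt(sum(se ** 2 for se in se_cts))   # ~0.29

    # Error bars belong on the (normally distributed) ddCT scale; convert to
    # the ratio scale only at the end, which gives asymmetric bars
    ddct = 1.187
    lower, upper = 2 ** (ddct - se_ddct), 2 ** (ddct + se_ddct)
    print(se_ddct, lower, upper)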

Control Gene Choice

One final factor is the choice of a suitable housekeeping gene as a control.  If you choose a reference housekeeping gene whose expression changes wildly between your untreated and treated samples then the results for your target gene compared to this gene will be wrong.

One approach to reduce the effect of this is to use the geometric mean of multiple housekeeping genes as your reference instead of choosing one (see the sketch below).  Even so, it's important to choose your reference genes carefully for stability in your particular experiment.
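As a sketch, here is how a combined reference value can be formed from several housekeeping genes (the CT values are hypothetical, and 100% efficiency is assumed so that quantities are $$$2^{-CT}$$$):

    import numpy as np

    # Hypothetical CTs for three housekeeping genes in one sample
    ref_cts = np.array([18.2, 21.7, 15.9])

    # Geometric mean of the relative quantities (Vandesompele et al.)
    quantities = 2.0 ** -ref_cts
    geo_mean = quantities.prod() ** (1 / len(quantities))

    # Back on the CT scale this is just the arithmetic mean of the CTs
    ref_ct = -np.log2(geo_mean)
    print(ref_ct, ref_cts.mean())   # identical: 18.6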

Software

Another approach is just to use some of the software that's out there already.  One that's well used is REST 2009 which supports virtually anything you'll want to do.  Similarly there are open source solutions such as pyQPCR.  If you are willing to pay money you can also use something like GenEx.

If you do decide to make your own spreadsheet you should, at the very least, confirm that the answers you get from there are the same as the answers that you get from these programs.

I've uploaded a spreadsheet which illustrates most of the things I've talked about in this post.  It has examples for calculating standard curves, ΔΔCT, Standard Curve and Input Correction for ChIP-PCR.  It also has worked examples of error propagation for both ΔΔCT (via simple additive error propagation) and for Standard Curves (via a Taylor series).

Realistically though, for almost all cases, and particularly anything RT-PCR related, I recommend using something off the shelf like REST 2009.  There aren't many reasons you need to make your own spreadsheets - unless of course you are trying to explain how PCR normalisation works in a blog post.

Note that I've previously had errors in this spreadsheet myself.  I take this as a sign of just how easy it is to make these mistakes and also a reminder that there could well be errors still lurking in this sheet (and in any other spreadsheet or software you've used). Thanks most recently to Duncan Brian (UCL) for highlighting that the SE from the Taylor Series I took from REST384 was inversely correlated to the SE of the individual components. I've updated the sheet to include both the taylor series which matches the output of REST 384 and also the Taylor series described in the Gene Quantification Platform.

References

Gene Quantification Platform, Real Time PCR amended, A useful new approach? Statistical problems? http://www.gene-quantification.com/avery-rel-pcr-errors.pdf, Last Accessed 2017.

Livak, Kenneth J., and Thomas D. Schmittgen. Analysis of Relative Gene Expression Data Using Real-Time Quantitative PCR and the $$$2^{-ΔΔCT}$$$ Method. Methods 25.4 (2001).

Pfaffl, Michael W. A new mathematical model for relative quantification in real-time RT–PCR. Nucleic Acids Research 29.9 (2001).

Vandesompele, Jo, et al. Accurate normalization of real-time quantitative RT-PCR data by geometric averaging of multiple internal control genes. Genome Biology 3.7 (2002).

82 comments:

  1. Hi there,

    Thank you for your article. It is very helpful. I have a question. I have more than one housekeeping gene, 2 genes at the moment. How can I use both genes to calculate the efficiency of housekeeping genes? I think the qbase software can do it, but our project is on a budget now and I just hope there will be simple (or rather complicated) equations to be able to calculate it too. If the software can do it, there must be a way to calculate it manually too. Thank you.

    Chat

    1. Hi,

      You certainly can. The reference for this is Vandesompele, Jo, et al. Accurate normalization of real-time quantitative RT-PCR data by geometric averaging of multiple internal control genes. Genome biology 3.7 (2002).

      Essentially you can take the geometric mean of your housekeeping genes and simply use that instead of the individual control gene.

      It's also implemented in REST 2009 (just select multiple genes as the control gene). REST is free (although you might need to make an account) from the Qiagen website (filter results by software then by analysis software, or just search for "rest").

      Tony

  2. Hey,

    very well written article! I'm still having problems with calculating error propagation. I could basically understand the calculations in your uploaded spreadsheet, but in your example only one Ref-Gene is considered! How do I account for errors if e.g. 2 Ref-Genes are used, and these are averaged before the first dCT is calculated? How do I have to calculate so that the individual std errors or deviations of the Ref-Genes don't get lost after averaging and calculating dCT? Hope it is clear what my problem is!

    1. Hi,

      I've tried to keep the spreadsheet simple (for some version of the term simple at least) so I did skip over that aspect a little bit.

      However, the way you would go about doing it is to calculate the average CT and error for each of your reference genes (A +- Error(a) and B +- Error(b)). Then find the geometric mean (which is then used as the reference gene CT as per Vandesompele et al.). You can then find the error of your reference gene by doing:

      $$$Error(g(a,b)) = {1 \over 2g } \sqrt{ (b \cdot Error(a))^2 + (a \cdot Error(b))^2 }$$$

      This subsequent error can then be incorporated into the remaining equation as per normal (being substituted for the error of the reference gene) to allow it to be incorporated into overall calculation.

      I had to search for the correct geometric mean error propagation formula, so credit to joriki at Stack Exchange for the formula (which also has the generalised form for n observations).

      Tony

  3. Thanks for your comment! If I get this right, a = mean CTs Ref a and error a = std err of mean Ref a, while 2g = 2*gmean of Ref a,b, right? So if want to just calculate the sub err of the first dCT (GOI-Ref) for condition x, that would be: squroot (Error(g/(a,b))*Error(g/(a,b))+((std err c)*(std err c)), c being the target gene? I just incorporated the sub err of ref genes a,b into the equation you mentioned in you upper text (replicates and error propagation)!? If this is right, how to calculate error propagation if one uses the arithmetic mean of refgenes a,b and not the gmean? Since the arithmetic mean is used in genex! and as you mentioned in your text, one should always use arithmetic mean, when data is in log scale - Cts are on log scale! What do you say? And thanks already for your answer!

    1. That looks about right.

      I've updated my spreadsheet linked above to include a sheet for 2 reference genes in the DDCT case to show a worked example. I've done this quite quickly but it looks about right.

      I'd recommend against using the arithmetic mean of two reference genes. The distribution of CT values for a single gene will be approximately normally distributed so the arithmetic mean is the correct choice there. However, the distribution of CT values for different genes will not be normally distributed and is closer to a geometric distribution (i.e. a small number of genes will be responsible for a large percentage of the RNA in a cell). Therefore, when averaging multiple genes it is appropriate to use the geometric mean (since that is the underlying distribution for gene expression).

      Tony

    2. Thanks for updating your sheet! It's clear now how to use gmean in err prop.! But one more question: if I would rather prefer using arith. mean of Ref.-Genes, sub err would be calculated using: (squroot((Error(a)^2)+(Error(b)^2)))/2, right? Assuming several replicates, on let's say the PCR and RT level, have been measured, the sub err, after averaging these one after another would be calculated by basically using the same equation (sub err arith mean) each time, right? Since replicates are averaged using arith mean.! I just want to make sure using the right equations and the right order of preprocessing to finally plot proper error bars!

    3. I think that would be fine for averaging repeated additive errors but I'd still recommend you consider geometric mean for averaging different genes for use as a reference. See Vandesompele et al. for the full justification.

      Tony

  4. Thanks Tony! I'll have a look at the paper!

  5. Hi Tony,
    Your spreadsheet helped me a lot with my qPCR calculations. I recently decided to use another reference gene. My question is how can I get the ratio of the genes if I am using two reference genes using the standard curve method? I cannot use the ddCt method because my primers have different efficiencies. So, do I need to calculate the geometric average of the Eff values for the reference genes?
    Thanks in advance,

    1. To be honest, I'm not aware of a robust general technique for combining two reference genes which have different efficiencies. This is basically because there aren't any log/exponent rules for combining two things of the form qb^x which have both different bases and different exponents.

      REST 2005+ seems to do it by taking the geometric mean of the absolute quantities (eff^Ct), which seems to use the arithmetic mean of the efficiency values. I'm not sure it's ideal but I can't suggest anything better. If you want to do this I'd probably just use REST 2009 directly.

      I'd be very interested if anyone can demonstrate a way of doing this in the general case though.

    2. Thank you for this excellent guide!

      I have REST-MCS Version 2 (very old Excel-based spreadsheet version with protected hidden macros, copyright 2005-2006, so probably what you mean by REST 2005), and I worked out a spreadsheet trying to duplicate its results to figure out how the software used the different efficiencies of my four reference genes. I worked it four different ways, and the one that yielded the same results as my REST-V2 was as follows:
      1) Calculate delta-Ct for each reference gene and target gene separately: delta-Ct = (MeanCt-control - MeanCt-Tx)
      2) Calculate Eff^delta-Ct for each reference and target gene separately: Eff^delta-Ct = (Efficiency+1)^delta-Ct when efficiency is expressed as [2^(-1/slope of std curve) - 1]. (My old version of REST does not subtract 1 from the efficiency like you do, so it doesn't have to add it back in again here. Just be sure to know which way your efficiency is expressed before using this equation).
      3) Calculate the geometric mean of the Eff^delta-Ct of all your reference genes: Geomean for n genes = n-root of (Eff-ref1^delta-Ct-ref1 * Eff-ref2^delta-Ct-ref2...* Eff-ref-n^delta-Ct-ref-n).
      This then goes in the denominator of your expression ratio equation: Expression ratio = (Eff-target^delta-Ct-target)/ Geomean(Eff-ref1^delta-Ct-ref1, Eff-ref2^delta-Ct-ref2...-ref4). This gives the same number as REST-V2's "absolute gene regulation" for each treatment condition.

      Thus the geometric mean is not taken until the very last step. Other methods I tried took the geometric mean of the mean reference Cts and/or the geometric mean of the reference genes' efficiency values, but though these yielded results within 0.01 of each other, they were not identical to the REST-V2 output. I don't know about later versions of REST. I downloaded REST 2009 and found that it was only able to do one treatment condition, whereas my experiment has three treatment conditions. My old REST-V2 can do up to 6 conditions, so I find it more useful, though it can only do 10 genes at a time.

      I haven't figured out how it calculates the standard errors yet. That is giving me more trouble, as I can't seem to wrap my brain around the error equations in your post. I have never worked with Taylor series before, and as I am missing their background logic I find them confusing.

      My question is this: underneath the numbers for absolute gene regulation REST-V2 has their standard error. Is it appropriate to use these as error bars for a graph of absolute gene regulation? Isn't this the same as fold-change, for which you said that S.E is not appropriate because it is not normally distributed? If I can't use these as error bars, what would you recommend plotting as a graph, since graphs now-days all need error bars? REST-V2 does not give delta-delta-Cts as output, nor does it give the upper and lower limits of errors as you show, and since the macro's are hidden I can't make it show its earlier steps. In order to plot delta-delta-Cts then I would have to make my own spreadsheet (as I have done here) and recalculate everything, which almost defeats the purpose of running it through REST, though REST-V2 calculates my amplification efficiency, efficiency S.E., and gives me p-values. Still, it would be strange to find that the output of the program cannot be graphed directly but must be transformed first. What are your thoughts on this?

    3. It's been a while since I looked at the output of REST but in the general case you can calculate error bars (SE or SD) using normally distributed data and then convert back to the non-normal distributions. See my response to Craig Irving below for a little more detail.

  6. Hi there,
    I'm working on a qPCR protocol. I want to do a comparative method but using the concentration of total DNA. How can I normalize the Ct numbers with the DNA concentration?
    Thank you!

    1. Hi,

      If you need to know the absolute quantity of a particular sample in a qPCR experiment then you can work it out using the slope and intercept calculated from the standard curve as follows:

      $$$ log(concentration) = { (CT - intercept) \over slope } $$$

      This will give you a value for the concentration on a log scale, which will generally be most appropriate for any downstream analysis you want to do with it from then on.

      If you just want to know relative differences between samples this is generally unnecessary.

      Tony

  7. Hello,
    Could you comment on how the calculations will work on a time-course experiment ? I have a wild type (control) and knockout cell lines. For the first part, I normalize my samples to a single internal control(dCT). For my second part, I have 2 options - Either to normalize against my wild type (control) cell line or to normalize against Day 0 for each cell line. I am not sure which one is the most useful. Also the calculation of errors is something I am having problems with. Do I simply perform SD calculations on biological+technical replicates. If so, which values should I use ?

    Cheers
    Shredz

    1. My approach to this would be to calculate dCT values for each data point as you have done. At which point this can then be represented visually by plotting the average dCT of the replicates (+- SD of the dCT) for each data point.

      This allows you to show both the changes in expression of your gene over time (as compared to the internal control) as well as the error in your day 0 controls. Crucially, it will also show if the expression of your gene in your WT and KO cell lines starts differently (which might be hidden if you attempt to normalise further).

      Statistics from that point on would be based on the dCT values (i.e. for the simple cases I've described in the spreadsheet we just do a t-test between Condition A dCT and Condition B dCT). For a more complicated experiment the choice of stats test depends on the exact experimental design but a general linear model (based on the dCT values) is probably a good place to start.

      Tony

  8. Thanks for the very helpful article, but I'm a bit confused with this statement:

    "The most frequent mistake I've seen is the use of the wrong type of mean when calculating the average ratio. The key fact to remember is to always use the arithmetic mean on anything on a linear scale and always to use the geometric mean on anything on an exponential scale.

    More concisely, if it's CT values (or differences thereof), then use an arithmetic mean. If it's concentration values (anything that is Efficiency^CT) then use the geometric mean."

    Aren't the Ct values already on an exponential (ie. log2) scale, and by taking E^Ct, one is transforming to a linear scale? If this is the case, then the opposite is true: use the arithmetic mean on log scale and geometric mean on linear scale.

    1. CT values for a single condition are approximately normally distributed and can therefore be plotted on a linear scale (equally spaced increments on the axis). You can verify this experimentally by drawing a Q-Q plot or performing a Shapiro-Wilk test on your own qPCR data. Arithmetic mean is most appropriate on data represented in this fashion.

      Doing E^CT will transform this onto an exponential scale (the upper values are further apart than the lower values so your axis would generally be represented with unequally spaced increments on the axis). Geometric mean is most appropriate on data represented in this fashion.

      I think we might be talking about the same thing but using the opposite terminology. I shall try to find some time in the next few days to come back to this part of my post and hopefully make it a little clearer.

      Tony

  9. Thanks that does make it more clear. I think you're right that the confusion comes from the terminology, and my poor grasp of it: I'm thinking that since the Ct values are actually indicating a doubling of the copy number, they are naturally *in* an exponential scale - which, as you say, allows them to be plotted *on* a linear scale (without log transforming first). Maybe pedantic.

    1. I very much agree with you, I think my explanation of it could certainly be better.

      Tony

  10. Hi! I found your page useful. If you have time to answer: I "inherited" a spreadsheet in which the application of the ddCt method is different.

    1. A delta Ct value is calculated for every biological replicate. (after technical duplicates being averaged).

    2. The average delta Ct of the control group is calculated and this is used to calculate the deltadelta Ct value for each sample (including the members of the control groups) separately. (sample=biological replicate=sample from one animal).

    That way, an individual fold change value is gained for every biological replicate. These fold changes are then averaged in the control and the treated group. Standard error of the mean or other statistics are calculated after that. I found there are slight differences in the results compared to the standard ddCt. Mathematically it may matter at which step the averaging is done but I am not sure if valuable data is lost or not or some kind of bias is caused or not.

    This method looks like a mess at first and it seems to be a hybrid method (delta Ct values are averaged in the control group but not in the treated group). What's your view? Could it work? I have a lot of data already analysed that way.

    1. It sounds a bit suspect. My biggest problems with it are:

      (1) Calculating mean/std errors/dev's from a fold change. These statistics can only be computed on data which approximates a normal distribution. As fold changes aren't normal they will likely be misleading. Geometric mean would work for fold changes, but there aren't really any substitutes for std error or deviation except rescaling them into a distribution that meets those assumptions (i.e. using dCt which is equivalent to log transformation).

      (2) You should be able to do control_dCt = average(dCt_sample1_control, dCt_sample2_control) then ddCt = average(dCt_sample1 - control_dCt, dCt_sample2 - control_dCt) and you should get the same ddCt value as average(dCt_sample1-dCt_sample1_control, dCt_sample2-dCt_sample2_control); if in a somewhat less straightforward way. However, it sounds like there is some kind of extra step where the control_dCt is used to calculate an additional ddCt which I suspect will probably give erroneous results.

      If you are required to show control genes on your barchart I'd recommend doing dCt of targetA - targetB vs controlA - controlB for each sample. You can then either plot that directly or express it as ratios by doing efficiency to the power of dCt.

      I imagine what you have is probably close but not quite exact. Many qPCR spreadsheets end up this way - it's hard to be too far wrong as this will raise questions fairly early on, but it's also quite common to see odd approaches like the one you've seen which are close but not quite right.

    2. Thanks again, it was me right below, too. :) This helped me a lot.

    3. Hi,
      I'm hoping you can help me as I'm a tad confused. My situation is similar to the person above. I've been calculating the delta delta CT and fold changes of each of my replicates separately, and then using the geometric mean to average these. I wanted to calculate the standard error etc. from the fold changes but by the sounds of it that's not possible, is that right?

      So if I want to put standard error error bars on my graphs I would have to stay as delta delta CT values and not convert to fold change?

      Thanks!

    4. Essentially yes, standard error only makes sense with normally distributed data.

      You could calculate the standard error from the ddCt values then convert the corresponding values into the fold change scale (i.e. 2^(ddCt+SE) is your upper bar on your fold change while 2^(ddCt-SE) is your lower bar).

      However, I dislike plotting fold changes at all. You either have to plot it as a ratio of 0->inf in which case any decreasing fold changes are crammed into 0-1 while positive fold changes have a dynamic range of 1->inf which is entirely arbitrary. Or, some people take the range 0-1 and invert it, thus getting a scale from -inf -> inf but with a missing area between -1 and 1 which cannot contain any values.

      Finally, and I think most importantly, fold change is fundamentally something that is log normally distributed, so the appropriate way of presenting it is on a log normal scale - which is what you have with ddCt already.

  11. Thank you for your clear explanation.

  12. Hi Tony.
    Thanks for a good explanation, I do however have a question.
    I just performed a qPCR experiment, but I'm a bit confused about the use of the Delta delta Ct method.

    I have the following results:

    Normal conditions: 16.6 and 17.6 (Target gene)
    Stress conditions: 18.24 and 17.3 (Target gene)

    Normal conditions: 20.55 and 20.89 (Ref gene)
    Stress conditions: 19.9 and 20.26 (Ref gene)

    So initially what I want to do is see if the expression of the stress gene per ref gene is higher in stress conditions than in normal conditions.

    Which, by looking at the numbers, it should be, since the Ct values for the samples are more closely related, right?

    However, when I do the delta delta calculation (I use the average of the duplicates):

    17.1 - 20.7 = -3.6
    17.7 - 20.08 = -2.38
    2^-(-2.38-(-3.6)) = 0.35

    So as far as I understand the delta delta Ct, this means that the expression of the stress gene is 35% lower than in normal conditions, right? Which I don't think I really understand when I just look at the numbers. Is it because it uses total expression? The reason why I don't want this is that my organism may die under the more stressful conditions, so the total expression might be lower.

    1. Hi Jonas,

      That sounds about right although the expression level in stress conditions is only 35% of the level in the normal conditions. We can quality check our qPCR values by visual inspection.

      In your example your control gene has fairly similar Ct values (avg of 20.72 vs 20.08). We still have a Ct difference there of 0.64 (it has gone up by about 1.5 fold). In your target gene we also have fairly similar Ct values (avg 17.1 and 17.77). We have a difference of -0.67 (it has gone down by about 1.6 fold). Our delta delta Ct is therefore -1.31 which results in a ratio of 0.403, which indicates our target gene has gone down by a little under 2.5 fold when compared with the control gene.

      However, this is a fairly minor fold change by qPCR standards and has a significant amount of variation between your duplicates. i.e. Your target gene varies by ~1 Ct, and your control gene by about ~0.3 Ct, between your replicates anyway.

      If you do a straightforward t-test between the dCt's you get a p-value of ~0.26. This would generally be considered insufficient evidence to reject the null hypothesis. In other words: your target gene isn't changing with reference to your control gene.

      You mention that you believe total expression might be changing in your model system. Looking at the Ct values this doesn't seem terribly likely in your model, assuming your control gene is representative of the total expression, as your control gene has a lower Ct value in your stressed condition than your normal condition which would indicate that it has actually gone up in expression.

      That said, choice of control gene should always be made carefully with respect to your model system. One person's control gene is another person's target gene.

      Tony

      Okay, thanks for the good answer! So basically the conclusion is that the data doesn't say anything, or at least didn't prove that the expression of the stress gene went up as I would hope. How much of this could be attributed to not having equal amounts of RNA/cDNA throughout the process? The site I'm working at doesn't really have any efficient ways to measure concentration, so there is a certain insecurity when I do my DNase, cDNA and qPCR processes.

      And is it wrong to divide the two numbers? Because when I do that I get 0.81 for normal conditions and 0.88 for stress, which is what I would expect.

    4. I would argue that your experiment hasn't shown that the target gene has changed between your samples. Be careful though that this is not the same as proving that the target gene hasn't changed - you only have two replicates and a significant amount of variation between them.

      Given the Ct values above it implies you have broadly similar amounts of RNA in each of your samples. This is generally the purpose of normalising to a control gene anyway. If you have different concentrations then your control gene should be different which corrects the concentration problem.

      It would be wrong to divide the Ct values. Ct values are on a log scale so a subtraction is actually already equivalent to dividing the unlogged values.

      Tony

  13. Hi Tony
    Thanks for all the information. How do this analysis work if I have two groups of patients (healthy and diseased, say 10 patients in each group)?

    1. This is the situation I've described in the post. (1 condition in your case healthy vs diseased). Try plugging your data into either REST 2009 or my spreadsheet above.

      Tony

  14. Hi Tony,

    Thanks for all the info about qpcr analysis. I have a question regarding the quantification of gene expression only at one time point, such as only at baseline (time=0). Let me summarize below:

    1- I see papers that report gene expression results as GOI/reference and report higher expression with higher values. In this case, wouldn't higher values mean less expression? For example: GOI1 = 30, GOI2 = 20, REF = 10. Isn't GOI2 expressed more than GOI1?

    2- If using GOI/REF is ok, should I log-transform the results?

    3- Would it be better to use 2^-dCt to quantify baseline gene expression?

    4- Can I use these results in a correlation?

    Thanks very much in advance!

    1. Hi,

      Higher Ct values correspond to lower expression. Delta Ct values are fundamentally doing a GOI/Reference calculation; to see why see the section in this post that starts "To see why Delta Delta CT...". The order you do the subtraction determines if your ratio is GOI/Reference or Reference/GOI (see the section starting "The exact order that you..."). Dividing raw Ct values by each other is not an appropriate approach.

      I don't see why you couldn't use either dCt's or ratios between conditions in correlation coefficients. Subject of course to satisfying yourself that your hypothesis meets the conditions of the coefficient (i.e. Pearson measures linear correlation whilst Spearman measures monotonic correlation) and that you have a decent number of samples on which you have measured both of your variables.

    2. Thanks for your quick reply. Maybe I can illustrate my question through an example. I have two conditions (time=0, time=1), 1 GOI, 2 reference genes with a geo average of REF.
      In time=0: GOI = 20, REF = 15
      In time=1: GOI = 19, REF = 15

      As you see, I don't have much fold-change between time points. Therefore, I would like to see if gene expression of subjects at only time=0 correlates with any properties of subjects (such as reaction time).

      In that case, which value would I use from each subject to investigate a pearson correlation with reaction time?

      GOI/REF = 20/15 = 1.33
      OR
      dCt = 20-15 = 5. 2^-dCt = 2^-5 = 0.03
      OR
      something else?

      And, would a GOI/REF of 1.3 mean more expression than a GOI/REF of 2?


      any ideas would be appreciated!

    3. If you want to consider the GOI normalised to a reference then use dCt. Again, don't divide Ct values.

      However, as a warning, I'd be very careful with the approach you are considering as it looks like it might be starting to turn into a fishing expedition. If you don't have a solid hypothesis about what you are looking for, but instead compare it against everything you can, then I'd expect you to find a high number of false positives.

    4. So, I will use dCts (such as 5 in example above) for correlations and there is no need for any transformation, right?

      And just to confirm, higher dCts mean lower expression?

    5. I would think dCt's would be most appropriate yes. The meaning of the dCt depends on the order you do the subtraction in (See the section "The exact order...").

      If you do GOI - Ref then higher dCt is lower expression. If you do Ref - GOI then higher dCt is higher expression. The only difference between the two is that the sign is flipped (I show eight examples of this in this blog post above).

  15. This has been very helpful and clear.

    For the ChIP % input, using the example in the file, if you get a % input of 0.02506 is that 0.02% of the input or 2.5%? I have seen other formulas that make the conversion and I am unclear which is more accurate.

    Also, I don't know which error to use if I want to display my ChIP data as % input. Is that what the % input upper and lower are? Could you elaborate or send me in the right direction?

    Thank you so much for shedding light on qPCR and other topics.

    1. In that spreadsheet the % input should range between 0 and 1. i.e. 0.1 would be 10% of input. The % upper and % lower give the lower and upper bounds for the percent input as calculated by the SEM of the dCT values.

  16. Thanks for the helpful tutorial Tony.

    I just wondered why the initial equation must be swapped around from treated - untreated in the first line (ΔΔCT = ΔCT(treated sample) − ΔCT(untreated sample)), to untreated - treated in the last line (ΔΔCT = (CT(target,untreated) − CT(ref,untreated)) − (CT(target,treated) − CT(ref,treated)))?

    1. The section that starts "The exact order that you..." shows 8 possible options for doing the ddCt approach. As long as you keep the order of subtractions the same in the two dCt sections then only the sign can vary.

  17. Hello Tony, I'm glad I found this informal site for qPCR.

    I am recently learning the method and have some questions.

    I have a reference gene with an efficiency of 2 and a target gene with an efficiency of 1.79. I heard this can be used with a specific formula, which I have also found here on this site.

    But concerning the analysis of the graphs with the CT-values. Should I use fit point (set the threshold manually) or the second derivative (set the threshold automatically)?

    Thank you in advance!

      I believe this question regards the method of choosing a crossing point (Cp) threshold to determine your Ct values in the first place. In a sense this precedes the details I've talked about above.

      Choice of crossing point is clearly important as the reaction is not exponential indefinitely. Choosing a crossing point too late will result in utilisation of all the reagents (and the reaction will taper out) and picking one too soon will result in more background signal (fewer PCR cycles to amplify your target).

      I would support the use of the second derivative method, largely as it results in fewer subjective decisions by the user - improving reproducibility. See Van Luu-The, Nathalie Paquet, Ezequiel Calvo, Jean Cumps, Improved Real-Time RT-PCR Method for High-Throughput Measurements Using Second Derivative Calculation and Double Correction, BioTechniques, 2005 for an in-depth discussion.

    2. Hi,

      Thank you for the answer. Yes I will use the second derivative method.

      Regards
      Asal

  18. Hi Tony,

    Thanks a lot for this post, it is very useful. One question about your suggestion of normalising to the geometric mean of multiple housekeeping genes. Presumably if you were to do this averaging of housekeeping genes at the Ct level, you would be taking the arithmetic mean of the Cts and then use this as the ref Ct for the delta-delta Ct calculation? This would give the same result as calculating the fold changes for each housekeeping gene individually and taking the geometric mean of the fold change results. I am basing this assumption on your advice of using arithmetic means for Ct and geometric means for fold changes. I took a look at the reference you gave but it wasn't clear given that they don't really discuss the delta-delta Ct method.

    Thanks,

    MFP

    1. Hi, This is similar to my reply with an anonymous poster on 9 November 2013.

      Essentially the difference is that when you are calculating an average value for an individual gene then the assertion is that all the Ct values from one gene will be normally distributed. Therefore, when calculating the average of Ct values for a single gene then you would use arithmetic mean.

      However, if you have Ct values from multiple genes then these will not follow a normal distribution (a small number of genes contribute proportionally a large fraction of the RNA in a cell). Therefore when you are averaging Ct values from a number of housekeeping genes the distribution will follow a geometric distribution rather than a normal distribution so the geometric mean should be used for averaging multiple genes.

      Tony

  19. Hello,

    This is an interesting post, but I'm just a beginner and I am having problems with the technical replicates. After identifying the bad replicates (difference larger than 0.5) with the package Easyqpcr in R, I proceeded to eliminate them. However, this leaves the data unbalanced (different number of Cq between genes), and it seems that REST 2009 doesn't like that. Do you have any advice that could help me? (I decided to delete both replicates when they were too different since I didn't have a good enough reason to choose one of them)

    Thank you very much, I would appreciate any help,
    Sabela.

    1. My response to this is typically to reconsider removing replicates based on observations of the experimental values.

      Personally, I would argue in this case that either:

      (i) your replicates are demonstrating an actual biological variability and as such designating outliers as "bad" is misleading
      (ii) you had technical problems during your experiments and as such I would be inclined to redo the experiment

      You shouldn't just have bad replicates for no apparent reason. So don't remove replicates because they are outliers without having an underlying root cause that explains it.

      Tony

  20. Hi, I have a couple of questions about your spreadsheet if you have time.

    First, in the 'ddCt RNA - Error Prop 2 Refs' sheet, when calculating 'Ref gene std err', the formula reads:
    =(1/(2*C22))*SQRT( (B19*B21)^2 + (C19*C21^2) )

    Should it read like this instead?
    =(1/(2*C22))*SQRT( (B19*B21)^2 + (C19*C21)^2 ) (moved the final ^2)

    Second, I'm very new to this type of analysis. Could you explain the role of the first term in that equation for me? (1/(2*C22))

    Thank you very much for your help. Your post and spreadsheet have helped me see the logic in all of this.
    Thanks,
    Nick.

    1. Hi, well spotted.

      I made an error while inputting that function. I made the same error for the second condition as well. I've updated the spreadsheet to be correct. I should caution that I don't guarantee the correctness of the spreadsheet - I originally intended it as a demonstration of principles.

      That cell is the propagation of uncertainty function for the geometric mean of two values - rather than the simpler function for the addition or subtraction of two values. I made a reply above to an Anonymous comment on 8 November 2013 which includes the function as well as the source.

      Tony

  21. Dear Tony,
    I came here by accident! What an interesting blog you have, congratulations. Everything very detailed, nevertheless I haven't found the answer to my question, let's see if you can help me. In my case I'm studying organ profile, i.e. relative expression level for 5 genes in 6 different organs. I'm using one gene as a housekeeping gene and biological triplicates. At the beginning, in my ignorance, my idea was to use the Standard Curve method because I have the Efficiencies for all genes (just to inform, the Ef are in the range of 95-101% for all primers tested)... But now I just realized that in my particular case I don't have TREATED and/or UNTREATED templates to perform a delta Ct. You see I only have templates collected from different tissues/organs (a Ct!). How do you suggest estimating the ratio between my test gene and my housekeeping gene in this particular case (considering that they have different Ef)? Should I use the same equation as Pfaffl considering that the treated sample is 0 (zero) for both test and reference genes? Thanking in advance, hope you can help me here. Best regards. Marco.

    1. If you just want an overview I would probably just present the data as the ratio of target gene to reference gene within a given sample (+ error bars). For the ddCT method this is equivalent to just presenting the dCt value for target gene - ref gene, for standard curve this is:

      $$$$Ratio = {Efficiency(Target)^{CT(target)} \over Efficiency(Ref)^{CT(ref)}}$$$$

      An alternative is to do an all-pairs comparison (adjusting p-values as appropriate using something like Tukey's HSD).

  22. Hi Tony,
    Thanks for the helpful post! One (maybe naive) question: in your Taylor Series you use the error of your primer efficiencies, but how do you calculate those errors? In your spreadsheet they are listed as 0.015. I have 3 technical replicates for my standard curve, so do I calculate primer efficiency with each replicate and then find the standard error of those calculated efficiencies?

    Thanks!
    Anne

    1. Hi

      You would have to repeat your dilution series to calculate the error of the standard curve. Ideally you'd do one for each of the samples you are measuring.

      You can then calculate the std error of the efficiencies as you suggest - you might want to do a QQ plot or Shapiro-Wilk test to check they are normally distributed before doing so though.

      Practically, I've never met anyone who actually does this so I'm not entirely certain how normally distributed the efficiencies will be. I would imagine that the standard curve for a gene will be fairly similar across replicates, but of course this is an unproven assumption in most cases.

      Subsequently, adding a small non-zero error that is basically a guess is about the best you can do with most people's data.

      Well done for doing it the right way though. :)

      Tony

  23. Hi Tony!

    I'm a Master's student trying to finish up my thesis and my qPCR data are driving me bananas. I analyzed 7 genes with 1 housekeeping gene (only one because b-actin was the only gene tested in another experiment that was stable for ozone condition in mouse lung tissues).

    The premise of my experiment was to test the efficacy of 2 antioxidant diets (20% or 100%) in male and female after exposing them to either air or ozone.

    For example:

    MAC = Male Air Control
    MOC = Male Ozone Control
    MA20 = Male Air 20% diet
    MO20 = Male Ozone 20% diet
    MA100 = Male Air 100% diet
    MO100 = Male Ozone 100% diet

    I analyzed all of my data already in Excel according to this, page 15:

    http://www3.appliedbiosystems.com/cms/groups/mcb_support/documents/generaldocuments/cms_040980.pdf

    I used MAC (or FAC for females) as my untreated control for everything. I don't know if this is correct since the instructions do not incorporate the geometric mean. Also, I need to find the p-value so that I can compare among my diet groups. How do I get a p-value on the ONE delta delta CT value if it was calculated from an average of a bunch of CT values? (I took the average of all my biological replicates early on so the rest of my calculations are just working with ONE value.)

    I have my Excel files here and if I could send one to you as an example then you'll understand what I mean and point out any mistakes I am making.

    Thank you for this informative blog post btw!

    1. Hi,

      If you only have a single delta delta CT value then you can't compute a p-value from this. The ddCT value doesn't include any indication of the variation within your experiment so it is impossible to tell if the ddCT value you got is unusual or not. It sounds like you have the original replicate information which you can return to for this; my typical approach is to compute dCT values for each sample/gene and then to compare these between conditions.

      If you are doing multiple genes with multiple conditions you will need to be careful to control your p-values for multiple hypothesis testing. If you use a stats package such as SPSS you can build a general linear model/ANOVA which captures the details of your entire experiment and which will have the option to implement post hoc correction correctly for you (i.e. Tukey's HSD, Bonferroni). It will usually provide the option for you to select a specific control (i.e. MAC) or to do all-pairs comparisons.

      Tony

  24. Hi! Great information on this site. I'm new to qPCR and was a bit confused regarding the normalization, as different sources have advised me differently.

    I did an RNA-IP, IPs were done with three different antibodies: antibody A, antibody B, and IgG.

    For my RNA-IP experiment, I had whole cell lysate (total input) which I ran over a size exclusion column, a fraction (pool) of which contained my RNAs of interest. The IPs were setup with that pool.

    I'm interested in determining whether a specific RNA is enriched in the IP over the pool or the total lysate input.

    For the RT-qPCR, do I need to do any scaling prior to running the qPCR? For instance, each IP contains about 5% of the total input, or about 30% of the pool. Do I need to adjust the amount of cDNA I put into the qPCR reaction or is all scaling done in the post analysis (the %input formula you wrote about). Do I load the same amount (ng) of cDNA into each qPCR reaction?

    Thank you for your help!

    1. If you are doing an immunoprecipitation to be measured by PCR then you are generally intending on measuring the quantity of DNA (or RNA in your case) that is pulled down.

      If you do scaling before you do the PCR then you've just changed the amount of DNA/RNA that's being measured. So, if you want to be able to compare two DNA/RNA samples against each other you must do any scaling equally between the samples (i.e. each sample is 5% of what was pulled down).

      The exception to this is you have a strong belief that your differential scaling is being done to normalise for a pre-existing known difference between samples. e.g. Senescent cells have less total histone than proliferating cells. If you are examining the relative abundance of some histone modification then you might normalise your sample by total histone content before performing your PCR. If you do this sort of thing you will likely be asked by reviewers to provide the same PCR without this normalisation as well as an additional figure.

      Another exception is you are intending on performing a sequencing experiment to determine distribution of your DNA/RNA in a genomic context. In which case you may simply have to load equal quantities of DNA/RNA simply due to technical requirements. In this case you could attempt to spike in known quantities of an artificial DNA sequence, before extracting equal amounts for sequencing, in order to recover an estimate of the original absolute abundance. This is not necessarily as easy as it sounds.

  25. Best blog ever on this aspect. Simple and clear. You have answers for every doubt of mine. Thanks, and good luck!

  26. Hi Tony, great information you have here. Your post has helped me a lot. Nevertheless, I still have some doubts about the normalization of qPCR.

    Please bear with me, as the following questions might be naive; I am still new to qPCR. I have read a lot of articles and forums on how to analyze qPCR results, but I just couldn't figure out the overall flow/steps/fundamentals of normalizing them. Perhaps you could help me on this by telling me the flow of analyzing the qPCR data?

    1.) One of my experimental objectives is to study the expression of 8 ncRNA genes to check whether they are up- or down-regulated under different conditions. I have 2 conditions in my experiment: glucose as my experimental control and polyethylene (PE) powder as my treated condition. For your information, I am quantifying using absolute quantification, and 3 housekeeping genes (HKGs) were tested together in this experiment. I have decided to use the ddCt method and the geometric mean to normalize my data. You talked about the ratio, which is below:

    $$$$Ratio = 2^{CT(target,untreated) - CT(target,treated)} / 2^{CT(ref,untreated) - CT(ref,treated)}$$$$
    For my case, I should do it like this:
    $$$$Ratio = 2^{CT(target,glucose) - CT(target,PE)} / 2^{CT(ref,glucose) - CT(ref,PE)}$$$$
    Am I right?


    2.) Apparently, I found another similar equation which is below:

    $$$$Ratio = 2^{CT(target,untreated) - CT(target,treated)} / \sqrt[3]{H(X)_{g1} \cdot H(X)_{g2} \cdot H(X)_{g3}}$$$$
    where the denominator is the geometric mean of the multiple HKGs.
    Is it correct if I normalize my results using only the second equation and ignore the first one, since I have 3 HKGs? Is the first equation meant for single-gene normalization?


    3.) Let's say I have 12 samples, 6 each for glucose and PE. After substituting my Ct values into the ratio equation, I get 12 respective ratios. Should I proceed to SPSS for an independent t-test analysis, or should I use another program called NormFinder? I got to know this program via published papers, but I do not really understand the principles and functions behind it. I hope you can help explain this.


    4.) Is it necessary to normalize our data before proceeding to the t-test analysis? Can I do a t-test directly without doing the ddCT method first? That is, given the Ct values of the target gene and HKGs obtained from the qPCR experiment, can I perform the t-test analysis directly on the Ct values without doing the ddCT method?

    5.) A lot of papers mention the p-value. Could you please tell me what a p-value is? How can I find it?

    6.) Is normalization meant to validate our HKGs?

    7.) How do I know whether a gene is up- or down-regulated based on the ratio?

    8.) Based on my understanding, the flow would be:
    Ct values obtained > substitute into the ddCT/geometric mean equation > t-test analysis > ANOVA > analyze whether your results are significantly different?
    Please tell me if I missed anything.


    I would very much appreciate it if you could help with this. I know that I am very weak in the fundamentals, so sorry if I have caused any inconvenience.
    Thank you.

    Cheers,
    Cheryl =)

    Replies
    1. I want to clarify first that I don't particularly recommend using my spreadsheet to do production qPCR - it's not really intended for that. It was written to explain *why* qPCR is done the way it is and to provide examples of how the maths is applied in practice.

      I also argue that you probably shouldn't be using spreadsheets at all to do this.

      I strongly recommend that unless you have a very unusual experimental setup you default to using the software I listed in my post (e.g. REST 2009). These packages have been used and tested much more extensively than my (or pretty much any) little spreadsheet has.

      Onto your specific questions though;

      1. Yes this is correct.

      2. If you are using multiple housekeeping genes I would generally recommend taking the geometric mean of the housekeeping genes as the very first step, then using the same equations as in (1) as normal (see the sketch after this list).

      3. You should use dCt values as the basis for the t-test, not the ddCt values. I've not used NormFinder but it claims to find the best housekeeping gene from your set. This is basically an alternative to taking the geometric mean of all of the housekeeping genes.

      4. You would typically perform the t-test on the dCt values.

      5. A p-value is the probability of obtaining a result at least as extreme as the one observed, assuming that the null hypothesis is true. I probably can't give a better explanation than the one on Wikipedia: http://en.wikipedia.org/wiki/P-value

      6. That housekeeping genes don't change is an assumption. You can validate it in a few different ways: (a) empirically confirming that they don't change in your model system across multiple replicates (raw Ct value always the same), (b) using something like NormFinder to pick the best one, (c) measuring lots of them and assuming the geometric mean will be more stable than any single one.

      7. If you have a ratio > 1 then it's an increase (numerator was greater than the denominator), if ratio < 1 then it's a decrease (numerator less than the denominator).

      8. The t-test and ANOVA are basically the same thing (or, more accurately, the t-test is a specific instance of ANOVA). I suggest the actual flow you want is: Ct values -> software that does the qPCR analysis -> interpretation of the results in the context of the experiment.
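
      Here's the sketch promised in point 2 (Python; the Ct values are invented). It also shows why the geometric mean at the linear level is the same thing as the arithmetic mean at the Ct level:

        import numpy as np

        # Hypothetical Ct values for three housekeeping genes in one sample
        hk_cts = np.array([16.2, 18.9, 14.5])

        # Composite reference Ct: the arithmetic mean at the Ct level...
        composite_ct = hk_cts.mean()

        # ...is exactly the geometric mean at the linear (2^-Ct) level
        geo_mean = np.prod(2.0 ** -hk_cts) ** (1.0 / len(hk_cts))
        assert np.isclose(2.0 ** -composite_ct, geo_mean)

        # composite_ct can now stand in for CT(ref) in the usual ddCT equations
        print(composite_ct)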

  27. Thank you very much for this article. Even in the article by Livak and Schmittgen [1], which I used as a reference, incorrect formulas based on arithmetic means and standard deviations are used instead of the correct geometric statistics. No wonder there is so much confusion around, given the lack of clear and correct articles!

    I want to plot concentration values with standard deviation (not standard error) bars, because I need a measure of the spread (that is something about which many people seem to be confused, too, thinking SEM would be a suitable measure of the spread [2]).

    Thus, I calculated geometrical means and geometrical standard deviations from the final concentration values 2^(–ΔΔCₜ). However, one thing is still not clear to me:

    If I am right, the spread of the final concentration values is made up of two components: (a) the “biological” variability already present in the population, (b) the “technical error” variability due to technical/methodical inaccuracies. If we assume that the real concentrations in the original population are normally distributed, the “biological” spread should follow a normal distribution at the level of the final concentration values, while the “technical error” spread should follow a normal distribution at the level of the Cₜ values and should thus follow a log-normal distribution at the level of the final concentration values.

    Accordingly, if the “biological” spread is (much) bigger than the “technical error” spread, it would be more appropriate to calculate arithmetic standard deviations for the final concentration values, wouldn’t it? If both “biological” spread and “technical error” spread contribute substantially to the data dispersion, I really do not know how to calculate proper standard deviations ...

    [1] http://www.ncbi.nlm.nih.gov/pubmed/11846609
    [2] http://www.sportsci.org/resource/stats/meansd.html

    Replies
    1. It is more likely that the biological and technical errors are both normally distributed at the Ct level. Even if you had a biological variation that was somehow normally distributed at the level of concentration, then, due to the exponential amplification of changes at the Ct level, it would be entirely swamped by even a tiny element of technical error.

      We would generally assume, barring systemic error, that all the variance would be normally distributed at the CT value level and would be a combination of both biological and technical variation.

      Subsequently you'd calculate a single standard deviation at the CT level which would contain both technical and biological variation.
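
      As a minimal sketch of how that Ct-level standard deviation translates into plottable (geometric) error bars (Python; the replicate dCt values are invented):

        import numpy as np

        # Hypothetical replicate dCT values for one condition
        dcts = np.array([3.1, 3.4, 2.9, 3.3])

        mean_dct = dcts.mean()
        sd_dct = dcts.std(ddof=1)  # sample standard deviation at the Ct level

        centre = 2.0 ** -mean_dct  # geometric mean at the concentration level
        gsd = 2.0 ** sd_dct        # multiplicative (geometric) standard deviation

        # Error bars are asymmetric in linear space: centre */ gsd
        lower, upper = centre / gsd, centre * gsd
        print(centre, lower, upper)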

  28. Hi, thanks for a very helpful post. I just have some questions I'm trying to get my head around.

    First, I have a question regarding the delta CT SEM value. In your spreadsheet you calculate the SEM for condition A and condition B and then pool these into a common delta CT SEM.
    In another guide, from Applied Biosystems, they seem to use the delta CT SEM from just the treated condition when calculating the delta delta CT SEM, carrying it through to the fold change variation.
    http://www3.appliedbiosystems.com/cms/groups/mcb_support/documents/generaldocuments/cms_042380.pdf
    (p.58). Would you say that your calculation is the correct one?

    My second question is about the t-test. For the t-test you use the built-in formula in Excel. This calculates the SD and mean from the given dCT values in the grey box. Are these the correct SD values to use in the t-test, or should you use delta CT SD values calculated the same way as the "Sub errs" (but with SD instead of std err, of course)?

    I hope you can understand what I am asking!

    Kind regards,
    Alexander Hedbrant

    Replies
    1. In my guide I combine all the sources of error either additively or via a Taylor series expansion (see the text on this page for descriptions). I believe this is the most prudent thing to do, although you will certainly see lots of examples of people who do it differently.

      T-tests don't take a standard error or standard deviation as inputs; they work on independent, normally distributed sample data and compute those statistics internally. dCTs are our normally distributed data source, with one data point per sample, so they are ideal for use in the t-test.

  29. I have a comparative question for which there may be help in the community. I am trying to compare the expression of multiple genes in brain tissue across two different mammalian species. Because any particular gene will have nucleotide differences between species, I don't want to assume equal amplification efficiencies for any gene. I have collected CTs and efficiencies for multiple genes in both species, including those for one or two genes that appear to serve as viable housekeeping genes for normalization. The trick is how to correctly present the data and draw biological conclusions. I am not aware of any software package or published protocol for relative quantification of gene X in samples A and B when the efficiency may differ between samples.

    Any advice is appreciated.

    C.A. Baker

    Replies
    1. The answer to this depends. If you can assume equal amplification within each condition then the techniques I've described here are directly applicable. My spreadsheet already supports this use case, where the amplification efficiency differs between control and treatment, and REST 2009 and similar software packages support it too.

      If you cannot assume equal amplification within your sample groups, this implies there may be some confounding factor affecting your experiment, which you would be best off trying to identify first.

      That said, if the confounding factor can't be eliminated easily (e.g. human tissue samples), there's no particular reason you couldn't extend the approach described in this article to tackle this use case. Specifically, you would calculate an efficiency separately for every sample, use this to calculate the concentration in each sample separately, calculate the ratio between control and treatment, and then log-normalise these.

      I think this is probably overkill for virtually any experiment, so the task of actually doing it is left as an exercise for the reader (though see the sketch below for a starting point).

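      For anyone who wants that starting point, a hedged sketch of an efficiency-corrected ratio in the spirit of the Pfaffl method, allowing each gene in each sample group its own measured efficiency (Python; every Ct and efficiency below is invented):

        # Hypothetical Cts and per-group amplification efficiencies
        # (efficiency is the fold amplification per cycle: 2.0 = perfect doubling)
        ct_target_a, eff_target_a = 21.2, 1.95  # group A, gene of interest
        ct_target_b, eff_target_b = 19.8, 1.90  # group B, gene of interest
        ct_ref_a, eff_ref_a = 16.1, 1.98        # group A, reference gene
        ct_ref_b, eff_ref_b = 15.9, 1.92        # group B, reference gene

        def quantity(ct, eff):
            # Relative starting quantity implied by a Ct at a given efficiency
            return eff ** -ct

        # Efficiency-corrected ratio of target (A vs B), normalised by the reference
        ratio = (
            (quantity(ct_target_a, eff_target_a) / quantity(ct_target_b, eff_target_b))
            / (quantity(ct_ref_a, eff_ref_a) / quantity(ct_ref_b, eff_ref_b))
        )
        print(ratio)
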
  30. Hi,
    Can someone help me with a problem?
    If I am using real-time PCR just to check whether one or more genes are present or not, do the Ct values for the present genes need to be the same, or almost the same?

    Thank you in advance
    R.Carvalho

    Replies
    1. You'll need to define what you mean by a gene being "present". All, or most, genes are probably transcribed at some level even when transcription of the gene is suppressed via transcription factors etc. If you run a PCR for long enough you'll detect this transcription, and if you run it for really long enough you'll probably get a positive signal for any gene even if you just have water in your sample.

      If your question is "gene is present at a particular CT value" or "gene is present at the same level as another gene" then this is just a measurement of the CT value against whatever criteria you've defined.

  31. hi Tony,
    this post is very helpful.

    two questions about ChIP qPCR data and % input

    I was wondering what exactly you mean by "subset of the Input". For my ChIP protocol, I perform immunoprecipitation on a 1 ml aliquot of cell lysate. I also take a 40 ul aliquot of cell lysate as my "total input" - this second aliquot is not IPed but is otherwise treated identically. I do not otherwise dilute the total input sample. So in this case, my input dilution factor would be 25 (1000 ul/40 ul), correct?

    Second question: what exactly does % input mean? Is it accurate to say that % input is the percentage of your target amplicon that is bound by the IPed protein in your cells?

    best,
    L

    Replies
    1. The subset of the input would be a fixed percentage of the lysate that you PCR (or sequence). In your case your DF is 25 as you say.

      % input is a rough approximation of the idea that your protein binds to some proportion of the DNA in your sample. If you immunoprecipitate then PCR, and get the same Ct values as the 25 DF input that you PCR at the same time, then the argument goes that your protein must have been on 1/25th of the DNA in the sample. Obviously this is a bit of an oversimplification, but it's close enough to reality to be a common way of thinking about ChIP-PCR experiments. The sketch below shows the arithmetic.

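      As a minimal sketch of that arithmetic (Python; the Ct values are invented):

        import math

        def percent_input(ct_ip, ct_input, dilution_factor):
            # The input aliquot is only 1/DF of the material, so the undiluted
            # input would have come up log2(DF) cycles earlier
            adjusted_input_ct = ct_input - math.log2(dilution_factor)
            # Assumes perfect doubling per cycle, as in the ddCT method
            return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

        # e.g. an IP at Ct 26 against a 1/25 input at Ct 24 -> about 1% of input
        print(percent_input(ct_ip=26.0, ct_input=24.0, dilution_factor=25))
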
  32. Hi Tony, I usually go to ResearchGate to get answers, but found this blog interesting. I read that it is best to normalize gene expression to more than one reference gene. We are comparing target gene expression in treated and untreated animals. We would like to use one in-house reference gene and another reference gene (the same target gene in another organ). So let's say we are looking at expression of the target gene in the spleen and comparing it to intestine, pancreas, etc.; in this case we use b-actin as the in-house reference and lungs as the organ reference for all the other organs mentioned above. How do you calculate DDCT between treated and untreated control animals with the 2 references above? My reasoning is as follows: 1) in the "untreated", normalize to the b-actin reference (thus calculating DCT1), then normalize the same "untreated" to the lungs (thus calculating DCT2), and call the difference DDCTa. 2) Do the same for the "treated" and call it DDCTb. 3) Calculate the difference between DDCTa and DDCTb. Does this make any sense? Thank you

    Replies
    1. Realistically you are now in the world of mixed-effects general linear modelling to attribute effects and measure significance, which is beyond the scope of the techniques in this article. Given the diverse nature of the samples (multiple tissues, multiple animals and multiple treatments) I'd also say that simple PCR might not be the best approach - you might want to consider microarrays or RNA-Seq in order to measure the effect of your treatment in a genome-wide context.
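
      For the curious, a skeleton of what such a mixed-effects model might look like in Python's statsmodels (the column names and all values are hypothetical; this is a sketch, not a recommended analysis):

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical long-format data: one dCT per animal/organ combination
        data = pd.DataFrame({
            "dct": [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 3.9, 4.1, 4.4, 4.0, 4.2, 4.3],
            "treatment": ["ctrl"] * 6 + ["treated"] * 6,
            "organ": ["spleen", "intestine", "pancreas"] * 4,
            "animal": ["a1"] * 3 + ["a2"] * 3 + ["a3"] * 3 + ["a4"] * 3,
        })

        # Fixed effects for treatment and organ, random intercept per animal
        model = smf.mixedlm("dct ~ treatment + organ", data, groups=data["animal"])
        print(model.fit().summary())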

  33. Hello,

    Well written article! I have a question regarding my qPCR experiment and the application of the delta delta CT method to my data. I have the Ct values, efficiency and slope for 3 reference genes and 3 target genes. However, the cDNA is added to the SYBR Green mix as a 500x dilution (I started with 200 ng cDNA in 20 ul) for the reference genes, while the cDNA for my target genes is diluted only 20x. Is it possible to apply the method when I use different dilutions? How can I calculate back to be able to compare the data?

    Thank you!

    Replies
    1. As long as each measured gene is treated identically on both sides of the experiment (control and treatment samples) then the approaches described here will continue to work. For example, if your control gene (let's say ACTB) is diluted 25x more than your target gene, then the control gene's CT is shifted up by a constant log2(25) ≈ 4.64 cycles in every sample; the delta CT of each sample is shifted by that same constant, and because we then compare delta delta CTs between samples the offset cancels out, as the worked example below shows.
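
      A worked version of that cancellation, writing the extra dilution of the reference (500x / 20x = 25x in your setup) as a constant shift of $$$\log_2 25 \approx 4.64$$$ cycles on every reference CT:

      $$$$ΔCT'(sample) = CT(target) - (CT(ref) + \log_2 25) = ΔCT(sample) - \log_2 25$$$$

      $$$$ΔΔCT' = ΔCT'(untreated) - ΔCT'(treated) = ΔCT(untreated) - ΔCT(treated) = ΔΔCT$$$$

      The constant offset appears in both delta CTs and vanishes in the subtraction, so the fold change is unaffected.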

  34. Hi!
    Often in our experiments we run into a problem where the mean fold change value for the control is greater than 1. For example, recently in a qPCR run with four control samples and four treatments, the reactions were run in triplicate. The triplicates gave highly similar Ct values, with the largest observed difference being 0.84 and the smallest 0.09. If our calculations have been checked multiple times and the control being used is reliable, what could be a potential reason why the mean fold change is greater than 1 (1.13 for the experiment described)? Is there anything we can do to avoid this in the future?

    Thanks!

    Replies
    1. It is not uncommon for housekeeping genes to be affected by an experimental treatment. The general solution is to use multiple control genes, but even this is not guaranteed to work, as a given treatment might do something global like increase the RNA production of the entire cell.

  35. Note: I no longer work in the bioinformatics field and I'm generally unable to respond to comments on this post in reasonable timescales.

    I'm also cognisant that my specific knowledge of the field is slowly getting fuzzier through lack of use, as well as more out of date.

    Because of this I'm going to disable future comments on this article and recommend that anyone with questions contact a local bioinformatician or computational biologist within their institution. If your institution does not have anyone matching this description then you should probably start advocating for such a position to be created - as you've probably found out, it's getting harder to do modern biology without this type of support.
