Comments on McBryan's Musings: qPCR normalisation

Tony McBryan (2015-07-18 17:07):
Note: I no longer work in the bioinformatics field and am generally unable to respond to comments on this post in reasonable timescales.

I'm also conscious that my specific knowledge of the field is slowly getting fuzzier through lack of use, as well as more out of date.

For these reasons I'm going to disable future comments on this article and recommend that anyone with questions contact a local bioinformatician or computational biologist within their institution. If your institution has no one matching that description, you should probably start advocating for such a position to be created: as you've probably found, it's getting harder to do modern biology without this kind of support.

Tony McBryan (2015-07-18 16:57):
It's been a while since I looked at the output of REST, but in the general case you can calculate error bars (SE or SD) on the normally distributed data and then convert back to the non-normal scale. See my response to Craig Irving below for a little more detail.

Tony McBryan (2015-07-18 16:49):
Realistically you are now in the world of mixed-effects general linear modelling to attribute effects and measure significance, which is beyond the scope of the techniques in this article. Also, given the diverse nature of the samples (multiple tissues, multiple animals and multiple treatments), I'd say that simple PCR might not be the best approach: you might want to consider microarrays or RNA-Seq so that you can measure the effect of your treatment in a genome-wide context.

Tony McBryan (2015-07-18 16:37):
As long as each measured gene is treated identically on both sides of the experiment (control and treatment samples), the approaches described in this article will continue to work. For example, if your control gene (let's say ACTB) is diluted twice as much as your target gene, the reference Ct, and hence the delta Ct, in every measured sample is shifted by the same constant amount (one extra cycle per two-fold dilution); because we then compare delta delta Ct between samples, this constant offset cancels out.

Tony McBryan (2015-07-18 16:32):
It is not uncommon for housekeeping genes to be affected by an experimental treatment. The general solution is to use multiple control genes, but even this is not guaranteed, as a given treatment might do something global such as increase RNA production across the entire cell.
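The multiple-control-gene approach described above reduces, for 100%-efficient primers, to averaging the reference Ct values before taking delta Ct: an arithmetic mean on the Ct (log2) scale is a geometric mean on the concentration scale. A minimal sketch with invented Ct values (the function name and numbers are illustrative only):

```python
def delta_ct(ct_target, ct_refs):
    """Delta-Ct against the geometric mean of several reference genes.

    Averaging Ct values arithmetically is equivalent to taking the
    geometric mean of the underlying concentrations, because Ct is
    already a log2-scale quantity.
    """
    return ct_target - sum(ct_refs) / len(ct_refs)

# Invented Ct values; the reference genes might be e.g. ACTB, GAPDH, B2M.
dct_control = delta_ct(24.0, [18.0, 19.0, 17.5])
dct_treated = delta_ct(22.0, [18.1, 19.2, 17.4])

ddct = dct_treated - dct_control   # negative -> target is up-regulated
fold_change = 2.0 ** -ddct
```

Note that a constant dilution offset on the reference genes would shift both dct_control and dct_treated equally and so cancel in ddct, as described in the comment above.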
Preet (2015-06-08):
Hi! Often in our experiments we run into a problem where the mean fold-change value for the control is greater than 1. For example, in a recent qPCR run with four control samples and four treatments, the reaction was run in triplicate. The triplicates gave highly similar Ct values, with the largest observed difference being 0.84 and the smallest 0.09. If our calculations have been checked multiple times and the control being used is reliable, what could be a potential reason why the mean fold change is greater than 1 (1.13 for the experiment described above)? Is there anything we can do to avoid this in the future?

Thanks!
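This question is not answered directly in the thread, but one well-known contributor, consistent with the log-normal point made in the comments below, is that fold changes are log-normally distributed: the arithmetic mean of per-sample fold changes sits above 1 even when the mean ddCt of the control group is exactly 0. A small sketch with invented numbers:

```python
import math
import random

random.seed(1)

# Invented control-group ddCt values, centred so their mean is exactly 0;
# on the ddCt scale the control group is, by construction, unchanged.
ddct = [random.gauss(0.0, 0.5) for _ in range(4)]
mean = sum(ddct) / len(ddct)
ddct = [x - mean for x in ddct]

fold = [2.0 ** -x for x in ddct]

# Arithmetic mean of fold changes: strictly > 1 (Jensen's inequality),
# even though nothing changed on the log scale.
arithmetic_mean = sum(fold) / len(fold)

# Geometric mean of fold changes: exactly 1, matching the ddCt mean of 0.
geometric_mean = math.exp(sum(map(math.log, fold)) / len(fold))
```

This is one reason the geometric mean (equivalently, averaging on the ddCt scale) is the appropriate summary for fold changes.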
Lydia Stravers (2015-05-18):
Hello,

Well written article! I have a question regarding my qPCR experiment and the application of the delta delta Ct method to my data. I have the Ct values, efficiency and slope of 3 reference genes and 3 target genes. However, the cDNA is added to the SYBR Green mix as a 500x dilution (I started with 200 ng cDNA in 20 ul) for the reference genes, while the cDNA for my target genes is diluted only 20x. Is it possible to apply the method when I use different dilutions? How can I calculate back to be able to compare the data?

Thank you!

Nashar (2015-04-17):
Hi Tony, I usually go to ResearchGate to get answers, but found this blog interesting. I read that it is best to normalise gene expression to more than one reference gene. We are comparing target gene expression in treated and untreated animals. We would like to use one in-house reference gene and another reference gene (the same target gene in another organ). So let's say we are looking at expression of the target gene in the spleen and comparing it to the intestine, pancreas, etc.; in this case we use b-actin as the in-house reference and the lungs as an organ reference for all the other organs mentioned. How do you calculate ddCt between treated and untreated control animals with the two references above? My reasoning is as follows: 1) in the untreated animals, normalise to the b-actin reference (calculating dCt1), then normalise the same untreated samples to the lungs (calculating dCt2), and call the difference ddCta. 2) Do the same for the treated animals and call it ddCtb. 3) Calculate the difference between ddCta and ddCtb. Does this make any sense?

Thank you
Krista Beach (2015-03-31):
Thank you for this excellent guide!

I have REST-MCS Version 2 (a very old Excel-based spreadsheet version with protected hidden macros, copyright 2005-2006, so probably what you mean by REST 2005), and I worked out a spreadsheet trying to duplicate its results to figure out how the software used the different efficiencies of my four reference genes. I worked it four different ways, and the one that yielded the same results as my REST-V2 was as follows:
1) Calculate delta-Ct for each reference gene and target gene separately: delta-Ct = (MeanCt-control - MeanCt-Tx).
2) Calculate Eff^delta-Ct for each reference and target gene separately: Eff^delta-Ct = (Efficiency + 1)^delta-Ct when efficiency is expressed as [2^(-1/slope of std curve) - 1]. (My old version of REST does not subtract 1 from the efficiency like you do, so it doesn't have to add it back in again here. Just be sure to know which way your efficiency is expressed before using this equation.)
3) Calculate the geometric mean of the Eff^delta-Ct of all your reference genes: geomean for n genes = n-th root of (Eff-ref1^delta-Ct-ref1 * Eff-ref2^delta-Ct-ref2 * ... * Eff-ref-n^delta-Ct-ref-n). This then goes in the denominator of your expression-ratio equation: expression ratio = (Eff-target^delta-Ct-target) / geomean(Eff-ref1^delta-Ct-ref1, Eff-ref2^delta-Ct-ref2, ..., Eff-ref4^delta-Ct-ref4). This gives the same number as REST-V2's "absolute gene regulation" for each treatment condition.

Thus the geometric mean is not taken until the very last step. Other methods I tried took the geometric mean of the mean reference Cts and/or the geometric mean of the reference genes' efficiency values; though these yielded results within 0.01 of each other, they were not identical to the REST-V2 output. I don't know about later versions of REST. I downloaded REST 2009 and found that it was only able to do one treatment condition, whereas my experiment has three. My old REST-V2 can do up to 6 conditions, so I find it more useful, though it can only do 10 genes at a time.

I haven't figured out how it calculates the standard errors yet. That is giving me more trouble, as I can't seem to wrap my brain around the error equations in your post. I have never worked with Taylor series before, and as I am missing their background logic I find them confusing.

My question is this: underneath the numbers for absolute gene regulation, REST-V2 gives their standard error. Is it appropriate to use these as error bars for a graph of absolute gene regulation? Isn't this the same as fold change, for which you said that SE is not appropriate because it is not normally distributed? If I can't use these as error bars, what would you recommend plotting, since graphs nowadays all need error bars? REST-V2 does not give delta-delta-Cts as output, nor does it give the upper and lower limits of errors as you show, and since the macros are hidden I can't make it show its earlier steps. To plot delta-delta-Cts I would have to make my own spreadsheet (as I have done here) and recalculate everything, which almost defeats the purpose of running it through REST, though REST-V2 calculates my amplification efficiency, efficiency SE, and gives me p-values. Still, it would be strange to find that the output of the program cannot be graphed directly but must be transformed first. What are your thoughts on this?

Tony McBryan (2015-03-07):
The subset of the input would be a fixed percentage of the lysate that you PCR (or sequence).
In your case your DF is 25, as you say.

% input is a rough approximation of the idea that your protein binds to some proportion of the DNA in your sample. If you immunoprecipitate, then PCR, and get the same Ct values as the DF 25 input that you PCR at the same time, then the argument goes that your protein must have been on 1/25th of the DNA in the sample. Obviously this is a bit of an oversimplification, but it's close enough to reality to be a common way of thinking about ChIP-PCR experiments.

L (2015-02-17):
Sorry, DF = 25.
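The arithmetic behind that argument can be written down directly. A sketch, with the function name mine, DF = 25 as in the comment, and ~100% amplification efficiency (a perfect doubling per cycle) assumed:

```python
from math import log2

def percent_input(ct_ip, ct_input, dilution_factor=25.0):
    """Percent input for ChIP-qPCR, assuming a perfect doubling per cycle.

    The input aliquot contains only 1/DF of the material that went into
    the IP, so its Ct is first shifted down by log2(DF) to represent
    100% of the input.
    """
    adjusted_input_ct = ct_input - log2(dilution_factor)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Same Ct in the IP as in the DF 25 input aliquot -> the IP captured
# 1/25th of the DNA, i.e. 4% input, matching the reasoning above:
percent_input(ct_ip=28.0, ct_input=28.0)  # → 4.0

# An IP coming up 3 cycles later than the same aliquot -> 8x less, 0.5%:
percent_input(ct_ip=28.0, ct_input=25.0)  # → 0.5
```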
L (2015-02-16):
Hi Tony,
This post is very helpful.

Two questions about ChIP-qPCR data and % input.

First, I was wondering what exactly you mean by "subset of the input". For my ChIP protocol, I perform immunoprecipitation on a 1 ml aliquot of cell lysate. I also take a 40 ul aliquot of cell lysate as my "total input"; this second aliquot is not IPed but is otherwise treated essentially identically. I do not otherwise dilute the total input sample. So in this case my input dilution factor would be 250 (or 1000 ul / 40 ul), correct?

Second question: what exactly does % input mean? Is it accurate to say that % input is the percentage of your target amplicon that is bound by the IPed protein in your cells?

Best,
L

Tony McBryan (2015-02-14 22:10):
You'll need to define what you mean by "is present" for a gene. All, or most, genes are probably transcribed at some level even when transcription of the gene is suppressed via transcription factors and so on. If you run a PCR long enough you'll detect this transcription, and if you run it really long enough you'll probably get a positive signal for any gene even if you just have water in your sample.

If your question is "is the gene present at a particular Ct value" or "is the gene present at the same level as another gene", then this is just a measurement of the Ct value against whatever criteria you've defined.

Tony McBryan (2015-02-14 22:02):
The answer to this depends.
If you assume equal amplification within each condition, then the techniques I've described here are directly applicable. My spreadsheet already supports the use case where you have different amplification efficiencies between control and treatment, and REST 2009 and similar software packages support this too.

If you cannot assume equal amplification within your sample groups, this implies there might be some confounding factor affecting your experiment, which you would be best trying to figure out first.

That said, if the confounding factor can't be eliminated easily (e.g. human tissue samples), there's no particular reason you couldn't extend the approach described in this article to tackle this use case. Specifically, you would calculate an efficiency separately for every sample, use this to calculate concentrations in each sample separately, calculate the ratio between control and treatment, and then log-normalise these.

I think this is probably overkill for virtually any experiment, so the task of actually doing it is left as an exercise for the reader.

Tony McBryan (2015-02-14 21:49):
In my guide I combine all the sources of error either additively or via a Taylor series expansion (see the text on this page for descriptions). I believe this is the most prudent approach, although you will certainly see lots of examples of people who do it differently.

T-tests don't take either standard error or standard deviation as inputs; they work on independent, normally distributed sample data. dCts are our normally distributed data source, with one data point per sample, so they are ideal for use in the t-test.

Tony McBryan (2015-02-14 21:43):
It is more likely that both biological and technical errors are normally distributed at the level of the Ct values. Even if you had biological variation that was somehow normally distributed at the level of concentration, then, due to the exponential amplification of changes at the Ct level, it would be entirely swamped by even a tiny element of technical error.

We would generally assume, barring systematic error, that all the variance is normally distributed at the Ct level and is a combination of both biological and technical variation. Subsequently you'd calculate a single standard deviation at the Ct level, which would contain both technical and biological variation.
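The advice above, a t-test applied to per-sample dCt values rather than to fold changes, looks like this in practice. The Ct numbers are invented and scipy is assumed to be available:

```python
from scipy import stats

# Invented per-sample dCt values (target Ct minus reference Ct), one
# value per biological replicate; dCt is the normally distributed
# quantity, so the t-test is applied here rather than to fold changes.
dct_control = [5.1, 5.4, 4.9, 5.2]
dct_treated = [3.0, 3.3, 2.8, 3.1]

t_stat, p_value = stats.ttest_ind(dct_control, dct_treated)

# The effect size is still reported as a fold change via ddCt:
ddct = sum(dct_treated) / 4 - sum(dct_control) / 4
fold_change = 2.0 ** -ddct
```

The same test on the fold-change values themselves would violate the normality assumption, which is the point being made in the comment.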
Ricardo Carvalho (2014-10-10):
Hi,
Can someone help me with a problem? If I am using real-time PCR just to check whether one or more genes is present or not, do the Ct values for the present genes need to be the same, or almost the same?

Thank you in advance,
R. Carvalho

C Baker (2014-10-06):
I have a comparative question for which there may be help in the community. I am trying to compare the expression of multiple genes in brain tissue across two different mammalian species. Because any particular gene will have nucleotide differences between species, I don't want to assume equal amplification efficiencies for any gene. I have collected Cts and efficiencies for multiple genes in both species, including those for one or two genes that appear to serve as viable housekeeping genes for normalisation. The trick is how to correctly present the data and draw biological conclusions. I am not aware of any software package or published protocol for relative quantification of gene X in samples A and B when the efficiency may differ between samples.

Any advice is appreciated.

C.A. Baker

Alexander Hedbrant (2014-10-01):
Hi, thanks for a very helpful post. I just have some questions I'm trying to get my head around.

First, I have a question regarding the delta Ct SEM value.
In your spreadsheet you calculate the SEM for condition A and condition B and then pool these into a common delta Ct SEM. In another guide, from Applied Biosystems, they seem to use the delta Ct SEM from just the treated condition when calculating the delta delta Ct SEM and, further, the fold-change variation:
http://www3.appliedbiosystems.com/cms/groups/mcb_support/documents/generaldocuments/cms_042380.pdf (p. 58). Would you say that your calculation is the correct one?

My second question is about the t-test. For the t-test you use the built-in formula in Excel, which calculates the SD and mean from the given dCt values in the grey box. Are these the correct SD values to use in the t-test, or should you use delta Ct SD values calculated the same way as the "Sub errs" (but with SD instead of standard error, of course)?

I hope you can understand what I am asking!

Kind regards,
Alexander Hedbrant

Joshua Krämer (2014-09-23):
Thank you very much for this article. Even in the article by Livak and Schmittgen [1], which I used as a reference, wrong formulas based on arithmetic means and standard deviations are used instead of the correct geometric metrics. No wonder there is so much confusion around, with the lack of clear and correct articles!

I want to plot concentration values with standard deviation (not standard error) bars, because I need a measure of the spread (something about which many people also seem to be confused, thinking the SEM would be a suitable measure of spread [2]).

Thus, I calculated geometric means and geometric standard deviations from the final concentration values 2^(–ΔΔCₜ). However, one thing is still not clear to me:

If I am right, the spread of the final concentration values is made up of two components: (a) the "biological" variability already present in the population, and (b) the "technical error" variability due to technical/methodical inaccuracies. If we assume that the real concentrations in the original population are normally distributed, the "biological" spread should follow a normal distribution at the level of the final concentration values, while the "technical error" spread should follow a normal distribution at the level of the Cₜ values and thus a logarithmic distribution at the level of the final concentration values.

Accordingly, if the "biological" spread is (much) bigger than the "technical error" spread, it would be more appropriate to calculate arithmetic standard deviations for the final concentration values, wouldn't it? If both spreads contribute substantially to the data dispersion, I really do not know how to calculate proper standard deviations...

[1] http://www.ncbi.nlm.nih.gov/pubmed/11846609
[2] http://www.sportsci.org/resource/stats/meansd.html

Tony McBryan (2014-09-21):
I want to clarify first that I don't particularly recommend using my spreadsheet for production qPCR; it's not really intended for that. It was written to explain *why* qPCR is done the way it is and to provide examples of how the maths is applied in practice.

I also argue that you probably shouldn't be using spreadsheets at all to do this.

I strongly recommend that, unless you have a very unusual experimental setup, you default to using the software I listed in my post (i.e.
REST 2009 et al.). These have been used and tested much more extensively than my (or pretty much any) little spreadsheet has.

Onto your specific questions, though:

1. Yes, this is correct.

2. If you are using multiple housekeeping genes, I would generally recommend taking the geometric mean of the housekeeping genes as the very first step, then using the same equations as in (1) as normal.

3. You should use dCt values as the basis for the t-test, not the ddCt values. I've not used NormFinder, but it claims to find the best housekeeping gene from your set. This is basically an alternative to taking the geometric mean of all the housekeeping genes.

4. You would typically perform the t-test on the dCt values.

5. A p-value is the probability of obtaining a result at least as large as the observed effect, assuming the null hypothesis is true. I probably can't do a better explanation than the one on Wikipedia: http://en.wikipedia.org/wiki/P-value

6. That housekeeping genes do not change is an assumption. You can validate it in a few ways: (a) validating empirically that they don't change in your model system across multiple replicates (raw Ct value always the same), (b) using something like NormFinder to pick the best one, or (c) using lots of them and assuming the geometric mean will be more stable than any single one.

7. If the ratio is > 1 it's an increase (numerator greater than the denominator); if the ratio is < 1 it's a decrease (numerator less than the denominator).

8. The t-test and ANOVA are basically the same thing (or, more accurately, the t-test is a specific instance of ANOVA). I suggest the actual flow you want is: Ct values -> software that does qPCR -> interpretation of the results in the context of the experiment.

cherry (2014-08-31):
Hi Tony, great information; your post has helped me a lot. Nevertheless, I still have some doubts about qPCR normalisation.

Please bear with me, as the following questions might be naive; I am still new to qPCR. I have read a lot of articles and forums on how to analyse qPCR results, but I just couldn't figure out the overall flow/steps/fundamentals of normalising a qPCR result. Perhaps you could help me by describing the flow of analysing qPCR data?

1.) One of my experimental objectives is to study the expression of 8 ncRNA genes to check whether they are up- or down-regulated under different conditions. I have 2 conditions in my experiment: glucose as my experimental control and polyethylene (PE) powder as my treated condition. For your information, I am quantifying using absolute quantification, and 3 housekeeping genes (HKGs) were tested together in this experiment. I have decided to use the ddCt method and the geometric mean to normalise my data. You talked about the ratio, which is:

Ratio = 2^[CT(target, untreated) − CT(target, treated)] / 2^[CT(ref, untreated) − CT(ref, treated)]

For my case, I should do it like this:

Ratio = 2^[CT(target, glucose) − CT(target, PE)] / 2^[CT(ref, glucose) − CT(ref, PE)]

Am I right?

2.)
Apparently, I found another similar equation:

Ratio = 2^[CT(target, untreated) − CT(target, treated)] / cube root of [H(X)g1 * H(X)g2 * H(X)g3]

where the denominator is the geometric mean of multiple HKGs. Is it correct to normalise my result using only this second equation and ignore the first one, since I have 3 HKGs? Is the first equation meant for single-gene normalisation?

3.) Let's say I have 12 samples, 6 each for glucose and PE. After substituting my Ct values into the ratio equation I get 12 respective ratios. Should I proceed to an SPSS independent t-test analysis, or should I use another program called NormFinder? I got to know this program via published papers, but I do not really understand the principle and functions behind it. I hope you can explain this.

4.) Is it necessary to normalise our data before proceeding to the t-test analysis? Can I just do the t-test directly without doing the ddCt method first? That is, I have the Ct results for the target gene and HKGs from the qPCR experiment; can I take the Ct values and perform the t-test directly, without the ddCt method?

5.) A lot of papers mention the p-value. Could you please tell me what a p-value is, and how I can find it?

6.) Is normalisation there to validate our HKGs?

7.) How do I know whether a gene is up- or down-regulated based on the ratio?

8.) Based on my understanding, the flow would be: Ct values obtained > substitute into the ddCt/geometric-mean equation > t-test analysis > ANOVA > analyse whether the results are significantly different. Please tell me if I've missed anything.

I would very much appreciate your help on this. I know that I am weak on the fundamentals, and I'm sorry if I have caused any inconvenience.

Thank you.

Cheers,
Cheryl =)

Raj the king (2014-08-31):
Best blog ever on this topic; simple and clear. You have answers for every doubt of mine. Thanks, and good luck.

Tony McBryan (2014-08-30 19:06):
If you are doing an immunoprecipitation to be measured by PCR, then you are generally intending to measure the quantity of DNA (or RNA, in your case) that is pulled down.

If you do scaling before you do the PCR, then you've just changed the amount of DNA/RNA being measured. So, if you want to be able to compare two DNA/RNA samples against each other, you must do any scaling equally between the samples (i.e. each sample is 5% of what was pulled down).

The exception is if you have a strong belief that your differential scaling normalises for a pre-existing, known difference between samples. For example, senescent cells have less total histone than proliferating cells, so if you are examining the relative abundance of some histone modification you might normalise your samples by total histone content before performing your PCR. If you do this sort of thing, you will likely be asked by reviewers to also provide the same PCR without this normalisation as an additional figure.

Another exception is if you intend to perform a sequencing experiment to determine the distribution of your DNA/RNA in a genomic context, in which case you may simply have to load equal quantities of DNA/RNA due to technical requirements.
In this case you could attempt to spike in known quantities of an artificial DNA sequence before extracting equal amounts for sequencing, in order to recover an estimate of the original absolute abundance. This is not necessarily as easy as it sounds.

Tony McBryan (2014-08-30 18:55):
Essentially yes, standard error only makes sense with normally distributed data.

You could calculate the standard error from the ddCt values and then convert the corresponding values onto the fold-change scale: 2^(ddCt + SE) is the upper bar on your fold change, while 2^(ddCt − SE) is the lower bar.

However, I dislike plotting fold changes at all. You either plot the ratio on a scale from 0 to infinity, in which case all decreasing fold changes are crammed into 0-1 while increasing fold changes have a dynamic range of 1 to infinity, which is entirely arbitrary; or, as some people do, you take the range 0-1 and invert it, getting a scale from minus infinity to infinity but with a gap between -1 and 1 that cannot contain any values.

Finally, and I think most importantly, fold change is fundamentally log-normally distributed, so the appropriate way of presenting it is on a log scale, which is what you already have with ddCt.
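The conversion described in that last answer, symmetric SE bars on the ddCt scale becoming asymmetric bars on the fold-change scale, can be sketched as follows (invented ddCt values, using the 2^(ddCt ± SE) convention from the comment):

```python
from math import sqrt

# Invented per-sample ddCt values for one gene (log2 fold changes):
ddct = [2.3, 1.9, 2.2, 2.0]

n = len(ddct)
mean = sum(ddct) / n
sd = sqrt(sum((x - mean) ** 2 for x in ddct) / (n - 1))
se = sd / sqrt(n)

fold = 2.0 ** mean            # point estimate of the fold change
fold_hi = 2.0 ** (mean + se)  # upper error bar
fold_lo = 2.0 ** (mean - se)  # lower error bar

# The bars are symmetric on the log2 (ddCt) scale but asymmetric on the
# fold-change scale: fold_hi - fold is larger than fold - fold_lo.
```

Plotting mean ± SE on the ddCt scale directly, as the comment recommends, avoids this asymmetry altogether.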