First, let's discuss the normal course of events when you have the luxury (time?) to measure one quantity eleventy-two times. We won't usually have that luxury, and we'll have to estimate the experimental uncertainty without knowing it very well! The game is then {\em estimation} of the uncertainty itself by some transparent, explicit means, followed by {\em error propagation}. If you want to skip to that part right off, I understand.
Suppose then you make many measurements of one quantity associated with an object (e.g. the mass of an object, the speed of an object, the metabolic rate of an object, etc.), and you just plot your data in a simple-minded way.
Figure 1. Fulmars (northern fulmars), and measured metabolic rates of a small sample of males and females.
There are of course important statistical measures of the measurements, such as mean, standard deviation, and standard error of the mean. Important questions arise.
Figure 2. Principal statistical measures of some quantity '$x$'.
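As a concrete illustration, here is a minimal Python sketch of how the quantities in figure 2 would be computed from a small sample; the metabolic-rate values are invented for illustration only.

\begin{verbatim}
import math

# hypothetical metabolic-rate measurements (arbitrary units) -- illustrative only
M = [4.2, 3.9, 4.5, 4.1, 4.4]

N    = len(M)
mean = sum(M) / N                                # best value, the mean
var  = sum((x - mean)**2 for x in M) / (N - 1)   # sample variance
sd   = math.sqrt(var)                            # standard deviation
sem  = sd / math.sqrt(N)                         # standard error of the mean

print(f"mean = {mean:.2f}, sd = {sd:.2f}, sem = {sem:.2f}")
\end{verbatim}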
4 Important Questions:
From the graph in figure 1, what is $\overline{M}$, the mean metabolic rate for these two populations?
From the calculated standard deviation, how many digits can we keep in the 'best value' of the measured quantity; that is, how many are 'significant'? Discuss.
Knowing that the standard deviation itself is an estimate, how many digits of it should we keep (i.e., how many are significant)? Discuss.
Express the best value for each population in the form $M = M_{best} \pm \Delta M$; is there a significant difference between the two populations? Is the discrepancy between the two measurements greater than, equal to, or less than the uncertainty in those two quantities?
The best value is the mean. The best estimate of $\Delta M$ is the standard deviation. We do not have nearly enough measurements to know what the distribution of values actually is, and assuming a Gaussian is just that, an assumption. However, the mean and standard deviation are calculable. In what follows, we'll suppose that $\Delta M$ may or may not have been calculated as a standard deviation of many measurements; rather, we will try to estimate it. In fact we'll suppose we have estimated it, and then feed that estimate into the general process of error propagation.
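For example (with invented numbers, not values read off figure 1): if the two samples gave $M_1 = 4.2 \pm 0.2$ and $M_2 = 5.0 \pm 0.3$ in some common units, the discrepancy $|M_1 - M_2| = 0.8$ exceeds both uncertainties (and even their sum, $0.5$), so the difference would be significant; had the discrepancy been comparable to or smaller than the uncertainties, it would not be.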
Suppose that in some physical model the quantity of physical interest is some algebraic power of the measured quantity, say
$c = k/\sqrt{M}$, where $k$ is some constant, and we want to know the uncertainty of $c$ given the (estimated!) uncertainty of $M$. We estimate the uncertainty in $c$ with a Taylor expansion, truncating all but the 'linear' term in the expansion.
\begin{equation}
c \pm \Delta c = \frac{k}{(M \pm \Delta M)^{1/2}} = \frac{k}{\sqrt{M}\left(1 \pm \Delta M/M\right)^{1/2}} \approx c \left( 1 \mp \frac{1}{2} \frac{\Delta M}{M} + \cdots \right).
\end{equation}
We have dropped the higher-order terms in the Taylor expansion, assuming that $\Delta M/M \ll 1$; in any case we will take the result as an estimate. Dividing through by $c$, subtracting the 1 on both sides, and keeping only the magnitude of the shift (an increase in $M$ decreases $c$, and vice versa), we have
\begin{equation}
\frac{\Delta c}{c} = + \frac{1}{2} \frac{\Delta M}{M}.
\end{equation}
The left-hand side is the relative uncertainty in $c$, and it works out to be one half times the relative uncertainty in $M$; the factor of one half is the absolute value of the exponent in the functional dependence of $c$ on $M$.
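As a quick numerical sanity check of this linearized estimate (a sketch with arbitrary illustrative values; the relative comparison does not depend on $k$):

\begin{verbatim}
import math

k, M, dM = 1.0, 100.0, 4.0           # arbitrary illustrative values, dM/M = 4%

c      = k / math.sqrt(M)
c_low  = k / math.sqrt(M + dM)       # raising M lowers c, and vice versa
c_high = k / math.sqrt(M - dM)

dc_direct   = (c_high - c_low) / 2   # half the spread, by direct evaluation
dc_estimate = 0.5 * (dM / M) * c     # linearized estimate, (1/2)(dM/M) c

print(dc_direct / c, dc_estimate / c)   # both are close to 0.02
\end{verbatim}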
Using this same approach one can show that for $c = ab$ or $a/b$, given the uncertainties $\Delta a$ and $\Delta b$, we have \begin{equation}
\frac{\Delta c}{c} = \frac{\Delta a}{a} + \frac{\Delta b}{b}, \end{equation}
where the absolute value of the algebraic power of $a$ and of $b$ is unity in both cases. We take the absolute values because negative contributions cannot appear in such sums; all the uncertainties {\em add}.
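As a quick check with made-up numbers: take $a = 10 \pm 1$ (10\%) and $b = 5.00 \pm 0.25$ (5\%). Then $ab$ ranges from $9 \times 4.75 = 42.75$ up to $11 \times 5.25 = 57.75$ about the central value of $50$, i.e. roughly $\pm 15\%$, in agreement with the sum $10\% + 5\% = 15\%$.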
If $c = x^n y^m z^o$, then we find \begin{equation}
\frac{\Delta c}{c} = |n|\frac{\Delta x}{x} + |m|\frac{\Delta y}{y} + |o|\frac{\Delta z}{z}. \end{equation}
The relative uncertainty in the quantity $c$ is the sum of the relative uncertainties of the variables, each weighted by the absolute value of its exponent.
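In practice this rule is easy to mechanize. The following is a minimal Python sketch (the function \texttt{propagate} and its interface are my own illustration, not a standard routine); it takes (value, uncertainty, exponent) triples and returns $c$, its relative uncertainty, and $\Delta c$.

\begin{verbatim}
def propagate(*factors):
    """factors: (value, uncertainty, exponent) triples for c = prod(x_i**n_i).
    Returns (c, dc/c, dc) using dc/c = sum |n_i| * dx_i/x_i."""
    c, rel = 1.0, 0.0
    for x, dx, n in factors:
        c   *= x**n
        rel += abs(n) * dx / x
    return c, rel, rel * c

# example: c = x**2 * y / z with 1%, 2%, and 3% relative uncertainties
c, rel, dc = propagate((2.0, 0.02, 2), (5.0, 0.10, 1), (4.0, 0.12, -1))
print(c, rel, dc)   # rel = 2(0.01) + 0.02 + 0.03 = 0.07
\end{verbatim}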
There are of course more precise (and smaller) estimates of the uncertainties that could be pursued if the uncertainties in the measured quantities were better known, but we won't pursue these here. We end this brief note by returning to question 4 above: how to express $c = c_{BEST} \pm \Delta c$? Because we have made a good-faith estimate of $\Delta c$, and it is not the result of a rich statistical set of many measurements of the same thing, we round the value of $\Delta c$ to just one significant digit. We also round the best value so that the decimal place of its least significant digit is the same as the decimal place of the uncertainty. For example, if $c_{BEST} = 2.423 \times 10^3$ and $\Delta c = 2.157 \times 10^2$, we present our result as $c = (2.4 \pm 0.2) \times 10^3$.
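A minimal sketch of that reporting rule in code (again an illustrative helper of my own, not a library routine):

\begin{verbatim}
import math

def report(best, unc):
    """Round unc to one significant digit, then best to the same decimal place."""
    exp    = math.floor(math.log10(abs(unc)))   # decimal place of the leading digit
    unc_r  = round(unc,  -exp)
    best_r = round(best, -exp)
    return best_r, unc_r

print(report(2423.0, 215.7))   # (2400.0, 200.0), i.e. (2.4 +/- 0.2) x 10^3
\end{verbatim}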