Every lab will have at least one plot with data, error bars, and a theory curve. This implies the creation of a model, a mathematical model. This model will often come in the form of an expression that is typically 'solved for' a variable, y. This variable y is synonymous with 'dependent variable', 'output of the model', and 'measured quantity of interest'. And let's add 'model prediction'. After your experiment, that is, after collecting a data set, you will have an organized array of measured values and uncertainties obtained by varying something, the 'something' represented in the model as the variable x. Here, the variable x is synonymous with 'independent variable', 'input to the model', and 'thing-you-can-control-in-the-experiment'. Naturally, the model is basically y=f(x), a mathematical expression, which can be read as 'output-of-experiment = experiment(input-to-experiment)', where the expression 'experiment$(argument)$' is a metaphor for the process by which a physical system of apparatus and instruments converts some input into some output, and for how that is analogous to a mathematical function converting its argument into the value $f(x)$, a function that adequately 'models' that system, or is tentatively thought to do so. Physical theory suggests the model to the experimenter [1]. Ultimately, we'd like to use the mathematical function to make model predictions in some way. In our case, the web-based suite of modeling routines hosted by fitteia.org may be used. It is required in the section of PHYS272L in which you are enrolled. It is very powerful, and free to use [2]. One translates the pretty model/equation into the less pretty C programming language, a task that seems simple but is not trivial (the translation, I mean). Using 'C' is not trivial either, but for other reasons.
So, for example, if you thought something ought to be linearly related to something else, a spring's restoring force versus the displacement of the hanging mass from the equilibrium position, for instance, you'd write something like $ y = a*x + b$, perhaps. Or, if you thought the model was nonlinear, as with the dependence of the frequency of the simple harmonic oscillator on the spring constant, you might write something like $ y = sqrt(x/m),$ or $ y = a*pow(x,0.5). $ These are simple examples. You would type them into the input buffer called '$3^0$ - Functions and Parameters', and record in your lab notebook what '$y$' means, which column of data it belongs to, and so forth. This process of definition and abstraction is not at all trivial. But it is powerful. We will do a lot of this over the course of the semester, so you will need to learn how to use fitteia.
Let's pursue a specific example. Some of the units have been changed for the purposes of obfuscation. Suppose you are trying to measure something for which an appropriate mathematical model is of the form $y = \sqrt{bx}$. Your fitting parameter is '$b$'. The essential physics you think applies to the experiment (the theory) suggests that your dependent variable depends nonlinearly on your independent variable in the manner specified (i.e., $y$ varies as the square root of $x$). So, you vary '$x$' 4 times and measure '$y$' 4 times. You record your 4 data points in the form $x,y,\Delta y$, a 3-column array of experimental results (probably in Excel, which you then print, cut, and tape into your lab notebook, although it's safer to do it in the reverse order, noting the file location, in ink, in your notebook). You then paste this array into the appropriate spot on fitteia.org, as shown in Fig. 1 below. The best value of $b$ is given as 2.19546, and we have to ask: how well do we know this number, $b_{best}$? How many of those decimal places are actually known, and thus significant? What is its uncertainty, and what are its units? And to what do we compare $b_{best}$ in order to arrive at a discrepancy? We have all these probing questions regarding the results of the analysis, and what we would like to do now is try to answer them. In fact, as good physicists, it is imperative that we answer them. Let's return to the $b_{best}$ output. Are all those digits certain? How many digits can we really keep, and what is the uncertainty in the last digit we can keep, the least significant digit? The Goldilocks plot (explained elsewhere) will give you the appropriate uncertainty in the least significant digit. The fitting routine itself furnishes an error estimate for the best-fit value (expressed with many, many 'significant' figures, not all of which are certain!), but the statistical assumptions underpinning that calculation may not apply to our approach, hence the need for a Goldilocks plot.
The result of this critical analysis of fitteia's output will be something like '$b = b_{best} \pm \Delta b$', where the $\Delta b$ term expresses the uncertainty in the best-fit value, and the best-fit value is truncated appropriately to include only the significant digits, as determined by the most significant digit of the uncertainty. Great. But to what do we compare $b_{best}$ in order to arrive at a discrepancy?
Well, that would be the experimental value of $b$ based on measurements. Suppose, according to the model, $b = g/(\mu L^2)$, with, of course, $g$ being the acceleration due to gravity, in furlongs per fortnight$^2$; $\mu$, some weird mass density in $deniers$, I don't know, pounds per furlong; and $L$, a length in furlongs, to be consistent. You have experimental uncertainties in all of those values. You punch in the numbers and you calculate $b_{experimental} = 2.3547912 \times 10^{3} (kpound-fortnight^2)^{-1}$, which has the same units as the best-fit value but is off by a factor of $10^3$. I bring this up because this sort of thing happens a lot. Sometimes it's a factor of a billion (that's when you start checking to see whether your data has units of $nm$ or $m$). In this case, you trace your factor of a thousand to the fact that you should have plotted the independent variable in kilopounds, not pounds. So you move the decimal place in the x data, either in the fitteia input or in the spreadsheet (and then paste it back in), click 'fit', and now $b_b = 2.19546\times 10^3$ and $b_e = 2.3547912\times 10^3$ in the same units, with shorter subscripts. I know it's grams in the picture above, but pretend it's pounds. Anyway, progress. The difference between these numbers is the discrepancy.
So do these values 'agree within experimental error'? That is, is the discrepancy less than or equal to the uncertainty? Well, I don't know; I'd have to calculate $\Delta b_e$ by propagating the more basic, primitive uncertainties in $g$, $\mu$, and $L$ through that expression using standard techniques, which I won't go into here, but which I will assume you all know. Let's say $\Delta b_b = 0.1$ and $\Delta b_e = 0.3.$ How would that comparison be written in your abstract, using significant figures appropriately, and what would you say there about agreement being 'within experimental error' or not? This question will be put to you on day 1. Have an answer ready.
Once you do, you'll recognize that the modeling exercise has furnished crucial qualitative and quantitative comparisons between theory and experiment, comparisons that permit the experimenter to take a defensible position on the question of whether the data was good or not. The experimenter will have taken an active role in coming to that opinion, and will have grown in some measure in the ability to make an independent judgment about that important question, which is one of the important goals of these laboratory exercises.