Every lab will have at least one modeling plot. To cheat the process, just scroll to the bottom to see what this whole page is building up to.
Good notebook skills involve making plots right there on the quadrille paper, which is designed for exactly this purpose. One can quickly make reasonably neat tables too without a lot of effort, and then proceed to make quick work of a graph, or plot, even while the data is being taken. It helps to work in teams: one person calling out the measurements, another writing things down or adding data points to the plot, and so forth.
ANY plot will have axes and labels. Plots typically do not have titles in peer-reviewed journals; the titles on the plots displayed below were put there for training purposes only (training the instructor, that is... who is trying to teach himself something about 'matplotlib' in Python). Suppose you are observing some quantity decay in time, perhaps some radioactive substance of some sort, and someone calls out the readings of the Geiger counter while you make the so-called scatter plot shown below. Never mind whether we could actually do this realistically (we might need instruments to 'take the data' for us, because we'd be too bored, or because we could never write things down that fast).
Figure 1. A scatter plot, or a simple y-data vs. x-data plot.
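A plot like Figure 1 takes only a few lines of that same matplotlib. The decay numbers below are made up for illustration (they are not the figure's actual data), but the recipe is the general one: markers only, and axis labels always:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, so this runs in a plain script
import matplotlib.pyplot as plt

# Hypothetical Geiger-counter readings, normalized to the biggest count
t = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)   # time (s)
counts = np.array([1.00, 0.71, 0.52, 0.38, 0.26, 0.19, 0.14, 0.10, 0.07])

fig, ax = plt.subplots()
ax.plot(t, counts, "o")             # "o" = markers only: a scatter plot
ax.set_xlabel("time (s)")           # ANY plot has axes and labels
ax.set_ylabel("normalized counts")
fig.savefig("scatter.png")
```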
Huh. I must've normalized the counts to the biggest count. Well, let's continue anyway... So, this is cool and all, and sufficient for the kind of graph one makes on the fly as one takes data. It shows trends and could confirm model expectations and so on. It can reveal where you have enough data and maybe not enough, which is clearly of importance while you are taking data. There must be plots like this in your lab notebook; it is a necessary intermediate modeling step. But the graph does not yet reflect how certain we are of each data point. The reader will know already that the uncertainty of a data point is to be noted when it goes in the data table, or rather when it's taken. The estimate will have been made already, and the rationale for the estimate written down in the lab notebook! OK, so these uncertainties will be graphically depicted as error bars. These let us know how well we know the data points we are plotting. So a more developed graph now looks like this,
Figure 2. A scatter plot with simple error bars.
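Going from Figure 1 to Figure 2 is one extra call, `errorbar` in place of `plot`. The ±0.05 below is a stand-in for whatever uncertainty estimate actually went into your notebook, and the data are the same made-up decay numbers:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

t = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
counts = np.array([1.00, 0.71, 0.52, 0.38, 0.26, 0.19, 0.14, 0.10, 0.07])
dcounts = np.full_like(counts, 0.05)   # one uncertainty estimate per point

fig, ax = plt.subplots()
# yerr draws the vertical error bars; capsize puts little caps on their ends
ax.errorbar(t, counts, yerr=dcounts, fmt="o", capsize=3)
ax.set_xlabel("time (s)")
ax.set_ylabel("normalized counts")
fig.savefig("errorbars.png")
```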
And this is great, but it isn't yet what we want in any final sense. We want to use our model, whatever it is, to help us measure some quantity and determine an uncertainty in that quantity. How? If we can get some sophisticated program to (a) try a bejillion different values of a fitting parameter to find the model prediction for all possible values of the variable varied (i.e., the experimental 'independent variable'), which by the way gives us a curve of some sort, and (b) tell us which is the best curve by some statistical measure (e.g., $\chi^2$) and therefore the best fitting parameter, then we can use a model to compare with data and so 'measure' a best value of a fitting parameter. And this fitting parameter will either be the thing we are trying to measure, or will have that thing bundled into it. In either case, we will get an estimate of the thing we hoped to measure. Here, in this kinda-made-up-for-netflix-originals-science-example-thing, it is a half-life, say, $t_{1/2}$, something we can compare with a known value, an accepted value, or something like that. We'll use a very sophisticated web-based fitting environment (using math routines kept on CERN's servers), found at fitteia.org. More on that on the link,
'still more on lab records'.
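fitteia.org will do this for us online, but the idea of steps (a) and (b) can be sketched locally too. Here's a minimal sketch using scipy's `curve_fit` (this is my stand-in, not fitteia's machinery), with the half-life itself playing the role of the fitting parameter `a`, and the same made-up data and error bars as above:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
counts = np.array([1.00, 0.71, 0.52, 0.38, 0.26, 0.19, 0.14, 0.10, 0.07])
dcounts = np.full_like(counts, 0.05)

def model(t, a):
    """Exponential decay written directly in terms of the half-life a:
    N(t) = 2**(-t/a), so N(a) = 1/2."""
    return 2.0 ** (-t / a)

# Supplying sigma makes curve_fit minimize chi^2, the error-weighted
# sum of squared residuals; absolute_sigma treats the bars as real
# uncertainties rather than mere relative weights.
popt, pcov = curve_fit(model, t, counts, p0=[10.0],
                       sigma=dcounts, absolute_sigma=True)
t_half = popt[0]                  # best value of the fitting parameter
dt_half = np.sqrt(pcov[0, 0])     # its statistical uncertainty
print(f"t_1/2 = {t_half:.1f} +/- {dt_half:.1f} s")
```

Note that the routine hands back both the best value and an uncertainty for it; the Goldilocks business below is the by-hand, use-your-own-judgment version of that second number.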
To return to the plots, we'll throw our data into the fitteia input buffer (you may need to generate spreadsheets to make this a little easier), enter a modeling function (again, more on that elsewhere) with the half-life bundled into a fitting parameter cleverly named 'a' or 'b', something, anyway, and find a best fit. Then, because you are stubborn and wish to use your own judgment about just how good this fit is, you take control of the fitting process and play with the adjustable parameter using the Goldilocks approach, finding fitting curves that still fit within the error bars (drawn really big here to show more immediately what we are talking about) but are maybe TOO BIG, or TOO SMALL, including of course the one that is JUST RIGHT. That one determines your best value, and it is the one automatically calculated by the fitting routine. The other two curves help you determine the uncertainty in that value.... Please, please, please note the difference in principle between the uncertainty in the fitting parameter and the uncertainty in the measured data points! They will probably even have different units! You need to know both, and you cannot really have one without the other, but probably only the uncertainty in the fit will make it to the abstract. Why? Because only this uncertainty quantifies the goodness of the fit, how well the model and the data agree, something to appeal to, to look at, when comparing discrepancies and experimental uncertainties and so forth. Anyway, here's a Goldilocks plot, yes, let's call it that.
Figure 3. A Goldilocks plot: a scatter plot with simple error bars and 3 modeling curves, where the key fitting parameter is manually varied, choosing it to be now too big (but still within the error bars!), now too small, and now JUST RIGHT, which determines the best fit and its uncertainty. You have to make a Goldilocks plot to tell the uncertainty in the fitting parameter.
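A Goldilocks plot of this sort can be sketched in matplotlib by overlaying three curves on the error-bar plot. The three half-life values below (and the data) are illustrative choices of mine, not the figure's actual numbers; the outer two are hand-picked so the curves just barely stay within the error bars:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

t = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
counts = np.array([1.00, 0.71, 0.52, 0.38, 0.26, 0.19, 0.14, 0.10, 0.07])
dcounts = np.full_like(counts, 0.05)

def model(t, a):
    # exponential decay with half-life a
    return 2.0 ** (-t / a)

tt = np.linspace(0, 40, 200)   # smooth grid for the model curves
fig, ax = plt.subplots()
ax.errorbar(t, counts, yerr=dcounts, fmt="o", capsize=3)
# Fiddle the half-life by hand: too big, just right, too small
for a, label in [(11.5, "too big"), (10.4, "just right"), (9.0, "too small")]:
    ax.plot(tt, model(tt, a), label=f"$t_{{1/2}}$ = {a} s ({label})")
ax.set_xlabel("time (s)")
ax.set_ylabel("normalized counts")
ax.legend()
fig.savefig("goldilocks.png")
```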
The point of all of this, of course, is simply (or not so simply) to produce the modeling plot below. This one for sure is a keeper, and gets taped into your lab notebook. You cannot write an abstract before you do this. Well, you could, but.... well, it's that $t_{1/2} = t_{best} \pm \Delta t_{best}$ that goes in the abstract. And the most significant digit of the uncertainty here determines the least significant digit of the best value. So round things to get just one digit of uncertainty... don't get me started. The student will want to know about this $\chi^2$ thing. It is a statistical measure of the goodness of fit between data and a model, and we'll need to know more about that. We will use modeling software hosted by fitteia.org, described on another page.
Figure 4. A scatter plot indicating the best fit value, the uncertainty of that value, and the best fit curve. Note that $\Delta t_{best}$ arises from the fiddling with the fitting parameter that produces the Goldilocks plot.
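For the curious student, $\chi^2$ is nothing mysterious: it is the sum of the squared residuals, each weighted by its error bar, and "best fit" means the parameter value that makes it smallest. A brute-force scan (again with my made-up data, not fitteia's internals) makes the idea concrete, and it is really just the Goldilocks game done a bejillion times:

```python
import numpy as np

t = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
counts = np.array([1.00, 0.71, 0.52, 0.38, 0.26, 0.19, 0.14, 0.10, 0.07])
dcounts = np.full_like(counts, 0.05)

def chi2(a):
    """Sum of squared residuals, each weighted by its error bar."""
    residuals = (counts - 2.0 ** (-t / a)) / dcounts
    return float(np.sum(residuals ** 2))

# Scan many candidate half-lives; the minimum picks out the best fit
a_vals = np.linspace(8.0, 13.0, 501)
chi2_vals = np.array([chi2(a) for a in a_vals])
a_best = a_vals[np.argmin(chi2_vals)]
print(f"best half-life ~ {a_best:.2f} s, chi^2 = {chi2_vals.min():.2f}")
```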