We can determine \(\delta V_1\) and \(\delta V_0\) experimentally by measuring the statistical spread in the minimum and maximum values of \(V_{pd}\) (when \(\phi = \frac{\pi}{2}\) radians and \(\phi = 0\), respectively). The polarizer angle \(\theta\) is set mechanically, so the uncertainty \(\delta\phi\) depends on how accurately a nominal 15° step change in \(\theta\) can be made with the apparatus at hand.
Now we encounter an apparent conundrum: we want to use the curve fit to determine a best estimate for \(V_0\), but evaluating \(\delta V_{pd}\left(\theta\right)\) requires us to already have a numerical value for \(V_0\). How do we determine the numerical values for \(\delta V_{pd}\left(\theta\right)\) needed to find a best estimate of \(V_0\) if we don't already know \(V_0\)? This, however, is not the impasse it might seem. The curve-fitting algorithm already requires that we supply an initial guess for the parameters. In this situation, then, we can carry out the curve-fitting algorithm a second or third time instead of just once, each time using the best-fit values output by the previous fit as the initial values for the new fit. Once the output values match the input values (within uncertainty), we stop. When the output values match the input values, we say the results are self-consistent.
Here is the procedure for finding self-consistent 'best fit' values from curve-fitting:
  1. Make a rough initial guess for the parameters \(V_0\), \(V_1\), and \(\theta_0\) from a graph of the data.
  2. Use the values of \(V_0\), \(V_1\), and \(\theta_0\) output by the curve-fitting routine as a new 'initial guess'.
  3. Repeat the curve fit (using each output as a new input) until \(V_0\) stops changing (within uncertainty).
Note: if you know how to program in Python, this would be a great place to simplify your life by introducing a while loop into the code that repeats the curve fit until the results become self-consistent. We plan to add a section illustrating how to do that in a future version of this guide.
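In the meantime, here is a minimal sketch of what such a self-consistency loop might look like, using SciPy's `curve_fit`. The model function, the synthetic data, and the \(V_0\)-dependent uncertainty below are illustrative placeholders, not the actual quantities measured in the lab; substitute your own model and data.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(theta, V0, V1, theta0):
    # Placeholder Malus's-law-style form; use the model from your own analysis.
    return V0 * np.cos(theta - theta0)**2 + V1

# Synthetic "measurements" at nominal 15-degree steps of the polarizer angle.
rng = np.random.default_rng(0)
theta_data = np.deg2rad(np.arange(0, 181, 15))
V_data = model(theta_data, 2.0, 0.1, 0.05) + rng.normal(0.0, 0.02, theta_data.size)

dV0, dV1 = 0.02, 0.01                # measured spreads at the max and min of V_pd
params = np.array([1.5, 0.0, 0.0])   # step 1: rough guess from a graph of the data

for _ in range(20):                  # steps 2-3: refit until self-consistent
    # Hypothetical uncertainty that depends on the current estimate of V0;
    # this dependence is the reason the fit must be iterated.
    sigma = np.full(theta_data.size, np.hypot(dV1, 0.01 * params[0]))
    new_params, cov = curve_fit(model, theta_data, V_data,
                                p0=params, sigma=sigma, absolute_sigma=True)
    if np.allclose(new_params, params, rtol=1e-4):
        break                        # outputs match inputs: self-consistent
    params = new_params

V0_fit, V1_fit, theta0_fit = params
```

Because each pass starts from the previous best-fit values, the loop typically settles after only a couple of iterations; the fixed iteration cap simply guards against a fit that never stabilizes.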

Fitting the model to data

Calculating best fit values

We now turn to the actual Python code for non-linear curve fitting. Notice that this is a "weighted" fit, in that the stated uncertainty of each data point is taken into account during the fit. Practically speaking, this means the curve-fitting routine tries harder to match the model to the data at points with a smaller uncertainty (although it may not succeed), because those points are given greater importance ('weight'). This is as it should be, and it is also necessary for calculating a numerically accurate chi-square value to assess the "goodness of fit."
Here we assume that values have already been experimentally determined for the uncertainties in \(V_0\), \(V_1\), and \(\theta\). We will therefore leave these unchanged throughout the curve-fitting process.
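As a short sketch of how a weighted fit and the resulting chi-square value can be computed with SciPy, consider the following. The model function, the per-point uncertainties `dV`, and the synthetic data are all illustrative placeholders standing in for your measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(theta, V0, V1, theta0):
    # Placeholder Malus's-law-style form; use the model from your own analysis.
    return V0 * np.cos(theta - theta0)**2 + V1

# Synthetic data with an experimentally determined, fixed uncertainty per point.
rng = np.random.default_rng(1)
theta_data = np.deg2rad(np.arange(0, 181, 15))
dV = np.full(theta_data.size, 0.02)       # held fixed throughout the fit
V_data = model(theta_data, 2.0, 0.1, 0.05) + rng.normal(0.0, dV)

# Passing sigma with absolute_sigma=True makes this a weighted fit:
# points with smaller dV pull harder on the best-fit parameters.
popt, pcov = curve_fit(model, theta_data, V_data,
                       p0=[1.5, 0.0, 0.0], sigma=dV, absolute_sigma=True)

# Chi-square from the weighted residuals, for assessing goodness of fit.
residuals = (V_data - model(theta_data, *popt)) / dV
chi2 = np.sum(residuals**2)
dof = theta_data.size - len(popt)          # degrees of freedom
```

For a good fit with accurate uncertainties, the reduced chi-square `chi2 / dof` should come out near 1, which is why numerically accurate weights matter.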