I'm working on a simple linear regression model in a physics course, where we measure the round-trip speed of light over increasing distances. We are using the least squares method to determine the slope of a linear function $y(x) = Bx$. In this case, it is given that $B$ can be written as
$$B = \frac{\sum_{i=1}^{N} x_i y_i} {\sum_{i=1}^N x_i^2} \qquad(1)$$
Where I get stuck is in finding the formula for the error of $B$, given the error of $y$ and assuming the error in $x$ is negligibly small.
I'm using this general formula for propagation of error (for a function $B(x, y)$): $$\delta B = \sqrt{\left(\frac{\partial B}{\partial x} \delta x\right)^2 + \left(\frac{\partial B}{\partial y} \delta y\right)^2}\,, \qquad(2)$$
which, when $\delta x \approx 0$, simplifies to $$ \delta B = \left|\frac{\partial B}{\partial y}\right|\delta y \,. \qquad(3) $$
At this point, I'm not sure how I should go about partially differentiating the sums in the equation for $B$ in order to find $\delta B$. I know that the final answer should come out to $$ \delta B = \frac{\delta y}{\sqrt{\sum_{i=1}^{N} x_i^2}}\,, \qquad(4) $$ but I would love some guidance on how to get from formulas (1) and (3) to the final result (4). How does partial differentiation work when the formula contains sums?
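For what it's worth, I was able to convince myself that formula (4) is right by checking it numerically: treat each $y_i$ as noisy with standard deviation $\delta y$, recompute the slope (1) for many noisy realisations, and compare the spread of the resulting slopes against $\delta y/\sqrt{\sum x_i^2}$. A quick sketch (the data here is made up, not our actual measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "true" data on a line y = B_true * x.
B_true = 3.0e8
x = np.linspace(1e-9, 10e-9, 10)
y_true = B_true * x

dy = 0.05  # assumed standard deviation of each y_i

# Recompute the slope from formula (1) for many noisy realisations of y.
trials = 200_000
noisy_y = y_true + rng.normal(0.0, dy, size=(trials, x.size))
B_samples = noisy_y @ x / np.sum(x**2)

# These two numbers should agree (up to Monte Carlo noise):
print(B_samples.std())             # empirical spread of B
print(dy / np.sqrt(np.sum(x**2)))  # formula (4)
```

So (4) checks out numerically; what I'm after is the analytic route to it.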
EDIT (for clarification): We are working with a set of measurements $(x_i, y_i)$, where $y_i$ is a given distance of travel for a light pulse (in metres) and $x_i$ is the time the pulse takes to traverse that distance (in seconds), so that $y$ is a linear function of $x$ and $B$ should work out to the speed of light in m/s. The measurements of $x$ and $y$ are made in pairs, for example: $$ (x_1, y_1) = (3.3 \times 10^{-9}, 1)$$ $$ (x_2, y_2) = (6.4 \times 10^{-9}, 2)$$
$\delta y$ is the standard deviation (error) of the measurements of $y$.
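To make the setup concrete, plugging the two example pairs above into formula (1) already gives something close to the speed of light:

```python
import numpy as np

# The two example measurement pairs from above: x in seconds, y in metres.
x = np.array([3.3e-9, 6.4e-9])
y = np.array([1.0, 2.0])

# Least-squares slope through the origin, formula (1).
B = np.sum(x * y) / np.sum(x**2)
print(B)  # roughly 3.1e8 m/s
```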
Thanks in advance!