I hope this is not a double-post, but the other threads couldn't help me:
In my calculations of the differential cross section $\frac{d\sigma}{d\Omega}$, I always come out a factor of $\pi$ lower than the reference data. This is driving me NUTS. Since I have checked my code multiple times and found no mistake, my best guess is that I am misunderstanding some basic definition.
Here is my understanding:
In experimental physics, the differential cross section is defined as follows:
$$\frac{d\sigma}{d\Omega} = \frac{N}{F \cdot \rho \cdot \epsilon \cdot \Delta\Omega}$$
Where $N$ is the count of desired interactions, $F$ is the number of incoming particles, $\rho$ is the target area density in inverse microbarns, $\epsilon$ is the reconstruction efficiency and $\Delta\Omega$ is the solid angle element in which the particles are detected.
I suspect that I am making a mistake related to $\Delta\Omega$. I assume that the following equation holds:
$$\Delta\Omega = 2 \cdot \pi \cdot \Delta \cos(\theta_{CM}).$$
If my detector spans from $\cos(\theta_{CM})=0.9$ to $\cos(\theta_{CM})=1.0$, then $\Delta \cos(\theta_{CM})=0.1$, i.e. the range of $\cos(\theta_{CM})$ covered by the detector. That would imply $\Delta\Omega = 0.2\pi \approx 0.63$.
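To convince myself that $\Delta\Omega = 2\pi \, \Delta\cos(\theta_{CM})$ is right for this detector range (assuming full azimuthal coverage), here is a quick sanity check that compares the closed-form value against a direct numerical integration of $\sin\theta \, d\theta \, d\phi$; the variable names are my own:

```python
import math

# Solid angle covered from cos(theta) = 0.9 to 1.0, assuming full
# azimuthal (phi) coverage, computed two independent ways.
cos_min, cos_max = 0.9, 1.0

# 1) Closed form: dOmega = dphi d(cos theta), so
#    Omega = 2*pi * (cos_max - cos_min)
omega_formula = 2.0 * math.pi * (cos_max - cos_min)

# 2) Midpoint-rule integration of sin(theta) dtheta over the same
#    polar range, times 2*pi for the azimuth.
theta_max = math.acos(cos_min)  # smaller cos -> larger theta
theta_min = math.acos(cos_max)
n = 100_000
h = (theta_max - theta_min) / n
omega_numeric = 2.0 * math.pi * sum(
    math.sin(theta_min + (i + 0.5) * h) * h for i in range(n)
)

print(omega_formula)  # ~0.6283 (= 0.2*pi)
print(omega_numeric)  # agrees with the closed form
```

Both methods give $\approx 0.63$, so the formula itself seems fine for a detector with full $\phi$ coverage.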
Are these assumptions correct?
When data is given as $\frac{d\sigma}{d\cos(\theta)}$, I assume I can convert it to $\frac{d\sigma}{d\Omega}$ by multiplying by $\frac{1}{2\pi}$. Correct?
I am going crazy over this missing factor of $\pi$.