9
$\begingroup$

I've encountered many integrals that seem to integrate functions of distance with respect to mass, for example, $\int_0^Mr^2dm$ for the moment of inertia of continuous mass distribution.

I'm not sure what an integral of a function of distance with respect to mass means mathematically, but I have a couple of ideas.

$r$ is a function of mass: Then the integral can be expressed as $\int_0^Mr(m)^2dm$, which is definitely possible to integrate using calculus, however, I'm not sure what this would mean intuitively.

In calculus, I was taught that the single integral of a function could be thought of as the "area between the graph and the x-axis". More rigorously, this would be the supremum of the set of all possible lower Darboux sums, equal to the infimum of all possible upper Darboux sums, which is equal to the limit of a Riemann sum as the number of intervals approaches infinity. All of these definitions require the distance from the axis at some mass $m_0$ to be defined; however, I don't see how we'd define, say, $m=3$ or $r(3)$. Wouldn't there be multiple ways to define the location of the point where $m=m_0$ for any $m_0$ (I'm assuming the distance from the axis is determined based on the position of that mass)?

$dm$ is shorthand for another expression: This is the second thought I had: maybe $dm$ is shorthand for something like $\rho\, dr$, where $\rho$ is the density of the object. This idea came from the fact that in math, $ds$ is often used as shorthand for $\sqrt{u'(t)^2+v'(t)^2}\,dt$ or something similar in higher dimensions. If this is the case, the idea expressed by the integral makes much more sense, since integrating with respect to distance is much more intuitive than integrating with respect to mass.

But this idea still has a couple of issues. The limits of integration are defined in terms of mass, implying that the integral is actually with respect to mass, not distance (I've always seen shorthand like $ds$ used within integrals whose limits are in terms of the parameter, not $s$). Also, the idea of multiplying infinitesimals like algebraic variables seems somewhat non-rigorous; that is, it's not obvious what saying something like $dm=\rho\, dx$ means mathematically. I'm aware that solving integrals by substitution or solving differential equations by "multiplying both sides by $dx$" is possible; however, these are both notational conveniences for the chain rule, and there doesn't seem to be a similar mechanism at play here.

So what does integrating with respect to mass actually mean? Is it a combination of both of these ideas? Is it neither?

Any clarification would be greatly appreciated.

$\endgroup$
8
  • 2
    $\begingroup$ Does this answer your question? Understanding the differential in integrals $\endgroup$
    – knzhou
    Commented Jan 1, 2021 at 19:07
  • 2
    $\begingroup$ That $\mathrm{d}m$ thing is not worth the few millimeters of space it saves on a page. Furthermore, it hides, as you have noticed, that the integral is over a spatial domain. And I share your shrugged shoulders over what it means mathematically. It does nothing but confuse people. IMHO it should be dumped in the trash bin. $\endgroup$
    – garyp
    Commented Jan 1, 2021 at 19:23
  • 12
    $\begingroup$ Don't trust anyone who writes the integral $\int r^2\,\text dm$ with limits from $0$ to $M$. $\endgroup$ Commented Jan 1, 2021 at 19:41
  • 1
    $\begingroup$ @garyp, this form is probably presented to develop the moment of a distributed object from the formula for the moment of a collection of point masses. $\endgroup$
    – The Photon
    Commented Jan 1, 2021 at 21:01
  • 1
    $\begingroup$ $dm$ needs to be restated in terms of $r$, and the integral limits run from 0 to R. $\endgroup$ Commented Jan 2, 2021 at 22:58

7 Answers

7
$\begingroup$

Intuitive answer

To compute the moment of inertia "numerically," we can imagine discretizing our object into a set of point masses which all have the same, small mass. Let's say the total mass is $M$ and we break the object into $N$ discrete point masses; then each point mass will have mass $M/N$.

Then the moment of inertia can be calculated as a sum over the discrete masses

\begin{equation} I = \sum_{i=1}^N r_i^2 m_i = \sum_{i=1}^N r_i^2 \frac{M}{N} \end{equation}

Now to take the continuum limit, we want to take $N\rightarrow \infty$. The sum will turn into an integral

\begin{equation} I = \int {\rm d} m\ r^2 \end{equation}

This gives us an idea of what is going on: the integral refers to a sum over infinitesimal mass elements, and it tells us to sum, over all elements, the product of the infinitesimal mass of each element and the square of its distance from the axis of interest. Of course this "derivation" is a little sloppy, so let us be a bit more precise and increase our level of sophistication.
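This limiting process can be checked numerically. The following sketch is my own (the uniform rod and the choice of axis are assumptions, not part of the answer): a uniform rod of mass $M$ and length $L$, rotated about one end, has $I = ML^2/3$ exactly.

```python
# Discretize a uniform rod of mass M, length L into N equal point masses
# and compute I = sum r_i^2 * (M/N); it should approach M L^2 / 3.
M, L = 2.0, 3.0
exact = M * L**2 / 3   # = 6.0 for these values

def moment_of_inertia(N):
    """Sum r_i^2 * (M/N), placing each point mass at its segment's midpoint."""
    dm = M / N
    return sum(((i + 0.5) * L / N) ** 2 * dm for i in range(N))

for N in (10, 100, 10000):
    print(N, moment_of_inertia(N))   # approaches exact = 6.0 as N grows
```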

More rigorous answer

(Caveat: When I say "rigorous," I don't mean full mathematical rigor; more like introducing some words and describing things at a physics level of rigor.) You can interpret ${\rm d} m$ as being a measure. In other words, we have a function of space which tells us how much mass is located in each region of space.

Let's suppose we have a 3d object, and call this density function $\rho(\vec{x})$. Then the mass contained in a small region of space (our measure on space) is related to the density function by

\begin{equation} {\rm d} m = \rho(\vec{x}) {\rm d}^3 x \end{equation}

The moment of inertia integral becomes

\begin{equation} \int |\vec{x}|^2 {\rm d} m = \int |\vec{x}|^2 \rho(\vec{x}) {\rm d}^3 x \end{equation}

This is an unambiguous formulation of the integral that can be used in a calculation.
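For instance, here is a sketch of my own (the uniform disk is an assumed example, not from the answer) of this density formulation: a uniform disk of mass $M$ and radius $R$ about its center, where $\rho = M/(\pi R^2)$ and the exact answer is $\tfrac12 MR^2$.

```python
import math

# Riemann sum of r^2 * rho over a uniform disk in polar coordinates,
# where the area element is dA = r dr dtheta.
M, R = 1.0, 2.0
rho = M / (math.pi * R**2)       # uniform surface density
exact = 0.5 * M * R**2           # = 2.0 here

def disk_inertia(n):
    dr, dth = R / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr       # midpoint of the i-th radial cell
        for j in range(n):
            total += r**2 * rho * r * dr * dth   # r^2 dm, dm = rho r dr dtheta
    return total

print(disk_inertia(200))         # ≈ 2.0
```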

Now some philosophical comments:

One advantage of the ${\rm d}m$ notation is that it does not commit to what kind of region you are integrating over. The expression $\int r^2 {\rm d}m$ is correct whether we are dealing with a 1-dimensional rod, a 2-dimensional disk, or a 3-dimensional object. Of course, to actually calculate the integral in practice, we need to relate ${\rm d}m$ to a line, surface, or volume integral using the appropriate density function.

Therefore the notation is useful as an abstract way to express the moment of inertia, but is not useful as a starting point for a calculation without adding additional information.

$\endgroup$
0
3
$\begingroup$

I think of it according to your second description, but there are a couple of (maybe meaningless) subtleties.

maybe $dm$ is shorthand for something like $\rho\,dr$

Indeed, you will find the moment of inertia expressed as

$$\int_V \rho(\vec{r})\left|\vec{r}\right|^2dV$$

in other contexts.

it's not obvious what saying something like $dm=\rho\, dx$ means mathematically.

Imagine you're building your object from a bunch of tiny bricks.

You could have a bunch of bricks all the same size, but different density and thus different mass per brick. Or you could have bricks all the same mass, but different size (but somehow all fitting together perfectly, a feat that might depend on the shape and mass distribution of the object).

Either way, once you put enough bricks together you get the complete object, and its moment of inertia will be the same whichever kind of bricks you use to build it.

Describing mathematically how to take the limit as the size (or mass) of the bricks shrinks toward 0 is left as an exercise.
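Numerically, both kinds of bricks do give the same answer. Here is a sketch of my own (the rod and its density are assumptions, not from the answer): a rod on $[0,1]$ with density $\rho(x) = 2x$, so $M = 1$, the accumulated mass is $m(x) = x^2$, and the exact moment about $x=0$ is $\int_0^1 x^2 \cdot 2x\,dx = 1/2$.

```python
import math

N = 100_000
rho = lambda x: 2 * x

# Equal-size bricks: same width dx, each of mass rho(x) dx.
dx = 1.0 / N
I_size = sum(rho((i + 0.5) * dx) * ((i + 0.5) * dx) ** 2 * dx for i in range(N))

# Equal-mass bricks: same mass dm, located where the accumulated mass
# m(x) = x^2 reaches each level, i.e. at x = sqrt(m).
dm = 1.0 / N
I_mass = sum(math.sqrt((i + 0.5) * dm) ** 2 * dm for i in range(N))

print(I_size, I_mass)   # both ≈ 0.5
```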

$\endgroup$
2
$\begingroup$

You're right, and this confused me a lot too, because in particular $r$ may not be a proper function of $m$; i.e., it may be (and often is) multivalued (one-to-many), in that there are multiple radii at which the same mass $m$ would exist, rendering the usual idea of integration problematic.

And what I came to is this, which amounts to a long comment on calculus, as that's really where I feel the problem lies: how calculus is often taught, versus how it ends up working in practice. More specifically, the problem is that the integral notation hides what is really going on. You see, in a way, either you can say this notation

$$\int r^2\ dm$$

is a lie (note the lack of bounds, too!), or you can say that the way we are typically told we should write integrals

$$\int_{a}^{b} f(x)\ dx$$

is a kind of lie, and it's this latter one that I'd like to talk about here. But in either case, there's some lying going on. And what I want to do here is motivate, with a long spiel, another notation that we could use to make clear what is really going on.

In particular, the "$d$" thing is not actually simply something that designates "what variable you integrate with respect to". That interpretation is too simplistic and doesn't cover all the use cases, as you've just encountered.

It's hard to give a formalism for just what the "$d$" thing is, and there are quite a variety of formalisms available - for example, one of these is what is called differential forms, another is called non-standard analysis, a third is called measure theory - so without going into too many details, what I would say is to stick to an intuitive definition only for now, which after all is important because otherwise the formalism will seem like dry rules:

$dx$ (and $dm$, etc.) are increments, or functions thereof.

You see, an integral

$$\int_{a}^{b} f(x)\ dx$$

really is not too much different from a sum, i.e. from this:

$$\sum_{n=a}^{b} f(n)$$

That's why we use that funny symbol $\int$. It's an old script and italicized "s", for "sum". Just as $\sum$ is the Greek letter sigma, a Greek version of "s", again, "sum". If you look at some historical English writings, you may encounter the $\int$ symbol used as a letter.

The difference is that, while a sum adds up for each integer from $a$ to $b$ inclusive, an integral adds up for each real number from $a$ to $b$ inclusive. It doesn't add $f(x)$ at each $x \in [a, b]$, rather it adds $f(x)\ dx$ at each $x \in [a, b]$. $dx$ is a part of the thing being added up, not some auxiliary thereto.

The reason it's there is not to "just indicate what variable we integrate with respect to". You've just seen how this interpretation is too narrow and even leads one into trouble in practice. (For example, in integrals with multiple variables $dx$ and $dy$) Rather, to understand the need for it, let's take a closer look at this idea of "summing up at every real number".

The chief difference between the real numbers and whole numbers, for our purposes, is that any interval of the latter contains only finitely many, while any interval of the former contains uncountably infinitely many. And while we can add up a finite, or even countable, number of real summands and get a finite real result, we cannot add up an uncountable number of such summands and get a finite result unless all but countably many of them are zero. But that, of course, would be no different from a usual sum, so it's kind of trivial.

Hence, to get something non-trivial, we must create a third option: what if we add up a bunch of things that are so tiny, that they are smaller than any real number, yet not zero? Such things are not real numbers - hence all the different formalisms I just mentioned - but we will in effect add them up and pretend the output we get is a real number. That's why we use an $\int$, not a $\sum$ - the latter only takes real numbers in, the former takes these other things in and cranks real numbers out. But what are they?

Well, to make it more natural to see where this comes from, suppose I write a sum as

$$\sum_{n=a}^{b}\ f(n)\ \Delta n$$

where this is not a different notation. Instead, $\Delta n$ is just another numerical factor we've added in to the usual summation: the size of the increment as $n$ jumps from one whole number to the next. We just don't typically write it, because it is $1$. But note that if I put in the increment of another variable that was not changing, i.e.

$$\sum_{n=a}^{b}\ f(n)\ \Delta m$$

I'd get a sum of 0, because those increments are 0. Hence it seems "$\Delta(...)$" "selects" what variable we are integrating with respect to, but it isn't actually what determines it: what determines it is what's under the sum symbol.

In

$$\int_{a}^{b} f(x)\ dx$$

$dx$ is the increment as $x$ jumps from one real number to an adjacent real number - except there is no such thing, so we have to come up with some more complex formalism to try and pretend there is in some way, one of the most elementary being the Riemann (or Darboux, as you point out) sums, where the limit allows us to shrink a finite increment down. We use this increment as it provides a convenient damping factor. But there's no reason we could not use something else, provided it were also "suitably small" - without getting too deep into formalisms.

And so, my point here at the end is that we really should notate the variable of integration explicitly; i.e., we should not write

$$\int_{a}^{b}\ f(x)\ dx$$

but we should really write

$$\int_{x=a}^{b}\ f(x)\ dx$$

which makes the variable explicit, just as with sums. Now there's no problem! We could change out $dx$ with anything else, or even get rid of it altogether (in which case the integral will be infinite unless $f(x)$ is zero at all but a countable number of points).

In the case of

$$\int r^2\ dm$$

what we are integrating with respect to is NOT the variable $m$, but instead a point, $P$, within the region occupied by the object, $O$, a set of points in $\mathbb{R}^3$. Hence, we should really write

$$\int_{P \in O} r^2\ dm$$

Now $dm$ is an infinitely small amount that has nothing to do with the variable of integration. The variable of integration is really $P$, a point of the object. $dm$ is the point-sized amount of mass "just around" $P$, which would be zero but isn't quite. So we're adding up all the tiny, uncountably many moments of inertia $r^2\,dm$ contributed by each little bit of mass around each point $P$ in the object. And we add up all those little contributions at each point of $O$. If you want,

$$\int_{P \in O} [r(P)]^2\ dm(P)$$

would be even more explicit, as $dm$ is different at each particular point.

So you aren't "integrating with respect to mass". You're integrating with respect to a point within the object. What you are integrating, though, involves infinitely small masses at each point therein.

And

$$\int_{0}^{M} (...)$$

notation? That's silly. That's what happens when you take the "$d$ means 'what to integrate with respect to'" business too far, from this perspective.
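To make the point-based sum concrete, here is a numerical sketch of my own (the uniform square plate is an assumption, not part of this answer): approximate $\int_{P \in O} r^2\, dm$ for $O = [0,1]^2$ with total mass $1$, summing over grid points $P$ with $dm = \rho\, dA$ per cell. The exact value is $\int_O (x^2+y^2)\, dA = 2/3$.

```python
# Sum r(P)^2 * dm(P) over grid points P of a uniform unit square plate.
n = 400
rho = 1.0                    # uniform density; total mass = 1
dA = (1.0 / n) ** 2
I = 0.0
for i in range(n):
    x = (i + 0.5) / n
    for j in range(n):
        y = (j + 0.5) / n
        I += (x*x + y*y) * rho * dA   # dm(P) = rho dA around P
print(I)                     # ≈ 0.6667
```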

$\endgroup$
1
$\begingroup$

I think the easiest way to understand what is going on here is to imagine summing up some quantity over a bunch of point masses. Let's take the center of mass as an example. If we have two point masses in one dimension, the center of mass is given by:

$$x_{cm} = \frac{m_{1}x_{1} + m_{2}x_{2}}{m_{1} + m_{2}},$$

and it should be clear that as we add more and more masses, we get something like:

$$x_{cm} =\frac{\sum_{i}m_{i}x_{i}}{\sum_{i}m_{i}}$$

Often in physics, though, we want to handle these things for continuous distributions of matter, in which case we can think of the distribution as an infinite number of infinitesimal masses, which, in the language of calculus, means converting the sum into an integral:

$$x_{cm} = \frac{\int dm\,x}{M}$$

where $M$ is the total mass. In practice, we rarely work with x as a function of m, though, and instead, we have density functions, and write something like:

$$M = \int dx\, \rho,$$

which implies $dm = \rho\, dx$, and consequently we think of the center of mass equation as:

$$x_{cm} = \frac{\int dx \rho(x) x }{M}$$

In fact, if you know the Dirac delta function, you can write the first "sum" equation this way, with the density function:

$$\rho(x) = m_{1}\delta(x - x_{1}) + m_{2}\delta(x-x_{2})$$

whose integral will simply evaluate out to the first equation.
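As a concrete sketch (the rod and its density are my assumptions, not from the answer), here is $dm = \rho\,dx$ applied to the center-of-mass formula for a rod on $[0,2]$ with $\rho(x) = 1+x$; exactly, $M = 4$ and $x_{cm} = 7/6$.

```python
# x_cm = ∫ x rho(x) dx / M, evaluated by a midpoint Riemann sum.
N = 100_000
a, b = 0.0, 2.0
dx = (b - a) / N
rho = lambda x: 1 + x

xs = [a + (i + 0.5) * dx for i in range(N)]
M = sum(rho(x) * dx for x in xs)              # M = ∫ rho dx = 4
x_cm = sum(x * rho(x) * dx for x in xs) / M   # numerator uses dm = rho dx
print(M, x_cm)                                # ≈ 4.0, ≈ 1.1667
```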

$\endgroup$
2
  • $\begingroup$ "In practice, we rarely work with x as a function of m, though": Rarely? When is this ever done? Is it even possible to do this? What would $x(1\,\rm{kg})$ even mean, for example? $\endgroup$ Commented Jan 1, 2021 at 19:48
  • 3
    $\begingroup$ @BioPhysicist, the distance from the origin where you've accumulated 1 kg of mass. It's also the case that sometimes the $dm$ integral is simple enough that you dispense with the step of substituting in the density and converting it into a volume integral. $\endgroup$ Commented Jan 1, 2021 at 20:26
1
$\begingroup$

Mathematical Setup

So far, you have learnt about Riemann integration: a function $f:[a,b]\to\Bbb{R}$ is Riemann-integrable if it is bounded and the supremum of lower Darboux sums equals the infimum of upper Darboux sums, in which case the integral is defined to be this common value, and is denoted as $\int_a^bf$. This definition generalizes almost verbatim to $n$-dimensions. Anyway, I want to highlight the structure of things: notice that you have three pieces of information:

  1. An interval $[a,b]$ (this is your starting setup).
  2. A collection, $\mathcal{R}_{[a,b]}$, of functions $[a,b]\to\Bbb{R}$, called the Riemann-integrable functions (these are the functions which are "nice enough" to be assigned an integral).
  3. A linear mapping $\mathcal{R}_{[a,b]}\to\Bbb{R}$ which assigns to each Riemann-integrable function $f$ its Riemann integral $\int_a^bf$ (which you can easily prove is a linear mapping).

I'm guessing you haven't taken a course in Lebesgue integration with respect to arbitrary measures (because then this question becomes "almost obvious"). Let me try to outline the structure of this theory of integration; we again have three pieces of information:

  1. A measure space $(X,\mathcal{M},\mu)$. Here, $X$ is a set, $\mathcal{M}$ is a collection of "nice subsets" of $X$, and $\mu$ is what's called a measure (which is a function $\mathcal{M}\to [0,\infty]$). Very roughly, if $A\in\mathcal{M}$ (recall this means $A$ is a "nice" subset of $X$) then $\mu(A)$ is a number called the measure of $A$, and intuitively you should think of $\mu$ as a "measurement device" telling you "how big" the set $A$ is.

  2. A certain collection $\mathcal{L}^1(\mu)$ of "nice" functions $X\to\Bbb{R}$ (we can consider more general target spaces, but let's not bother now), called the Lebesgue-integrable functions on $X$, with respect to $\mu$. Most of the functions you encounter in an introductory physics class are going to be "nice enough" anyway, so I won't bother defining this space now.

  3. A linear mapping $\mathcal{L}^1(\mu)\to\Bbb{R}$ which assigns to each "nice function" $f\in \mathcal{L}^1(\mu)$ a specific real number, denoted as $\int_Xf\, d\mu$ or $\int_Xf(x)\, d\mu(x)$, called the Lebesgue integral of $f$ over $X$ with respect to the measure $\mu$ (again, I won't go into the details of how this is carefully defined unless you really want me to). The $d$ here is just to make things look as classical as possible; don't read too much into it here.

As you can see, from a technical perspective, Lebesgue integration is much more involved, but structurally it is essentially the same: there are three things at play, namely the general setup ($[a,b]$ vs $(X,\mathcal{M},\mu)$), the "nice collection of functions" ($\mathcal{R}_{[a,b]}$ vs $\mathcal{L}^1(\mu)$), and the integration mapping ($\int_a^b(\cdot)$ vs $\int_X(\cdot)\, d\mu$). The Lebesgue theory can be shown to be a generalization of the Riemann theory.


Using it in Physics

Now, given a body $B$, we can interpret its moment of inertia $I=\int_Br^2\, dm$ using the Lebesgue integral as follows. When I speak of a body, I really mean a subset $B\subset \Bbb{R}^3$ (if you want to be technical, then ok, we can require this to be either Lebesgue measurable or Borel measurable). Next, we have a measure $m_B$ (defined on either the Lebesgue or Borel $\sigma$-algebra of $B$). Recall from above that measures should intuitively be thought of as "measurement devices telling us how big sets are"; also, by indicating the subscript $B$, I'm trying to emphasize that this is very much dependent on the specific body we're interested in.

For example, if $B$ is a solid cube of unit side-length, having uniform mass density of $\rho$, then by definition for any (Lebesgue/Borel measurable) subset $A\subset B$, we put $m_B(A):= \rho \cdot \text{volume}(A)$. So in this setting, the way $m_B$ measures size of a given set $A$ is relative to the density $\rho$ and the volume of the set $A$ in question. So, specializing some more, we have $m_B(B)=\rho \cdot \text{volume}(B)$; i.e the mass of the body in question is its density times its volume (of course, I defined things precisely to make this true).

Finally, we're considering the norm function $r:\Bbb{R}^3\to\Bbb{R}$, $r(p):= \lVert p\rVert:= \sqrt{p_1^2+p_2^2+p_3^2}$, i.e the Euclidean length of $p$. With this, the symbol $\int_Br^2\, dm_B$ makes perfect sense. We're integrating the function $r^2$ (which is a mapping $\Bbb{R}^3\to \Bbb{R}$, taking each $p\in\Bbb{R}^3$ to its squared length $r(p)^2 = p_1^2+p_2^2+p_3^2$) over the set $B$ with respect to the measure $m_B$. Thus, the symbol $\int_Br^2\, dm$ can be interpreted precisely using the concept of a Lebesgue integral.


A Few Remarks

Typically, one starts with a (Lebesgue/Borel measurable) set $B\subset \Bbb{R}^3$, and considers a function $\rho:\Bbb{R}^3\to[0,\infty)$ such that $\rho|_{B^c}=0$, i.e. $\rho(x)=0$ for all $x\notin B$. This is what we refer to as the "density of $B$", so of course the interpretation is that for each $x\in\Bbb{R}^3$, $\rho(x)\geq 0$ is a number telling us the density of the body $B$ at the point $x$ (this is why if $x\notin B$, we require $\rho(x)=0$... because if there is no body there, we should assign the density to be zero). From this density, we can construct a measure $m$ as follows: for any $A\subset \Bbb{R}^3$, we define $m(A):=\int_A \rho\, dV$ (where $dV$ stands for integration with respect to volume, or more precisely, the Lebesgue measure on $\Bbb{R}^3$). This is often notationally denoted as $dm=\rho \,dV$; again, the $d$ has no independent meaning in this context. Its sole purpose is to invoke a sense of familiarity with classical notation. The meaning of the equation $dm=\rho\,dV$ is that for all (measurable) sets $A$, $m(A)=\int_A\rho\,dV$ (i.e. the measure $m$ is defined by integrating $\rho$ with respect to Lebesgue measure).

Now, for any "nice" function $f:\Bbb{R}^3\to\Bbb{R}$, we can consider the Lebesgue integral $\int_{B}f\, dm$, and we can show that (if you carefully unwind the definition of Lebesgue integral, and use things like monotone convergence theorem) this equals $\int_{B}f\rho\, dV$, i.e \begin{align} \int_{B}f\,dm &= \int_{B}f\rho \, dV \equiv \int_{B}f(x)\rho(x)\, dV(x). \end{align} Physically, the way we interpret this equation is that integration of a function $f$ "with respect to mass" (which has a precise meaning as a Lebesgue integral) can be equivalently thought of as a "weighted integral" of $f$ times the density $\rho$ with respect to the usual volume integral. You may have encountered things like weighted integrals/ weighted averages in probability or some other context... well this is precisely that.
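A numerical sketch of this weighted-integral identity (my own construction; the cube and density are assumptions, not from the answer): take $B = [0,1]^3$ with density $\rho(x,y,z) = x+y+z$ and $f = r^2 = x^2+y^2+z^2$; then $\int_B f\,dm = \int_B f\rho\,dV = 7/4$ exactly, which a Monte Carlo estimate reproduces.

```python
import random

# Monte Carlo estimate of ∫_B f rho dV over the unit cube B = [0,1]^3.
# Since vol(B) = 1, the sample mean of f*rho approximates the integral.
random.seed(0)

def rho(x, y, z):
    return x + y + z

def f(x, y, z):
    return x*x + y*y + z*z

n = 200_000
acc = 0.0
for _ in range(n):
    x, y, z = random.random(), random.random(), random.random()
    acc += f(x, y, z) * rho(x, y, z)
estimate = acc / n
print(estimate)        # ≈ 1.75
```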

Almost every example you will encounter will be of this type: you start with some $\rho$ (or you have a word problem which tells you how to calculate $\rho$, eg you have a cone of total mass $M$, base radius $R$, height $H$, whose mass density increases linearly as you go radially outward) and from this you use the formula above to calculate integrals with respect to mass. The right-most integral is now a simple exercise in multivariable integration; you may have to parametrize the set $B$ appropriately, use symmetries carefully if applicable, use Fubini's theorem etc, or any other trick to evaluate the integral.

However, there are some examples where this is not the case. It is possible to have measures which cannot be "represented as a density", i.e. there exist measures $m$ such that there is no $\rho:\Bbb{R}^3\to\Bbb{R}$ with $m(A)=\int_A\rho\, dV$ for all sets $A$. The most famous of these examples is the Dirac measure $\delta_0$, which assigns $\delta_0(A)=0$ if $0\notin A$ and $\delta_0(A)=1$ if $0\in A$. The mathematical way of saying this is that there are some measures on $\Bbb{R}^3$ which are not absolutely continuous with respect to the Lebesgue measure on $\Bbb{R}^3$, and hence have no Radon-Nikodym derivative with respect to Lebesgue measure.

Also, a question you may have thought of is "how to rigorously define the density of a body"; you may have seen something like $\rho = \frac{dm}{dV}$, and you may have wondered how is this defined precisely (this is obvious in some simple cases, but not always). Well, the answer again is related to the concept of Radon-Nikodym derivative.


Of course, this answer is definitely on the more mathematical side, but hopefully it gives you a few buzzwords to look up, and gives you something to look forward to in future more advanced analysis classes, where you can then relate the more abstract concepts in analysis down to something very physical.

$\endgroup$
0
$\begingroup$

An integral over mass is in reality an integral over volume, weighted by the mass density $\rho$. This means an integral over every particle of a solid.

$$ {\rm d}m = \rho\, {\rm d}V \tag{1} $$

Consider a solid whose extent is described by a position vector $\vec{\rm pos}(u,v,w)$ which is a function of three parameters. For example a cylinder would have $\vec{\rm pos}(r,\theta,z)$.

Each of the three parameters has limits that bound the solid. This completes the mathematical definition of the solid, and we can name these limits with indices 1 and 2.

Definition of a Solid

The position of each particle in a solid is defined in terms of three parameters

$$ \begin{aligned} \vec{\rm pos}(u,v,w) & = \pmatrix{x \\ y \\z} \\ u & = u_1 \ldots u_2 \\ v & = v_1 \ldots v_2 \\ w & = w_1 \ldots w_2 \\ \end{aligned} \tag{2} $$

keeping with the cylinder example

$$ \begin{aligned} \vec{\rm pos}(r,\theta,z) & = \pmatrix{r \cos \theta \\ r \sin \theta \\ z} \\ r & = 0 \ldots R \\ \theta & = 0 \ldots 2\pi \\ z & = -\tfrac{\ell}{2} \ldots \tfrac{\ell}{2} \\ \end{aligned} $$

Integration over solid

To find the infinitesimal volume as a function of the parameters $(u,v,w)$ you need to do some vector algebra.

$${\rm d}V = \underbrace{ \frac{\partial \vec{\rm pos}}{\partial u} \cdot \left( \frac{\partial \vec{\rm pos}}{\partial v} \times \frac{\partial \vec{\rm pos}}{\partial w} \right) }_{f(u,v,w)}\; {\rm d}u\,{\rm d}v\,{\rm d}w\ \tag{3} $$

For the cylinder you will find $f(r,\theta,z)=r$ or ${\rm d}V = r\;{\rm d}r\,{\rm d}\theta\,{\rm d}z$ from (3).

Depending on the parameterization of the solid, it is important to establish the scalar volume function

$$ f(u,v,w) = \frac{\partial \vec{\rm pos}}{\partial u} \cdot \left( \frac{\partial \vec{\rm pos}}{\partial v} \times \frac{\partial \vec{\rm pos}}{\partial w} \right) \tag{4} $$

Mass Properties

The integrals over volume can be used to calculate a variety of items

$$ \begin{array}{r|l} \text{quantity} & \text{integral} \\ \hline \text{volume} & V = \int \limits_{w_1}^{w_2} \int \limits_{v_1}^{v_2} \int \limits_{u_1}^{u_2} f\, {\rm d}u\,{\rm d}v\,{\rm d}w \\ \hline \text{mass} & m = \int \limits_{w_1}^{w_2} \int \limits_{v_1}^{v_2} \int \limits_{u_1}^{u_2} \rho\, f\, {\rm d}u\,{\rm d}v\,{\rm d}w \\ \hline \text{center of mass} & \vec{\rm cm} = \tfrac{1}{m} \int \limits_{w_1}^{w_2} \int \limits_{v_1}^{v_2} \int \limits_{u_1}^{u_2} \rho\,f\, \vec{\rm pos} \, {\rm d}u\,{\rm d}v\,{\rm d}w \\ \hline \text{MMOI tensor} & \mathrm{I} = \int \limits_{w_1}^{w_2} \int \limits_{v_1}^{v_2} \int \limits_{u_1}^{u_2} \rho\,f \pmatrix{y^2+z^2 & -x y & -x z \\ -x y & x^2+z^2 & -y z \\ -x z & -y z & x^2+y^2} \, {\rm d}u\,{\rm d}v\,{\rm d}w \\ \end{array} \tag{5}$$

Again, with the example of the cylinder, apply the above to get

$$ \begin{array}{r|l} \text{quantity} & \text{integral} \\ \hline \text{volume} & V = \pi R^2 \ell \\ \hline \text{mass} & m = \rho \pi R^2 \ell \\ \hline \text{center of mass} & \vec{\rm cm} = \pmatrix{0\\0\\0} \\ \hline \text{MMOI tensor} & \mathrm{I} = \pmatrix{ \frac{\pi R^2 \rho \ell (3 R^2+\ell^2)}{12} & & \\ & \frac{\pi R^2 \rho \ell (3 R^2+\ell^2)}{12} & \\ & & \frac{\pi R^4 \rho \ell}{2} } \end{array} $$

The mass moment of inertia tensor (last row above) is defined about the origin and in terms of density. To convert it to the more conventional formula in terms of mass, substitute the density from the mass equation in row #2, $m = \rho \pi R^2 \ell$:

$$ \mathrm{I} = \pmatrix{ \tfrac{m}{12}(\ell^2+3 R^2) & & \\ & \tfrac{m}{12}(\ell^2+3 R^2) & \\ & & \tfrac{m}{2}(R^2) } $$
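The $zz$ entry of the MMOI table can be spot-checked numerically. This is a sketch of mine (not part of the answer), using the cylindrical parameterization with Jacobian $f(r,\theta,z)=r$.

```python
import math

rho, R, ell = 1.0, 1.0, 2.0
m = rho * math.pi * R**2 * ell       # mass from row 2 of the table
exact_Izz = 0.5 * m * R**2           # = pi for these values

# I_zz = ∫∫∫ rho * r^2 * r dr dtheta dz; the theta and z integrals are
# trivial here, so only the radial sum is done numerically.
n = 400
dr = R / n
Izz = 0.0
for i in range(n):
    r = (i + 0.5) * dr               # midpoint of the i-th radial cell
    Izz += rho * r**2 * r * dr * (2 * math.pi) * ell
print(Izz, exact_Izz)                # both ≈ 3.1416
```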

$\endgroup$
0
$\begingroup$

A geometric interpretation of this integral can be supplied by the Stieltjes integral. But first, here is a Riemann approach: suppose a line segment (an idealized rod) from $0$ to $L$ on the x-axis. Define a "mass function" $m:[0,L]\to\Bbb R^+$ which, for every $x\in [0,L]$, gives us the mass $m(x)$ contained in the segment from $0$ to $x$. The mass function is increasing, for we suppose that we have mass everywhere in the rod. This means that the mass function is invertible, and its inverse is $x=x(m):[0,M]\to[0,L]$, where $m(0)=0$ (we have no mass from $0$ to $0$) and $m(L)=M$, the whole mass of the rod. Given a partition of the "mass segment" $[0,M]$, $P=\{0=m_0,m_1, ..., m_n=M\}$, and a set of intermediate points $E_P=\{m_1^*,...,m_n^* \}$, we acquire the sums $$ \sum_{i=1}^nx(m_i^*)(m_{i}-m_{i-1})\approx\int_0^Mx\,dm.$$

If we desire the more formal approach via the Stieltjes integral, we can proceed like this: Assume the mass function as above and the identity function on the rod $id:[0,L]\to\Bbb R$. Given a partition of the line segment $[0,L]$, $P=\{0=x_0,x_1, ..., x_n=L\}$, and a set of intermediate points $E_P=\{x_1^*,...,x_n^* \}$, by the Stieltjes definition we get $$ \sum_{i=1}^nid(x_i^*)(m(x_{i})-m(x_{i-1}))=\sum_{i=1}^nx_i^*(m(x_{i})-m(x_{i-1}))\approx\int_0^Lid(x)dm(x)=\int_0^Lxdm.$$
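Both sums are easy to check numerically. In this sketch of my own (the specific mass function is an assumption, not from the answer), take $m(x) = x^2$ on a rod of length $L = 1$, so $M = 1$, $x(m) = \sqrt{m}$, and $\int_0^L x\,dm = \int_0^1 2x^2\,dx = 2/3$.

```python
import math

N = 100_000
m = lambda x: x * x          # mass accumulated on [0, x]
x_of_m = math.sqrt           # inverse of the mass function

# Riemann sum over a partition of the mass interval [0, M]
dm = 1.0 / N
riemann = sum(x_of_m((i + 0.5) * dm) * dm for i in range(N))

# Stieltjes sum over a partition of the length interval [0, L]
dx = 1.0 / N
stieltjes = sum(((i + 0.5) * dx) * (m((i + 1) * dx) - m(i * dx)) for i in range(N))

print(riemann, stieltjes)    # both ≈ 0.6667
```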

$\endgroup$
