The logic 101 view
In logic as it's typically taught to mathematicians (and as it's actually done in a lot of work on logic itself), a universal quantifier only introduces a variable: it doesn't give it a domain. So you don't write things like $\forall n \in \mathbb{N}, P(n)$, but $\forall n, n \in \mathbb{N} \Rightarrow P(n)$ (which is parsed as $\forall n, (n \in \mathbb{N} \Rightarrow P(n))$ — I'll generally leave out these extra parentheses in my answer). In this framework, $\forall n \in \mathbb{N}, P(n)$ is considered an abbreviation for $\forall n, n \in \mathbb{N} \Rightarrow P(n)$.
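Proof assistants take this abbreviation literally. A small sketch in Lean 4, using a list as the domain (the bounded-quantifier notation for membership is built in; the names here are mine):

```lean
-- `∀ n ∈ l, P n` is notation that unfolds to `∀ n, n ∈ l → P n`,
-- so the two forms are the same proposition by definition.
example (l : List Nat) (P : Nat → Prop) :
    (∀ n ∈ l, P n) ↔ (∀ n, n ∈ l → P n) :=
  Iff.rfl
```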
And so if you're quantifying over an integer $n$ and over a sequence of length $n$, you end up with the statement
$$
\forall n, n \in \mathbb{N} \Rightarrow \bigl(
\forall X, X \in \{(X_i, \mathcal{T}_i)_{i=1}^n\} \Rightarrow
P(X)
\bigr)
$$
The universal quantifiers are not adjacent, so you can't swap them using the quantifier commutation rule.
This is very limiting: even in simple cases like $\forall x \in \mathbb{R}, \forall y \in \mathbb{R}, P(x,y)$, you can't swap the quantifications over $x$ and $y$, since the expanded version is $\forall x, x \in \mathbb{R} \Rightarrow (\forall y, y \in \mathbb{R} \Rightarrow P(x,y))$ which is not the same thing as $\forall x, \forall y, x \in \mathbb{R} \Rightarrow (y \in \mathbb{R} \Rightarrow P(x,y))$. It's “obvious” that all of this is equivalent to $\forall (x,y) \in \mathbb{R}^2, P(x,y)$, but things in math are not obvious until proved, so how would we prove it?
The details of the proof would depend on a specific formulation of logic, and I won't go into all the details here. However, there is another important theorem, to add to the theorem “you can swap universal quantifiers”: you can lift a universal quantifier outside the right-hand side of an implication. That is,
$$
P_0 \Rightarrow \forall y, P_1(y)
\quad \text{ is equivalent to } \quad
\forall y, P_0 \Rightarrow P_1(y)
$$
where the variable $y$ can appear in $P_1(y)$, but not in $P_0$.
The reason $y$ may not appear in $P_0$ is that on the left-hand side, $P_0$ is outside the scope of the $\forall y$ quantifier. It's a fundamental syntactic property that a variable cannot escape its scope: the scope is what gives a meaning to the variable.
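This lifting theorem is mechanical to verify. Here is a sketch in Lean 4, with an arbitrary type `α` standing in for the domain of $y$:

```lean
-- Since P₀ does not mention y, the quantifier moves freely across
-- the implication arrow: both directions just reorder arguments.
example {α : Type} (P₀ : Prop) (P₁ : α → Prop) :
    (P₀ → ∀ y, P₁ y) ↔ (∀ y, P₀ → P₁ y) :=
  ⟨fun h y hp => h hp y, fun h hp y => h y hp⟩
```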
Now we can tackle the simple two-variable example above and transform propositions into equivalent propositions:
$$
\begin{array}{ll}
\forall x \in \mathbb{R}, \forall y \in \mathbb{R}, P(x,y) \\
\forall x, x \in \mathbb{R} \Rightarrow \bigl( \forall y, y \in \mathbb{R} \Rightarrow P(x,y) \bigr) & \text{expanding notation} \\
\forall x, \forall y, x \in \mathbb{R} \Rightarrow (y \in \mathbb{R} \Rightarrow P(x, y)) & \text{by the forall/implication lifting theorem} \\
\forall y, \forall x, x \in \mathbb{R} \Rightarrow (y \in \mathbb{R} \Rightarrow P(x, y)) & \text{by the forall swapping theorem} \\
\forall y, \forall x, y \in \mathbb{R} \Rightarrow (x \in \mathbb{R} \Rightarrow P(x, y)) & \text{by propositional logic} \\
\forall y, y \in \mathbb{R} \Rightarrow \bigl( \forall x, x \in \mathbb{R} \Rightarrow P(x,y) \bigr) & \text{by the forall/implication lifting theorem} \\
\forall y \in \mathbb{R}, \forall x \in \mathbb{R}, P(x,y) & \text{contracting notation} \\
\end{array}
$$
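If you trust a proof assistant, the whole chain can be checked as a single equivalence. A sketch in Lean 4, where a predicate `R` plays the role of membership in $\mathbb{R}$:

```lean
-- R x plays the role of "x ∈ ℝ". The chain of rewrites above
-- collapses to a reordering of the arguments of the proof term.
example {α : Type} (R : α → Prop) (P : α → α → Prop) :
    (∀ x, R x → ∀ y, R y → P x y) ↔ (∀ y, R y → ∀ x, R x → P x y) :=
  ⟨fun h y hy x hx => h x hx y hy, fun h x hx y hy => h y hy x hx⟩
```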
Now let's go back to your example. What goes wrong if we try to apply the reasoning above to $\forall n \in \mathbb{N}, \forall X \in \{(X_i, \mathcal{T}_i)_{i=1}^n\}, P(X)$? Everything is fine until we get to
$$
\forall X, \forall n, X \in \{(X_i, \mathcal{T}_i)_{i=1}^n\} \Rightarrow \bigl( n \in \mathbb{N} \Rightarrow P(X) \bigr)
$$
Now we would like to apply the lifting theorem in the converse direction. But the property that we want to “de-lift” $\forall n$ under is $X \in \{(X_i, \mathcal{T}_i)_{i=1}^n\}$, in which $n$ appears. The theorem requires a predicate $P_0$ here where $n$ does not appear. So the lifting theorem does not apply.
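Dependent types (more on these below) make this failure concrete: the swapped statement is not merely unprovable, it is ill-formed. A sketch in Lean 4, using `Fin n` (the natural numbers below $n$) as a stand-in for the $n$-dependent domain:

```lean
-- The original order is well-formed: n is in scope when Fin n is written.
example (P : (n : Nat) → Fin n → Prop) : Prop :=
  ∀ n : Nat, ∀ v : Fin n, P n v

-- The swapped order is rejected by the type checker:
-- example (P : (n : Nat) → Fin n → Prop) : Prop :=
--   ∀ v : Fin n, ∀ n : Nat, P n v   -- error: unknown identifier 'n'
```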
Bonus: on universal quantification and implication
There's a notation that combines a universal quantifier with an adjacent implication. There's a theorem that says you can swap adjacent universal quantifiers. There's a theorem that says you can swap a universal quantifier with an adjacent implication. Is there some kind of more general concept where these would all be special cases of a general theorem?
Well, yes. Otherwise I wouldn't have presented things this way. In fact, I'm going to describe two generalizations.
A universal quantifier is a kind of implication from an infinite conjunction. That is, $\forall x, P(x)$ can be seen as the conjunction $\bigwedge_x P(x)$, where the conjunction is over all possible $x$. Now, in general, you can't expect infinite formulas to have all the same properties as finite formulas. But, at least when doing logic over a finite space, propositional logic will definitely apply. And there is a wide range of cases where theorems that apply to all finite models are true of infinite models as well (but I won't get into what this means precisely). Assuming the rules of propositional logic apply:
- Forall swap: $\left( \forall x, \forall y, P(x,y) \right) \Longleftrightarrow \left( \bigwedge_x \bigwedge_y P(x,y) \right) \Longleftrightarrow \left( \bigwedge_{x,y} P(x,y) \right) \Longleftrightarrow \left( \bigwedge_y \bigwedge_x P(x,y) \right) \Longleftrightarrow \left( \forall y, \forall x, P(x,y) \right)$ (by re-grouping the terms of the conjunction, since it's associative and commutative)
- Forall/implication lift: $\left( P_0 \Rightarrow \forall y, P_1(y) \right) \Longleftrightarrow \left( P_0 \Rightarrow \bigwedge_y P_1(y) \right) \Longleftrightarrow \left( \bigwedge_y (P_0 \Rightarrow P_1(y)) \right) \Longleftrightarrow \left( \forall y, P_0 \Rightarrow P_1(y) \right)$ (using a distributivity law: $A \Rightarrow (B \wedge C)$ is equivalent to $(A \Rightarrow B) \wedge (A \Rightarrow C)$)
An implication is a kind of universal quantification. “A implies B” can be seen as “no matter how you prove A, B is true”. (But you do have to prove A somehow, otherwise you can't conclude that B is true.) In other words, $A \Rightarrow B$ is equivalent to $\forall \pi \in \mathscr{P}(A), B$ where $\mathscr{P}(A)$ is the class of all proofs of $A$. (Note that here, I'm treating the $\forall x \in S$ notation as primitive, rather than an abbreviation. More on this in the section about types below.) (A “class” is a generalization of a set; the details don't matter here.) So the forall/implication lifting theorem states that $\forall \pi_0 \in \mathscr{P}(P_0), \forall y, P_1(y)$ is equivalent to $\forall y, \forall \pi_0 \in \mathscr{P}(P_0), P_1(y)$ — and this is just a matter of swapping the adjacent universal quantifiers.
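Type theory takes this view literally. In Lean 4, for instance, `A → B` is definitionally a universal quantification over (proofs of) `A` (a sketch):

```lean
-- `A → B` and `∀ _ : A, B` are the same term: an implication is a
-- non-dependent universal quantifier over proofs of A.
example (A B : Prop) : (A → B) ↔ (∀ _ : A, B) :=
  Iff.rfl
```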
The typed view
Above, I presented the way foundational mathematics is traditionally done, with quantifiers applying to a variable. But that's not how day-to-day mathematics is done: we pretty much always write “for all $x$ in a given set”, not just “for all $x$”. Sometimes we don't bother writing the set explicitly, when it's obvious from the context or the notation: for example, when doing real analysis, we write or say “for all $x$” and it's implicit that $x$ is a real number. And we write or say “for all $n$” and it's implicit that $n$ is an integer, or even that it's a non-negative integer. But always, if we take the time to write things down precisely, we'll end up with $\forall x \in \mathbb{R}$ or $\forall n \in \mathbb{N}$.
Intuitively, every variable has a domain. Sometimes this domain is restricted. For example, in real analysis, we might want to say “for any nonzero real number $x$”, and we write $\forall x \in \mathbb{R}^*$. The variable $x$ has an “overall domain”, which is all the real numbers, and a specific domain for this particular proposition, which is the set of nonzero real numbers. Instead of $\forall x \in \mathbb{R}^*, P(x)$, we might write $\forall x \in \mathbb{R}, x \ne 0 \Rightarrow P(x)$.
This notion of overall domain can be formalized in typed logics. Typed logics are mostly taught to computer scientists, because they have a lot of connections with programming. But they're also of interest to logicians, and they explain how everyday mathematical logic works.
In a typed logic, every variable has a type. When a quantifier introduces a new variable, it needs to specify the type of this variable. Intuitively speaking, the type of a variable is the set from which it can be taken. (When you actually study the theory, a type is not necessarily a set — but that aspect is not important here.)
The notation for specifying the type of a variable is $x:T$ where $x$ is the variable and $T$ is the type. So, for example, a universal quantification is $\forall x:T, \ldots$. This is pretty much equivalent to the everyday mathematical notation $\forall x \in T, \ldots$ since for our purposes here, we can pretend that types are sets.
The statement in the question is:
$$
\forall n : \mathbb{N},
\forall X : \{(X_i, \mathcal{T}_i)_{i=1}^n\},
P(X)
$$
Notice how the type of $X$ uses the variable $n$. The technical term for this is a dependent type — a type where a variable appears.
In a type theory, it is a theorem that you can swap adjacent universal quantifiers, as long as their types are independent. That is,
$$
\left( \forall x:T, \forall y:U, P \right)
\quad \text{is equivalent to} \quad
\left( \forall y:U, \forall x:T, P \right)
$$
provided that $x$ does not appear in $U$ and $y$ does not appear in $T$. The reason for the independence requirement is that if, for example, $x$ appeared in $U$, then the right-hand formula would have $x$ outside the scope of the quantifier that introduces it. And a variable is defined by its scope. It's a syntax error to use a variable outside its scope. (Or, depending on exactly how you model variable scopes, if the variable name $x$ appears outside the scope that defines $x$, then the occurrence of $x$ outside the scope is a different variable, and using the same name for different variables is valid but confusing to the reader.)
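As with the untyped version, the proof is just a reordering of arguments, and the independence requirement is enforced by the type checker itself. A sketch in Lean 4:

```lean
-- T and U do not mention each other's variables, so the swap is direct.
example {T U : Type} (P : T → U → Prop) :
    (∀ x : T, ∀ y : U, P x y) ↔ (∀ y : U, ∀ x : T, P x y) :=
  ⟨fun h y x => h x y, fun h x y => h y x⟩
```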
In your example, the universal quantifier swap theorem does not apply since the type of $X$ depends on $n$.
I recommend reading up on type theory if you're interested in computer science or programming. If your focus is mathematics (and not specifically logic), type theory won't really help you directly (it doesn't particularly help with algebra, analysis, geometry, etc., outside of rare advanced intersections with logic), but it does provide (in my opinion) better foundational intuition for many aspects of everyday mathematical reasoning than untyped logic.
Bonus: other quantifiers
You can swap adjacent existential quantifiers: $\exists x, \exists y, P(x,y)$ is equivalent to $\exists y, \exists x, P(x,y)$. This is true even with domains or types: $\exists x \in A, \exists y \in B, P(x,y)$ is equivalent to $\exists y \in B, \exists x \in A, P(x,y)$ — provided that $x$ doesn't appear in $B$ and $y$ doesn't appear in $A$ ($x$ and $y$ may not escape their respective scopes).
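A sketch of the existential swap in Lean 4: the proof just reshuffles the witness pair.

```lean
-- Swapping existentials amounts to reordering the witnesses.
example {α β : Type} (P : α → β → Prop) :
    (∃ x, ∃ y, P x y) ↔ (∃ y, ∃ x, P x y) :=
  ⟨fun ⟨x, y, h⟩ => ⟨y, x, h⟩, fun ⟨y, x, h⟩ => ⟨x, y, h⟩⟩
```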
A constraint on a universal quantifier is like an extra implication: $\forall x \in A, P(x)$ is equivalent to $\forall x, x \in A \Rightarrow P(x)$. A constraint on an existential quantifier is like an extra conjunct: $\exists x \in A, P(x)$ is equivalent to $\exists x, x \in A \wedge P(x)$. An existential quantifier is like an infinite disjunction: $\exists x, P(x)$ is like $\bigvee_x P(x)$.
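In Lean 4 the bounded existential is again defined by exactly this unfolding (a sketch, with a list as the domain):

```lean
-- `∃ n ∈ l, P n` unfolds to `∃ n, n ∈ l ∧ P n` by definition.
example (l : List Nat) (P : Nat → Prop) :
    (∃ n ∈ l, P n) ↔ (∃ n, n ∈ l ∧ P n) :=
  Iff.rfl
```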
It is not true in general that you can swap quantifiers. It's the case for the two common quantifiers ($\forall$ and $\exists$), but not for more “exotic” quantifiers. For example, some texts define $\exists!$ as the “exists-unique” quantifier: $\exists! x, P(x)$ means that $\exists x, P(x)$ and $\forall x, \forall y, (P(x) \wedge P(y)) \Rightarrow x = y$. The propositions $\exists! x, \exists! y, P(x,y)$ and $\exists! y, \exists! x, P(x,y)$ are not equivalent. For instance, if $P(x,y)$ holds exactly for $(x,y) \in \{(0,0), (0,1), (1,2)\}$, then $\exists! x, \exists! y, P(x,y)$ is true (only $x = 1$ admits a unique $y$), but $\exists! y, \exists! x, P(x,y)$ is false (each of $y = 0, 1, 2$ admits a unique $x$).