The method of moments is not simply about matching the first moment. The general strategy is to use as many moments as are needed to obtain a system of equations with a unique solution for the parameter. So in your case, because the first moment is zero, we use the second: $$\operatorname{E}[X^2] = \frac{\theta^2}{3},$$ hence we match the second raw sample moment to this: $$\frac{\hat \theta^2}{3} = \frac{1}{n} \sum_{i=1}^n X_i^2,$$ or $$\hat \theta = \sqrt{\frac{3}{n} \sum_{i=1}^n X_i^2}.$$

We could also match on the second central moment, better known as the variance: $$\operatorname{E}[(X - \operatorname{E}[X])^2] = \operatorname{Var}[X] = \frac{\theta^2}{3},$$ since the first moment is zero. This leads to a different estimator: $$\frac{\hat \theta^2}{3} = \frac{1}{n} \sum_{i=1}^n (X_i - \bar X)^2, \quad \text{or} \quad \hat \theta = \sqrt{\frac{3}{n} \sum_{i=1}^n (X_i - \bar X)^2}.$$ In practice, $\bar X$ will be close to zero, so the two estimators will be roughly the same; but notice that if $n = 1$, the first estimator is still defined, whereas the second gives $\hat \theta = 0$ (since $X_1 = \bar X$), which is not sensible.
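For concreteness, here is a minimal numerical sketch of both estimators, assuming $X \sim \operatorname{Uniform}(-\theta, \theta)$ (a distribution with $\operatorname{E}[X] = 0$ and $\operatorname{E}[X^2] = \theta^2/3$; the true value of $\theta$ and the sample sizes below are placeholders of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                                   # hypothetical true value
x = rng.uniform(-theta, theta, size=100)      # hypothetical sample

# Estimator from the second raw moment: sqrt((3/n) * sum(X_i^2))
theta_raw = np.sqrt(3 * np.mean(x**2))

# Estimator from the second central moment (variance about the sample mean)
theta_central = np.sqrt(3 * np.mean((x - x.mean())**2))

print(theta_raw, theta_central)   # nearly equal, since x.mean() is close to 0

# The degenerate case n = 1: the raw-moment estimator is still defined,
# but the central-moment estimator is identically 0 because X_1 = X-bar.
x1 = rng.uniform(-theta, theta, size=1)
print(np.sqrt(3 * np.mean(x1**2)))                # defined
print(np.sqrt(3 * np.mean((x1 - x1.mean())**2)))  # always 0.0
```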
As a general approach, if we have, say, $k$ parameters to estimate, then the method of moments leads to a system of $k$ equations that we can (sometimes) solve successively, one equation at a time. This tends to be an easier way to obtain estimators than, say, maximum likelihood estimation, but the drawback of method-of-moments estimators is that they don't always make sense given the data.
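To make the $k$-parameter case concrete, here is a hedged sketch with $k = 2$ for a Gamma family with shape $\alpha$ and scale $\beta$ (an illustration of my own choosing, not part of your question): matching $\operatorname{E}[X] = \alpha\beta$ and $\operatorname{Var}[X] = \alpha\beta^2$ gives a two-equation system solved one equation after the other.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 3.0, 2.0                    # hypothetical shape and scale
x = rng.gamma(alpha, beta, size=500)      # hypothetical sample

# Moment conditions: E[X] = alpha*beta, Var[X] = alpha*beta^2.
m1 = x.mean()
m2 = np.mean((x - m1)**2)

# Solve the two equations successively:
beta_hat = m2 / m1            # ratio Var[X]/E[X] isolates beta
alpha_hat = m1 / beta_hat     # then E[X] = alpha*beta gives alpha

print(alpha_hat, beta_hat)
```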