That is a model selection question, and in general there are many approaches. Asking on https://stats.stackexchange.com/ will probably get you multiple detailed answers.
When you do a t-test or an F-test for a single variable, you test whether that coefficient is zero "in the presence of all the other variables". Dropping all the insignificant variables at once changes the model, so there is no clear-cut answer. One option is an F-test for the significance of the reduction itself (i.e., test whether all the dropped coefficients are jointly zero), and keep the reduced model if you do not reject the null.
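As a concrete sketch of that joint F-test (on synthetic data, with an arbitrary true model where only the first predictor matters):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
# Design matrix: intercept plus three predictors; only x1 has a real effect.
X_full = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
beta_true = np.array([1.0, 2.0, 0.0, 0.0])
y = X_full @ beta_true + rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares of the OLS fit of y on X."""
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    return resid @ resid

X_reduced = X_full[:, :2]  # keep intercept and x1; drop x2, x3

rss_full = rss(X_full, y)
rss_reduced = rss(X_reduced, y)

q = X_full.shape[1] - X_reduced.shape[1]  # number of dropped coefficients
dof = n - X_full.shape[1]                 # residual df of the full model

# F-statistic for H0: the dropped coefficients are all zero.
F = ((rss_reduced - rss_full) / q) / (rss_full / dof)
p_value = stats.f.sf(F, q, dof)
print(F, p_value)
```

A large p-value means the data are consistent with the dropped coefficients being zero, so the reduced model is defensible.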
There are many other (and better) approaches for model selection, i.e., picking which covariates to include in the model. You can, for example, use all-subset regression ($\ell_0$ penalty) together with any of these measures: AIC, BIC, Mallows's $C_p$, adjusted $R^2$, prediction error sum-of-squares (PRESS). You can also use the Lasso or concave penalties (like the MCP) to avoid searching over all subsets.
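For a small number of predictors, the all-subset search is easy to write directly. A minimal sketch scoring every subset by BIC (synthetic data, four predictors, where only the first and third have real effects):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, p = 120, 4
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)  # x0 and x2 matter

def bic(cols):
    """BIC of the OLS fit on an intercept plus the given columns."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    rss = resid @ resid
    k = Z.shape[1]  # number of fitted coefficients
    return n * np.log(rss / n) + k * np.log(n)

# Enumerate all 2^p subsets and keep the one with the smallest BIC.
subsets = [c for r in range(p + 1) for c in itertools.combinations(range(p), r)]
best = min(subsets, key=bic)
print(best)
```

Swapping `bic` for AIC or adjusted $R^2$ only changes the scoring function; the $2^p$ enumeration is why the Lasso and concave penalties become attractive once $p$ is large.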
You can also use stepwise (forward and/or backward) regression, which basically applies the t- or F-test sequentially: including the most significant variable one at a time (forward), or eliminating the least significant one at each step (backward).
These are not the only approaches!