This statement is unclear and highlights the limitations of Carroll's understanding of the statistical foundation of the CAT-DI. Multivariate analysis covers a wide range of statistical methods, including multivariate analysis of variance, multivariate regression, factor analysis for measurement data, item response theory, and many other possibilities. The specific requirement for the number of respondents relative to "items" depends on many factors, including the specific multivariate model, the types of hypotheses being tested (assuming any are being tested at all), effect sizes, and the method of estimation (eg, least squares vs maximum likelihood). In the context of item response theory, we certainly need more respondents than items, but the claim that the ratio must be 10:1 has no basis whatsoever in statistical theory.

Because we used item response theory to calibrate the model, the relevant question is whether the solution is stable and the estimation procedure has converged. Carroll's statement is apparently based on a quotation from Nunnally2 that predates the method of estimation used in our article (ie, marginal maximum likelihood estimation); it is therefore questionable both in the degree to which it applies and, more important, in what it applies to. Furthermore, convergence was obtained both in the current study and in our previous study, which used a similar number of participants and items. If the more complex bifactor model had not been appropriately estimated, it would be unlikely to provide such an overwhelming improvement in fit over the simpler unidimensional model. Finally, Carroll ignores the description of the analysis, where we clearly state that we used a balanced incomplete block design to select a subset of approximately 250 items per participant.
Even if we were testing hypotheses about the item parameters (for example, to assess differential item functioning), no fixed 10:1 ratio of subjects to items could possibly hold for all hypotheses, item parameters, item response theory models, and effect sizes.
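The point that convergence and stability, rather than a fixed subjects-to-items ratio, govern estimation quality can be illustrated with a small simulation. The sketch below is hypothetical and is not the analysis reported in our article: it simulates responses under a simple Rasch model (a one-parameter item response theory model), estimates item difficulties by marginal maximum likelihood using Gauss-Hermite quadrature, and checks convergence and parameter recovery with a respondents-to-items ratio of only 5:1. All sample sizes and settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logsumexp

# Simulate Rasch-model responses: 200 respondents, 40 items (a 5:1 ratio,
# well below the purported 10:1 rule). All quantities are illustrative.
rng = np.random.default_rng(0)
n_people, n_items = 200, 40
b_true = rng.normal(0.0, 1.0, n_items)   # item difficulties
theta = rng.normal(0.0, 1.0, n_people)   # latent traits
resp = (rng.random((n_people, n_items))
        < expit(theta[:, None] - b_true[None, :])).astype(float)

# Gauss-Hermite quadrature integrates the latent trait out of the
# likelihood (theta ~ N(0, 1)), ie, marginal maximum likelihood.
nodes, wts = np.polynomial.hermite.hermgauss(21)
theta_q = np.sqrt(2.0) * nodes
log_w = np.log(wts / np.sqrt(np.pi))

def neg_marginal_loglik(b):
    # Response probabilities at each quadrature node for each item
    p = np.clip(expit(theta_q[:, None] - b[None, :]), 1e-12, 1 - 1e-12)
    # Log-likelihood of each person's response pattern at each node
    ll = resp @ np.log(p).T + (1.0 - resp) @ np.log(1.0 - p).T  # (N, 21)
    # Marginalize over the quadrature approximation of N(0, 1)
    return -logsumexp(ll + log_w[None, :], axis=1).sum()

fit = minimize(neg_marginal_loglik, np.zeros(n_items), method="L-BFGS-B")
recovery = np.corrcoef(fit.x, b_true)[0, 1]
print(fit.success, round(recovery, 3))
```

In runs of this sketch, the optimizer converges and the estimated difficulties correlate highly with the generating values despite the 5:1 ratio, which is the behavior one would check in practice: convergence diagnostics and parameter recovery, not an arbitrary ratio.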