Although ARCH-related models have proven quite popular in finance, they are less frequently used in macroeconomic applications. In part this may be because macroeconomists are usually more concerned with characterizing the conditional mean rather than the conditional variance of a time series. This paper argues that even if one's interest is in the conditional mean, correctly modeling the conditional variance can still be quite important, for two reasons. First, OLS standard errors can be badly misleading, with a "spurious regression" possibility in which a true null hypothesis is asymptotically rejected with probability one. Second, inference about the conditional mean can be inappropriately influenced by outliers and high-variance episodes if one has not incorporated the conditional variance directly into the estimation of the mean, and infinite relative efficiency gains may be possible. The practical relevance of these concerns is illustrated with two empirical examples from the macroeconomics literature, the first looking at market expectations of future changes in Federal Reserve policy, and the second looking at changes over time in the Fed's adherence to a Taylor Rule.
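The first concern can be illustrated with a small Monte Carlo sketch (not from the paper; the parameter values and sample sizes below are illustrative assumptions). Data are generated as ARCH(1) white noise with alpha = 0.9, so the series has no serial correlation but an infinite fourth moment (3 * alpha^2 > 1). Regressing y_t on y_{t-1} and testing the true null of a zero slope with conventional OLS standard errors then over-rejects well beyond the nominal 5% level, while a heteroskedasticity-consistent (White) standard error behaves much better:

```python
import numpy as np

def simulate_arch1(T, omega=1.0, alpha=0.9, rng=None):
    """ARCH(1) white noise: y_t = sqrt(h_t)*v_t, h_t = omega + alpha*y_{t-1}^2.
    With alpha = 0.9 the fourth moment of y_t is infinite (3*alpha^2 > 1)."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.zeros(T)
    h = omega / (1 - alpha)  # start variance at its unconditional mean
    for t in range(T):
        y[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + alpha * y[t] ** 2
    return y

def t_stats(y):
    """Regress y_t on y_{t-1} (true slope is 0); return (OLS t, White t)."""
    x, u = y[:-1], y[1:]
    beta = (x @ u) / (x @ x)
    e = u - beta * x
    se_ols = np.sqrt((e @ e) / (len(e) - 1) / (x @ x))      # conventional SE
    se_white = np.sqrt((x ** 2) @ (e ** 2)) / (x @ x)       # White HC SE
    return beta / se_ols, beta / se_white

rng = np.random.default_rng(0)
reps, T, crit = 500, 500, 1.96
rej_ols = rej_white = 0
for _ in range(reps):
    t_ols, t_white = t_stats(simulate_arch1(T, rng=rng))
    rej_ols += abs(t_ols) > crit
    rej_white += abs(t_white) > crit
print(f"rejection rate at nominal 5%: OLS {rej_ols/reps:.2f}, "
      f"White {rej_white/reps:.2f}")
```

Increasing T raises the conventional test's rejection rate further, consistent with the asymptotic rejection-with-probability-one result the abstract describes.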