This paper analyzes the asymptotic convergence properties of policy iteration in a class of stationary, infinite-horizon Markovian decision problems that arise in optimal growth theory. These problems have continuous state and control variables and must therefore be discretized in order to compute an approximate solution. The discretization converts a potentially infinite-dimensional fixed-point problem into a finite-dimensional problem defined on a finite grid of points in the state space, and it may thus render inapplicable known convergence results for policy iteration, such as those of Puterman and Brumelle (1979). Under certain regularity conditions, we prove that for piecewise linear interpolation policy iteration converges quadratically; that is, the sequence of errors e_n = ||V_n - V*|| (where V_n is the approximate value function produced by the nth policy iteration step) satisfies e_{n+1} <= L e_n^2 for all n. We show how the constant L depends on the grid size of the discretization. Under more general conditions we also establish that convergence is superlinear. We illustrate the theoretical results with numerical experiments comparing the performance of policy iteration and the method of successive approximations; the quantitative results are consistent with the theoretical predictions.
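To make the setting concrete, the following is a minimal Python sketch of the two methods the abstract compares, applied to a discretized deterministic optimal growth model. It is not the paper's implementation: the log-utility/Cobb-Douglas specification, the parameter values, and the grid are illustrative assumptions, and the model here is deterministic rather than a general Markovian decision problem.

```python
import numpy as np

# Illustrative discretized growth model (NOT the paper's specification):
# log utility, Cobb-Douglas production f(k) = k^alpha, discount factor beta.
alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 0.5, 200)          # finite grid over the capital state
C = grid[:, None] ** alpha - grid[None, :]  # consumption c = f(k) - k'
R = np.where(C > 0, np.log(np.maximum(C, 1e-12)), -np.inf)  # reward matrix

def bellman(V):
    """One step of successive approximations: V <- max_k' [R + beta * V(k')]."""
    return (R + beta * V[None, :]).max(axis=1)

def policy_iteration(V0, tol=1e-10, max_iter=100):
    """Policy iteration with exact policy evaluation on the finite grid."""
    V, errors = V0.copy(), []
    n = len(V0)
    for _ in range(max_iter):
        policy = (R + beta * V[None, :]).argmax(axis=1)   # improvement step
        # Evaluation step: solve (I - beta * P_pi) V = r_pi exactly
        # (P_pi is a deterministic 0/1 transition matrix here).
        P = np.zeros((n, n))
        P[np.arange(n), policy] = 1.0
        r = R[np.arange(n), policy]
        V_new = np.linalg.solve(np.eye(n) - beta * P, r)
        errors.append(np.max(np.abs(V_new - V)))
        V = V_new
        if errors[-1] < tol:
            break
    return V, errors

V_star, pi_errors = policy_iteration(np.zeros(len(grid)))

# Successive approximations for comparison: its error contracts geometrically
# at rate beta, so it needs far more iterations than policy iteration.
V, sa_iters = np.zeros(len(grid)), 0
for sa_iters in range(10000):
    V_new = bellman(V)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
```

Printing `pi_errors` shows the rapid (finite-step) convergence of policy iteration on the discretized problem, while `sa_iters` records the much larger iteration count of successive approximations; comparing the two error sequences is the finite-grid analogue of the numerical experiments described in the abstract.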