The evaluation of scientific output plays a key role in the allocation of research funds and academic positions. Decisions are often based on quality indicators for academic journals, and over the years a handful of scoring methods have been proposed for this purpose. Discussing the most prominent methods (the de facto standards), we show that they fail to distinguish quality from quantity at the article level. The systematic bias we identify is analytically tractable and implies that the methods are manipulable. We introduce modified methods that correct for this bias and use them to rank economics journals. Our methodology is transparent; our results are replicable.