This paper is a survey that describes and explains, in non-technical terms, the logic behind the various methodologies used in retrospective quantitative evaluations of public policy programs. Such programs usually target firms or individuals who benefit from direct subsidies and/or training. It is hypothesised that, because of the technical nature of quantitative evaluations, some of the public officials for whom these evaluations are intended may find them too complex to comprehend fully. Those officials might therefore disregard them outright, form a biased opinion (positive or negative), or accept the results at face value. Because all evaluations are subjective by definition, public officials should have some basic knowledge of the logic behind the design and context of evaluations. Only then can they judge the worth of an evaluation for themselves and decide to what degree they will take its findings and recommendations into account. The paper first discusses the issues of accountability and causality and then introduces policy evaluation as a two-phase process: first, the potential impact of the policy in question is estimated; then a judgement is passed on the worth of the estimated impacts through a cost-benefit analysis. Estimation, in turn, comprises two related areas: the design of the evaluation and the specification of the model. In designs, one must consider whether counterfactual populations are included and whether the impact variables are in cross-sectional or longitudinal format. In model specification, the evaluator must decide which control variables to include in the regression model so as to account for selection bias.
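The role of control variables in accounting for selection bias can be sketched with a small simulation. The data, the selection mechanism, and the "firm size" covariate below are all hypothetical and purely illustrative: treated units self-select on an observable characteristic, so a naive comparison of means overstates the impact, while an ordinary least squares regression that includes the characteristic as a control recovers an estimate close to the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data: firms self-select into a subsidy program, and
# selection correlates with a pre-program characteristic (firm size).
size = rng.normal(0.0, 1.0, n)                             # control variable
treated = (size + rng.normal(0.0, 1.0, n) > 0).astype(float)  # selection on size
true_effect = 2.0
outcome = 1.0 + true_effect * treated + 3.0 * size + rng.normal(0.0, 1.0, n)

# Naive comparison of means is biased upward, because treated
# firms are larger on average (selection bias).
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# OLS with the control variable included yields an estimate
# much closer to the true effect of 2.0.
X = np.column_stack([np.ones(n), treated, size])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted = beta[1]
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

This sketch only illustrates selection on an observed variable; when selection depends on unobservables, the design itself (e.g. an explicit counterfactual population) must carry the identification.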
In cost-benefit analysis, decisions must be made as to whether the analysis is conducted at the partial-equilibrium or general-equilibrium level, and whether the judgements formulated rest purely on efficiency grounds or on distributional criteria as well. The paper recommends, among other things, that (a) public policy evaluations should establish clear rules of causation between the public intervention and the potential impact measured, (b) limitations in both the estimation and the cost-benefit analysis phase must be explicitly stated, and (c) retrospective evaluations should be conducted at shorter intervals after the end of the intervention, so as to reduce the external heterogeneity generated by the time lag between the results produced and ongoing programs.
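The distinction between efficiency-only and distributional judgements can be sketched as follows. All groups, figures, and the welfare weight are hypothetical assumptions for illustration: under the efficiency criterion every unit of benefit counts equally, whereas a distributional criterion attaches a higher weight to benefits accruing to a disadvantaged group, which can change the sign or size of the net verdict.

```python
# Hypothetical program benefits accruing to two income groups, and total cost.
benefits = {"low_income": 40.0, "high_income": 70.0}
cost = 100.0

# Efficiency criterion: a unit of benefit counts the same for everyone.
efficiency_net = sum(benefits.values()) - cost  # 40 + 70 - 100 = 10.0

# Distributional criterion: weight benefits to the low-income group
# more heavily (the weight 1.5 is an illustrative assumption).
weights = {"low_income": 1.5, "high_income": 1.0}
weighted_net = sum(w * benefits[g] for g, w in weights.items()) - cost
# 1.5 * 40 + 1.0 * 70 - 100 = 30.0

print(efficiency_net, weighted_net)
```

Here the program passes under both criteria, but the distributional weighting triples the estimated net benefit; with costs borne unevenly, the two criteria can just as easily disagree.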