One of the most critical issues in using neural networks is how to select an appropriate network architecture for the problem at hand. Practitioners usually rely on information criteria, which may lead to over-parameterized models with serious consequences for overfitting and poor ex-post forecast accuracy. Moreover, since model selection criteria depend on sample information, their observed values are subject to statistical variation. Hence, comparing multiple models in terms of their out-of-sample predictive ability requires a formal test procedure. In such a context, however, there is always the possibility that any satisfactory results obtained are simply due to chance rather than to any merit inherent in the model yielding them. This problem can be particularly serious when using neural network models, which are basically atheoretical. In this paper we propose a strategy for neural network model selection based on a sequence of tests; to avoid the data snooping problem, the familywise error rate is controlled by an appropriate technique. The procedure requires resampling techniques in order to obtain valid asymptotic critical values for the tests. Some simulation results and applications to real data are discussed.
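To illustrate the kind of procedure described above, the following is a minimal sketch of a bootstrap-based test of superior predictive ability, in the spirit of a reality-check comparison of several candidate models against a benchmark. The maximum statistic controls the familywise error rate across candidates. All function and variable names here are illustrative, and the i.i.d. bootstrap is a simplifying assumption (dependent forecast errors would call for a block or subsampling scheme, and this sketch is not the specific procedure proposed in the paper).

```python
import numpy as np

def reality_check_pvalue(loss_benchmark, loss_candidates, n_boot=2000, seed=0):
    """Bootstrap p-value for the null that no candidate model beats the
    benchmark in out-of-sample loss.

    loss_benchmark  : (T,) per-observation losses of the benchmark model
    loss_candidates : (T, k) per-observation losses of k candidate models

    Using the maximum over candidates of the studentizable mean loss
    differential controls the familywise error rate across the k
    comparisons. NOTE: the i.i.d. bootstrap below is purely illustrative;
    serially dependent forecast errors require a block bootstrap.
    """
    rng = np.random.default_rng(seed)
    # Positive differential => candidate has lower loss than the benchmark.
    d = loss_benchmark[:, None] - loss_candidates
    T = d.shape[0]
    dbar = d.mean(axis=0)
    stat = np.sqrt(T) * dbar.max()

    boot_stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)
        # Recentring by dbar imposes the null of equal predictive ability.
        boot_stats[b] = np.sqrt(T) * (d[idx].mean(axis=0) - dbar).max()

    return float((boot_stats >= stat).mean())
```

For example, feeding in per-observation squared forecast errors from a benchmark and from several fitted network architectures yields a single p-value for the joint null that none of the architectures improves on the benchmark, so a small p-value cannot be explained away as an artifact of searching over many models.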