In “Improving neural networks by preventing co-adaptation of feature detectors”, Hinton, Srivastava, Krizhevsky, Sutskever, and Salakhutdinov ask what happens if, “on each presentation of each training case, each hidden unit is randomly omitted from the network with a probability of 0.5, so a hidden unit cannot rely on other hidden units being present.” This mimics the standard technique of training several neural nets and averaging their predictions, but it is much faster. When they applied this “dropout” technique to deep neural networks on the MNIST handwritten digit data set and the TIMIT speech data set, they got robust learning without overfitting. Dropout was also one of the main techniques used by the winners of the Merck Molecular Activity Challenge.
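The mechanics are simple enough to sketch in a few lines of NumPy. The sketch below is illustrative, not the authors' code: during training each hidden unit is zeroed with probability 0.5, and at test time no units are dropped but activations are scaled by the keep probability, which approximates averaging over the many “thinned” networks (the paper achieves the same effect by halving the outgoing weights).

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, p_drop=0.5, training=True):
    """Apply dropout to a layer of hidden activations h.

    Training: each unit is independently zeroed with probability p_drop.
    Test: keep all units and scale activations by (1 - p_drop).
    """
    if training:
        mask = rng.random(h.shape) >= p_drop  # keep each unit with prob 1 - p_drop
        return h * mask
    return h * (1.0 - p_drop)

# Toy usage with a single hypothetical hidden layer.
x = rng.standard_normal((4, 8))          # batch of 4 inputs, 8 features
W = rng.standard_normal((8, 16)) * 0.1   # weights for 16 hidden units
h = np.maximum(0.0, x @ W)               # ReLU hidden activations
h_train = dropout_forward(h, training=True)   # roughly half the units are zeroed
h_test = dropout_forward(h, training=False)   # all units kept, scaled by 0.5
```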
Hinton talks about the dropout technique in his video “Brains, Sex, and Machine Learning”.