We demonstrate accurate spatio-temporal gait data classification from raw tomography sensor data without the need to reconstruct images. Our approach rests on a simple yet efficient machine learning methodology: a convolutional neural network architecture that learns spatio-temporal features automatically, end-to-end, from raw sensor data. In a case study on a floor pressure tomography sensor, experimental results show an effective gait pattern classification F-score of 97.88 $\pm$ 1.70\%. We show that automatic extraction of classification features from raw data yields substantially better performance than features derived by shallow machine learning models operating on reconstructed images, implying that for the purpose of automatic decision-making the image reconstruction step can be eliminated. The approach is portable across a range of industrial tasks that involve tomography sensors. The proposed learning architecture is computationally efficient, has a low number of parameters, and achieves reliable classification F-score performance from a limited set of experimental samples. We also introduce a floor sensor dataset of 892 samples, covering 10 manners of walking and 3 cognition-oriented tasks, for a total of 13 types of gait patterns.
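The core idea, classifying directly from raw spatio-temporal sensor frames without an image reconstruction step, can be sketched in miniature as follows. This is an illustrative toy only: the frame shapes, the fixed hand-written kernel, and the helper names are assumptions, and a real model would learn the kernel weights end-to-end rather than fix them.

```python
# Toy sketch: spatio-temporal feature extraction from raw tomography
# frames (no image reconstruction). Hypothetical shapes and kernel values.

def conv2d(frame, kernel):
    """Valid 2-D convolution of one raw sensor frame with a small kernel."""
    fh, fw = len(frame), len(frame[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(fh - kh + 1):
        row = []
        for j in range(fw - kw + 1):
            row.append(sum(frame[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def temporal_pool(feature_maps):
    """Max-pool each spatial location across time (the temporal axis)."""
    return [[max(fm[i][j] for fm in feature_maps)
             for j in range(len(feature_maps[0][0]))]
            for i in range(len(feature_maps[0]))]

# Made-up raw input: 3 time frames of 4x4 pressure readings.
frames = [[[t + i + j for j in range(4)] for i in range(4)]
          for t in range(3)]
edge_kernel = [[1, -1], [1, -1]]  # fixed 2x2 kernel, for illustration only

# Convolve each frame spatially, pool over time, flatten to a feature
# vector that a classifier head would consume.
pooled = temporal_pool([conv2d(f, edge_kernel) for f in frames])
features = [v for row in pooled for v in row]
print(len(features))  # 9 features from the 3x3 pooled map
```

In the paper's setting these operations are stacked and trained end-to-end, so the kernels themselves become the learned spatio-temporal gait features.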