The adoption of Deep Neural Network (DNN) methods to solve real-world problems has been increasing as data volumes grow. Although such methods present impressive results in supervised learning, it is known that noise modifying the original data behavior can affect model accuracy and, consequently, the generalization process, which is highly relevant in learning tasks. Several approaches have been proposed to reduce the impact of noise on the final model, ranging from the application of preprocessing steps to the design of robust DNN layers. However, we have noticed that such approaches were not systematically assessed to understand how noise influences propagate throughout DNN architectures. This gap motivated us to design this work, which focuses on modeling noisy data with temporal dependencies, typically referred to as time series or signals. In summary, our main goal was to create a network capable of acting as a noise filter and of being easily connected to existing networks. To reach this goal, we defined a methodology organized into four phases: i) a study on the application of DNNs to model signals collected from a real-world problem; ii) an investigation of different preprocessing tools to transform such signals and reduce noise influences; iii) an analysis of the impact of increasing/reducing noise on the final model; and iv) the creation of a new DNN that can be embedded into existing architectures and act as a noise-filtering layer while keeping the overall performance. The first and second phases were achieved in collaboration with researchers from the Universidad de La Frontera, who provided a set of signals directly collected from the Llaima volcano in Chile. The modeling performed on such signals allowed the creation of a new architecture called SeismicNet.
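As an illustration of the kind of preprocessing considered in phase ii), a denoising step can be as simple as a moving-average filter applied to the raw signal. The sketch below, in Python/NumPy, is purely illustrative: the function name and window size are assumptions, and it does not reproduce the specific preprocessing tools evaluated in the study.

```python
import numpy as np

def moving_average_filter(signal, window=5):
    """Smooth a 1-D signal with a moving-average kernel.

    Illustrative denoising preprocessing step only; the actual
    preprocessing tools assessed in the study may differ.
    """
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# Usage: attenuate high-frequency noise in a synthetic noisy signal
rng = np.random.default_rng(1)
raw = np.sin(np.linspace(0, 4 * np.pi, 1000)) + rng.normal(0, 0.5, 1000)
smoothed = moving_average_filter(raw, window=5)
```

Averaging over a short window attenuates uncorrelated sample-to-sample noise while roughly preserving the slower temporal structure of the signal.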
By knowing the behavior of such signals, we could create a controlled scenario with different additive noise levels and the outputs produced by our original models, thus fulfilling the third phase. Next, we performed two new studies to understand the impact of noise in our scenario. First, we used statistical tests to confirm the error variation when noise is added to the expected signals. Then, we used XAI (eXplainable Artificial Intelligence) to visually comprehend how noise propagates through the DNN layers. Finally, we were able to complete the last phase and accomplish our primary goal: the design of a new neural network architecture with embedded noise filtering that suppresses the need for a preprocessing phase. Interpreting the obtained results, we understand that this novel approach learned the noisy features better and was capable of delivering stable results regardless of the noise level in the signal.
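A controlled additive-noise scenario like the one described above can be sketched by corrupting a clean signal with zero-mean Gaussian noise at chosen signal-to-noise ratios (SNRs). This is a minimal sketch in Python/NumPy under assumed conditions: the helper name, the SNR levels, and the synthetic test signal are illustrative and are not the ones used with the Llaima data.

```python
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Corrupt `signal` with zero-mean Gaussian noise at `snr_db` dB.

    Illustrative helper: the actual noise model and levels used in
    the controlled scenario may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Usage: one clean synthetic signal corrupted at several SNR levels
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = {snr: add_noise(clean, snr, np.random.default_rng(0))
         for snr in (20, 10, 0)}
```

Feeding such progressively noisier versions of the same input to a fixed model makes it possible to measure how the output error varies with the noise level, which is the kind of comparison the statistical tests above rely on.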