Deep Learning / impact

4 (+0) ⭐  Learning the impact of hyperparameters in a deep learning model

57 (+0) ⭐  Training deep neural networks with low precision multiplications

49 (+0) ⭐  (no description)

21139 (+6) ⭐  The most cited deep learning papers

5 (+0) ⭐  Analysis Of The Context Size Impact In Deep Learning Conversational Systems

40 (+0) ⭐  Pre-trained model, code, and materials from the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation" (MICCAI 2019).


In this work I investigate the speech command task by developing and analyzing deep learning models. The state of the art uses convolutional neural networks (CNNs) because of their intrinsic ability to learn correlated representations, such as those found in speech. In particular, I develop several CNNs trained on the Google Speech Commands Dataset and tested under different scenarios. A central problem in speech recognition is the variation in word pronunciation across speakers; one way to build a model that is invariant to this variability is to augment the dataset by perturbing the input. I study two kinds of augmentation: Vocal Tract Length Perturbation (VTLP) and Synchronous Overlap and Add (SOLA), which locally perturb the input in frequency and in time, respectively. The models trained on augmented data outperform all models trained on the unaugmented dataset in accuracy, precision, and recall. The design of the CNN also affects the learning of invariances: the inception architecture helps the network learn features that are invariant to speech variability by using convolutions with different kernel sizes. Intuitively, this is because the model can implicitly detect speech patterns of different lengths in the audio features.
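The two augmentations lend themselves to a short sketch. Below is a minimal NumPy version, under loud assumptions: VTLP is reduced to a single linear warp of a spectrogram's frequency axis (the published method uses a piecewise-linear warp with a per-utterance random factor), and SOLA is reduced to plain overlap-add time stretching, omitting the synchronization search that aligns frames before adding. All function and parameter names are illustrative, not taken from the repository.

```python
import numpy as np

def vtlp_warp(spec, alpha=1.1):
    """Rough VTLP stand-in: linearly warp the frequency axis of a
    magnitude spectrogram `spec` of shape (freq_bins, time_frames)
    by the factor `alpha`."""
    n_freq = spec.shape[0]
    bins = np.arange(n_freq, dtype=float)
    # Each output bin reads from (fractional) source bin k / alpha,
    # clipped so the warp never reads outside the original range.
    src = np.clip(bins / alpha, 0.0, n_freq - 1.0)
    out = np.empty_like(spec)
    for t in range(spec.shape[1]):
        out[:, t] = np.interp(src, bins, spec[:, t])
    return out

def ola_time_stretch(y, rate=1.1, frame=1024, hop=256):
    """Plain overlap-add time stretch of waveform `y` (assumes
    len(y) >= frame): read frames with hop `hop * rate`, write them
    with hop `hop`, so rate > 1 shortens the signal. Real SOLA
    additionally cross-correlates each frame with the output already
    written to pick the best overlap position before adding."""
    ana_hop = max(1, int(hop * rate))
    n_frames = 1 + max(0, (len(y) - frame) // ana_hop)
    out = np.zeros(frame + (n_frames - 1) * hop)
    norm = np.zeros_like(out)
    win = np.hanning(frame)
    for i in range(n_frames):
        a, s = i * ana_hop, i * hop
        out[s:s + frame] += y[a:a + frame] * win
        norm[s:s + frame] += win
    # Normalize by the accumulated window weight to undo tapering.
    return out / np.maximum(norm, 1e-8)
```

In a setup like the one described, each training utterance would be perturbed this way (in time on the waveform, in frequency on the spectrogram) before being fed to the CNN, so the network never sees exactly the same example twice.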

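The inception intuition from the abstract (parallel convolutions with different kernel sizes detecting speech patterns of different lengths) can be sketched as well. This is a hypothetical PyTorch block, not the repository's actual architecture; the branch widths and kernel sizes are made up for illustration.

```python
import torch
import torch.nn as nn

class MultiKernelBlock(nn.Module):
    """Inception-style block: parallel 2-D convolutions with different
    kernel sizes over a spectrogram, concatenated channel-wise, so each
    branch responds to time-frequency patterns of a different extent."""
    def __init__(self, in_ch, out_ch_per_branch=16):
        super().__init__()
        self.branches = nn.ModuleList([
            # padding=k//2 keeps spatial dims equal across branches
            nn.Conv2d(in_ch, out_ch_per_branch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5, 7)
        ])

    def forward(self, x):
        # x: (batch, channels, freq, time) spectrogram tensor
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

block = MultiKernelBlock(in_ch=1)
x = torch.randn(8, 1, 40, 101)   # batch of 8 single-channel mel spectrograms
print(block(x).shape)            # torch.Size([8, 64, 40, 101])
```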