Deep Learning / small code

tylib 🍂

Small Python code for deep learning.


Jupyter notebooks for the code samples of the book "Deep Learning with Python"


Small tools, notebooks, and code snippets for the Keras deep learning library.


Code for the paper "Deep Learning with Medical Images: Best Practices for Small Data".


This is a research project on how to do medical image classification on a small dataset with deep learning. The PDF is the report, and VGG16 is the code for the experiments.


Code for the Edureka tutorial "Deep Learning Using TensorFlow | TensorFlow Tutorial Python" (https://www.youtube.com/watch?v=yX8KuPZCAMo), with small corrections and additions of missing parts by Claude COULOMBE, PhD candidate, TÉLUQ / UQAM, Montréal.


This repository contains small projects related to neural networks and deep learning in general. Subjects are closely linked with articles I publish on Medium. I encourage you both to read the articles and to check how the code works in action.


This is a small piece of code that makes it easier to collect market data from different exchanges and store it in an SQL database, to be used for deep learning in later steps. It can be deployed to a DigitalOcean Dokku server.


### Image Classifier

Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smartphone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.

In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice, you'd train this classifier, then export it for use in your application. We'll be using this dataset of 102 flower categories. When you've completed this project, you'll have an application that can be trained on any set of labelled images. Here your network will learn about flowers and end up as a command-line application. But what you do with your new skills depends on your imagination and effort in building a dataset. This is the final project of the Udacity AI with Python Nanodegree.

#### Prerequisites

The code is written in Python 3.6.5. If you don't have Python installed you can find it here. If you are using a lower version of Python, you can upgrade using pip, ensuring you have the latest version of pip. To install pip, run in the command line:

```
python -m ensurepip --default-pip
```

To upgrade it:

```
python -m pip install --upgrade pip setuptools wheel
```

Note that pip cannot upgrade the Python interpreter itself; to upgrade Python, install a newer release from python.org.

Additional packages that are required: NumPy, Pandas, Matplotlib, PyTorch, PIL (Pillow) and json. You can download them using pip:

```
pip install numpy pandas matplotlib pillow
```

or conda:

```
conda install numpy pandas matplotlib pillow
```

To install PyTorch, head over to the PyTorch site, select your specs and follow the instructions given.

#### Viewing the Jupyter Notebook

To better view and work on the Jupyter notebook, I encourage you to use nbviewer: simply copy and paste the link into that website and you will be able to view it without any problem. Alternatively, you can clone the repository:

```
git clone https://github.com/fotisk07/Image-Classifier/
```

Then, after you have installed Jupyter Notebook, type in the command line:

```
jupyter notebook
```

locate the notebook and run it.

#### Command Line Application

Train a new network on a data set with `train.py`:

- Basic usage: `python train.py data_directory`
- Prints out the current epoch, training loss, validation loss, and validation accuracy as the network trains
- Options:
  - Set directory to save checkpoints: `python train.py data_dir --save_dir save_directory`
  - Choose architecture (alexnet, densenet121 or vgg16 available): `python train.py data_dir --arch "vgg16"`
  - Set hyperparameters: `python train.py data_dir --learning_rate 0.001 --hidden_layer1 120 --epochs 20`
  - Use GPU for training: `python train.py data_dir --gpu`
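As a rough illustration only (the repository's actual argument handling may differ), the options above could be parsed with `argparse` along these lines:

```python
# Hypothetical sketch of train.py's command-line interface; the option names
# mirror those documented above, but this is not the repository's actual code.
import argparse

parser = argparse.ArgumentParser(description='Train an image classifier on a directory of labelled images.')
parser.add_argument('data_dir', help='directory containing the train/valid/test image folders')
parser.add_argument('--save_dir', default='.', help='directory in which to save checkpoints')
parser.add_argument('--arch', default='vgg16', choices=['alexnet', 'densenet121', 'vgg16'],
                    help='pretrained architecture used as the feature extractor')
parser.add_argument('--learning_rate', type=float, default=0.001)
parser.add_argument('--hidden_layer1', type=int, default=120, help='size of the first hidden layer')
parser.add_argument('--epochs', type=int, default=20)
parser.add_argument('--gpu', action='store_true', help='train on the GPU if one is available')
args = parser.parse_args()
print(args)  # e.g. Namespace(arch='vgg16', data_dir='flowers', ...)
```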
Predict the flower name from an image with `predict.py`, along with the probability of that name. That is, you'll pass in a single image `/path/to/image` and it will return the flower name and class probability:

- Basic usage: `python predict.py /path/to/image checkpoint`
- Options:
  - Return the top K most likely classes: `python predict.py input checkpoint --top_k 3`
  - Use a mapping of categories to real names: `python predict.py input checkpoint --category_names cat_to_name.json`
  - Use GPU for inference: `python predict.py input checkpoint --gpu`

#### JSON File

In order for the network to print out the name of the flower, a `.json` file is required. If you aren't familiar with JSON you can find information here. With a `.json` file, the data can be sorted into folders labelled with numbers, and those numbers correspond to specific names specified in the `.json` file.

#### Data and the JSON File

The data used for this assignment (a flower database) are not provided in the repository, as they are larger than what GitHub allows. Nevertheless, feel free to create your own database and train the model on it for your own projects. The structure of your data should be the following: the data need to comprise three folders: train, test and validate. Generally the proportions should be 70% training, 10% validation and 20% test. Inside the train, test and validate folders there should be folders bearing a specific number, which corresponds to a specific category clarified in the JSON file. For example, if we have the image `a.jpg` of a rose, it could live at a path like `/test/5/a.jpg`, and the JSON file would contain `{..., "5": "rose", ...}`. Make sure to include plenty of photos of your categories (more than 10) with different angles and different lighting conditions, so the network generalizes better.
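The mapping file is plain JSON, so loading it and turning a predicted category number into a readable name takes only a few lines. A minimal sketch, assuming a `cat_to_name.json` laid out as described above:

```python
import json

# Load the category-number -> flower-name mapping, e.g. {"5": "rose", ...}.
with open('cat_to_name.json', 'r') as f:
    cat_to_name = json.load(f)

# JSON object keys are strings, so a predicted folder/category number
# such as 5 must be looked up as "5".
predicted_category = 5  # hypothetical model output
print(cat_to_name[str(predicted_category)])  # -> "rose"
```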
#### GPU

Because the network makes use of a sophisticated deep convolutional neural network, the training process is impractical on a common laptop. To train the model on your local machine you have three options:

- CUDA: if you have an NVIDIA GPU you can install CUDA from here. With CUDA you will be able to train your model, though the process will still be time-consuming.
- Cloud services: there are many paid cloud services that let you train your models, such as AWS or Google Cloud.
- Google Colab: Google Colab gives you free access to a Tesla K80 GPU for 12 hours at a time. Once the 12 hours have elapsed you can just reload and continue! The only limitation is that you have to upload the data to Google Drive, and if the dataset is massive you may run out of space.

However, once a model is trained, a normal CPU can be used for `predict.py` and you will have an answer within a few seconds.

#### Hyperparameters

As you can see, you have a wide selection of hyperparameters available, and you can get even more by making small modifications to the code. It may therefore seem overly complicated to choose the right ones, especially when training takes at least 15 minutes to complete. So here are some hints:

- Increasing the number of epochs makes the accuracy of the network on the training set better and better; however, be careful, because with too many epochs the network won't generalize well, that is, it will have high accuracy on the training images and low accuracy on the test images. E.g.: training for 12 epochs gives training accuracy 85% and test accuracy 82%; training for 30 epochs gives training accuracy 95% and test accuracy 50%.
- A large learning rate makes the network converge quickly to a small error, but it will constantly overshoot the minimum; a small learning rate reaches higher accuracy, but the learning process takes longer.
- densenet121 works best for images, but the training process takes significantly longer than with alexnet or vgg16 (see the sketch at the end of this README section).

My settings were lr=0.001, dropout=0.5, epochs=15, and my test accuracy was 86% with densenet121 as my feature-extraction model.

#### Pre-Trained Network

The `checkpoint.pth` file contains the information of a network trained to recognize 102 different species of flowers. It has been trained with specific hyperparameters, so if you don't set them accordingly the network will fail. To get a prediction for an image located at `/path/to/image` using my pretrained model, you can simply type:

```
python predict.py /path/to/image checkpoint.pth
```

#### Contributing

Please read CONTRIBUTING.md for the process for submitting pull requests.

#### Authors

- Shanmukha Mudigonda - Initial work
- Udacity - Final Project of the AI with Python Nanodegree
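To make the densenet121 feature-extraction setup mentioned in the hyperparameter hints concrete, here is a minimal, hypothetical PyTorch sketch (not the repository's actual code) that freezes the pretrained features and trains a new classifier head with the settings quoted above:

```python
import torch
from torch import nn
from torchvision import models

# Use densenet121 as a frozen feature extractor (transfer learning).
model = models.densenet121(pretrained=True)
for param in model.parameters():
    param.requires_grad = False  # freeze the convolutional features

# Replace the classifier head; densenet121 exposes 1024 features.
# The hidden size of 512 is an arbitrary choice for this sketch.
model.classifier = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),          # dropout = 0.5, as quoted above
    nn.Linear(512, 102),        # 102 flower categories
    nn.LogSoftmax(dim=1),
)

# Only the new classifier's parameters are optimized; lr = 0.001 as quoted.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=0.001)
criterion = nn.NLLLoss()
```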


### Language Translation

In this project, you're going to take a peek into the realm of neural network machine translation. You'll be training a sequence-to-sequence model on a dataset of English and French sentences that can translate new sentences from English to French.

### Get the Data

Since translating the whole English language to French would take a long time to train, we have provided you with a small portion of the English corpus.

```python
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
```

### Explore the Data

Play around with `view_sentence_range` to view different parts of the data.

```python
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
```

Output:

```
Dataset Stats
Roughly the number of unique words: 227
Number of sentences: 137861
Average number of words in a sentence: 13.225277634719028

English sentences 0 to 10:
new jersey is sometimes quiet during autumn , and it is snowy in april .
the united states is usually chilly during july , and it is usually freezing in november .
california is usually quiet during march , and it is usually hot in june .
the united states is sometimes mild during june , and it is cold in september .
your least liked fruit is the grape , but my least liked is the apple .
his favorite fruit is the orange , but my favorite is the grape .
paris is relaxing during december , but it is usually chilly in july .
new jersey is busy during spring , and it is never hot in march .
our least liked fruit is the lemon , but my least liked is the grape .
the united states is sometimes busy during january , and it is sometimes warm in november .

French sentences 0 to 10:
new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
california est généralement calme en mars , et il est généralement chaud en juin .
les états-unis est parfois légère en juin , et il fait froid en septembre .
votre moins aimé fruit est le raisin , mais mon moins aimé est la pomme .
son fruit préféré est l'orange , mais mon préféré est le raisin .
paris est relaxant en décembre , mais il est généralement froid en juillet .
new jersey est occupé au printemps , et il est jamais chaude en mars .
notre fruit est moins aimé le citron , mais mon moins aimé est le raisin .
les états-unis est parfois occupé en janvier , et il est parfois chaud en novembre .
```

### Implement Preprocessing Function

#### Text to Word Ids

As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids.
However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end. You can get the `<EOS>` word id by doing:

```python
target_vocab_to_int['<EOS>']
```

You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.

```python
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    source_id_text = [[source_vocab_to_int[word] for word in sentence.split()]
                      for sentence in source_text.split('\n')]
    # Append <EOS> to every target sentence so the network learns where to stop.
    target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [target_vocab_to_int['<EOS>']]
                      for sentence in target_text.split('\n')]
    return source_id_text, target_id_text


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
```

Tests Passed

### Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.

```python
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
```

### Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart it, you can start from here. The preprocessed data has been saved to disk.

```python
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
```

### Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU.

```python
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```

Output:

```
TensorFlow Version: 1.1.0
Default GPU Device: /gpu:0
```

### Build the Neural Network

You'll build the components necessary for a sequence-to-sequence model by implementing the following functions:

- `model_inputs`
- `process_decoder_input`
- `encoding_layer`
- `decoding_layer_train`
- `decoding_layer_infer`
- `decoding_layer`
- `seq2seq_model`

#### Input

Implement the `model_inputs()` function to create TF placeholders for the neural network. It should create the following placeholders:

- Input text placeholder named "input" using the TF Placeholder name parameter, with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter, with rank 0.
- Target sequence length placeholder named "target_sequence_length" with rank 1.
- Max target sequence length tensor named "max_target_len", getting its value from applying tf.reduce_max on the target_sequence_length placeholder; rank 0.
- Source sequence length placeholder named "source_sequence_length" with rank 1.
Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length).

```python
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
             max target sequence length, source sequence length)
    """
    inputs = tf.placeholder(tf.int32, [None, None], 'input')
    targets = tf.placeholder(tf.int32, [None, None])
    learning_rate = tf.placeholder(tf.float32, [])
    keep_prob = tf.placeholder(tf.float32, [], 'keep_prob')
    target_sequence_length = tf.placeholder(tf.int32, [None], 'target_sequence_length')
    max_target_len = tf.reduce_max(target_sequence_length)
    source_sequence_length = tf.placeholder(tf.int32, [None], 'source_sequence_length')
    return (inputs, targets, learning_rate, keep_prob,
            target_sequence_length, max_target_len, source_sequence_length)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
```

Tests Passed

#### Process Decoder Input

Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.

```python
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for encoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    """
    # A column of <GO> ids, one per sequence in the batch.
    go = tf.constant([[target_vocab_to_int['<GO>']]] * batch_size)
    # Drop the last word id from every sequence in the batch.
    end = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    return tf.concat([go, end], 1)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
```

Tests Passed

#### Encoding

Implement `encoding_layer()` to create an encoder RNN layer:

- Embed the encoder input using `tf.contrib.layers.embed_sequence`
- Construct a stacked `tf.contrib.rnn.LSTMCell` wrapped in a `tf.contrib.rnn.DropoutWrapper`
- Pass the cell and embedded input to `tf.nn.dynamic_rnn()`

```python
from imp import reload
reload(tests)


def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
                   source_sequence_length, source_vocab_size,
                   encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
    embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)

    def lstm_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, keep_prob)

    stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
    return tf.nn.dynamic_rnn(stacked_lstm, embed, source_sequence_length, dtype=tf.float32)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
```

Tests Passed

#### Decoding - Training
Create a training decoding layer:

- Create a `tf.contrib.seq2seq.TrainingHelper`
- Create a `tf.contrib.seq2seq.BasicDecoder`
- Obtain the decoder outputs from `tf.contrib.seq2seq.dynamic_decode`

```python
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
                         target_sequence_length, max_summary_length,
                         output_layer, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded input
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_summary_length: The length of the longest sequence in the batch
    :param output_layer: Function to apply the output layer
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing training logits and sample_id
    """
    helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    dec_train_logits, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)
    # for tensorflow 1.2:
    # dec_train_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)
    return dec_train_logits  # keep_prob/dropout not used?


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
```

Tests Passed

#### Decoding - Inference

Create an inference decoder:

- Create a `tf.contrib.seq2seq.GreedyEmbeddingHelper`
- Create a `tf.contrib.seq2seq.BasicDecoder`
- Obtain the decoder outputs from `tf.contrib.seq2seq.dynamic_decode`

```python
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, max_target_sequence_length,
                         vocab_size, output_layer, batch_size, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder RNN Cell
    :param dec_embeddings: Decoder embeddings
    :param start_of_sequence_id: GO ID
    :param end_of_sequence_id: EOS Id
    :param max_target_sequence_length: Maximum length of target sequences
    :param vocab_size: Size of decoder/target vocabulary
    :param output_layer: Function to apply the output layer
    :param batch_size: Batch size
    :param keep_prob: Dropout keep probability
    :return: BasicDecoderOutput containing inference logits and sample_id
    """
    # One <GO> start token per sequence in the batch.
    start_tokens = tf.constant([start_of_sequence_id] * batch_size)
    helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    dec_infer_logits, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)
    # for tensorflow 1.2:
    # dec_infer_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)
    return dec_infer_logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
```
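The excerpt ends here, before `decoding_layer` and `seq2seq_model`. As a hedged sketch only (not the notebook's actual solution), `decoding_layer` would typically embed the decoder input, build the decoder cell and a shared `Dense` output layer, and run the two decoders above in a shared variable scope so inference reuses the training weights:

```python
def decoding_layer(dec_input, encoder_state, target_sequence_length,
                   max_target_sequence_length, rnn_size, num_layers,
                   target_vocab_to_int, target_vocab_size, batch_size,
                   keep_prob, decoding_embedding_size):
    # Embedding for the decoder input.
    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)

    # Stacked LSTM decoder cell with dropout, mirroring the encoder.
    def lstm_cell():
        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, keep_prob)
    dec_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])

    # Shared projection from RNN outputs to vocabulary logits.
    output_layer = Dense(target_vocab_size)

    # Training and inference decoders share weights via the same scope.
    with tf.variable_scope('decode'):
        train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
                                            target_sequence_length, max_target_sequence_length,
                                            output_layer, keep_prob)
    with tf.variable_scope('decode', reuse=True):
        infer_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings,
                                            target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
                                            max_target_sequence_length, target_vocab_size,
                                            output_layer, batch_size, keep_prob)
    return train_logits, infer_logits
```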
