Deep Learning / scientific

18189 (+11) ⭐

Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.

3 (+0) ⭐

A two-way deep learning bridge between Keras and Fortran


Claire's Scientific Reports classification paper

2017-Summer 🍂
4 (+0) ⭐

UBC Scientific Software Seminar: Practical Deep Learning following


Scientific time series and deep learning state of the art

2017-Winter 🍂
7 (+0) ⭐

UBC Scientific Software Seminar: Neural Networks and Deep Learning in Python

agmb-docker 🍂
6 (+0) ⭐

Docker images for scientific computing and deep learning - by AG Bethge


This is the code package for the following scientific article: Luca Sanguinetti, Alessio Zappone, Merouane Debbah, 'Deep-Learning-Power-Allocation-in-Massive-MIMO', presented at the Asilomar Conference on Signals, Systems, and Computers, 2018.


This paper proposes a deep learning-based method to detect multiple people in a single overhead depth image with high precision. Our neural network, called DPDnet, is composed of two fully-convolutional encoder-decoder blocks built with residual layers. The main block takes a depth image as input and generates a pixel-wise confidence map, where each detected person in the image is represented by a Gaussian-like distribution. The refinement block combines the depth image and the output of the main block to refine the confidence map. Both blocks are trained simultaneously end-to-end using depth images and ground-truth head position labels. The paper provides a rigorous experimental comparison with some of the best state-of-the-art methods, evaluated exhaustively on different publicly available datasets. DPDnet outperforms all the evaluated methods with statistically significant differences, with accuracies that exceed 99%. The system was trained on one of the datasets (generated by the authors and available to the scientific community) and evaluated on the others without retraining, showing that it also achieves high accuracy under varying datasets and experimental conditions. Additionally, we compared our proposal with other CNN-based alternatives proposed very recently in the literature, again obtaining very high performance. Finally, the computational complexity of our proposal is shown to be independent of the number of users in the scene, and the system runs in real time on conventional GPUs.

5 (+0) ⭐

Summarization systems often have additional evidence they can utilize in order to identify the most important topics of a document (or documents). For example, when summarizing blogs, the discussions or comments that follow the blog post are good sources of information for determining which parts of the blog are critical and interesting. In scientific paper summarization, there is a considerable amount of information, such as cited papers and conference information, which can be leveraged to identify important sentences in the original paper.

How text summarization works

In general there are two types of summarization: abstractive and extractive.

1. Abstractive Summarization: Abstractive methods select words based on semantic understanding, even words that did not appear in the source documents. They aim to produce the important material in a new way, interpreting and examining the text using advanced natural language techniques in order to generate a new, shorter text that conveys the most critical information from the original. This can be compared to the way a human reads a text article or blog post and then summarizes it in their own words. Input document → understand context → semantics → create own summary.

2. Extractive Summarization: Extractive methods attempt to summarize articles by selecting a subset of words that retain the most important points. This approach weights the important parts of sentences and uses them to form the summary. Different algorithms and techniques are used to assign weights to the sentences and then rank them based on importance and similarity to one another. Input document → sentence similarity → weight sentences → select sentences with higher rank.

Less research is available on abstractive summarization, as it requires a deeper understanding of the text than the extractive approach does. Purely extractive summaries often give better results than automatic abstractive summaries. This is because abstractive methods must cope with problems such as semantic representation, inference, and natural language generation, which are considerably harder than data-driven approaches such as sentence extraction.

There are many techniques available to generate an extractive summary. To keep it simple, I will be using an unsupervised learning approach to find the similarity between sentences and rank them. One benefit of this is that you don't need to train and build a model before you start using it in your project.

It's good to understand cosine similarity to make the best use of the code you are going to see. Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space: it measures the cosine of the angle between them. Since we will be representing our sentences as vectors, we can use it to find the similarity among sentences. The angle will be 0 (cosine 1) if the sentences are similar. All good till now..? Hope so :)

Next, below is our code flow to generate the summarized text: Input article → split into sentences → remove stop words → build a similarity matrix → generate ranks based on the matrix → pick the top N sentences for the summary.
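The flow above can be sketched in plain, dependency-free Python. This is a minimal illustration, not the article's exact implementation: the tiny stop-word list and the degree-based ranking (scoring each sentence by its total similarity to all others) are my own simplifications; a fuller version might use NLTK stop words and a PageRank-style ranking over the similarity matrix.

```python
import math
import re

# A tiny illustrative stop-word list; a real system would use a fuller one (e.g. NLTK's).
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "in", "to", "and",
              "that", "it", "for", "on", "with", "also"}

def tokenize(sentence):
    """Lowercased words with stop words removed."""
    return [w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOP_WORDS]

def cosine_similarity(u, v):
    """Cosine of the angle between two term-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def summarize(text, top_n=2):
    # 1. Split the article into sentences.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    # 2. Build term-count vectors over a shared vocabulary (stop words removed).
    vocab = sorted({w for s in sentences for w in tokenize(s)})
    vectors = [[tokenize(s).count(term) for term in vocab] for s in sentences]
    # 3-4. Similarity matrix, then rank each sentence by its total similarity to the rest.
    scores = []
    for i, vi in enumerate(vectors):
        score = sum(cosine_similarity(vi, vj) for j, vj in enumerate(vectors) if j != i)
        scores.append((score, i))
    # 5. Pick the top N sentences, restored to their original order.
    chosen = sorted(i for _, i in sorted(scores, reverse=True)[:top_n])
    return " ".join(sentences[i] for i in chosen)

article = ("Deep learning models need large amounts of data. "
           "Deep learning models also need a lot of compute. "
           "My cat sleeps all day.")
print(summarize(article, top_n=2))
```

The two sentences about deep learning score highest because they share several content words, while the unrelated sentence scores zero and is dropped.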

12484 Deep Learning libraries