First Steps in Deep Learning Using TensorFlow

Over the past year, one phrase kept popping up everywhere: “deep learning”. It created the impression that you can do everything with it, so I decided to investigate what the fuss is all about.

My first step (which I recommend for any beginner) was understanding the science behind the magic. Here are several resources I found helpful (this list will be updated from time to time):

Update 16.2.17: Additional resources (thanks to Elad Osherov)

When I felt I had a better understanding of this “magic”, I immediately wanted to apply it to my engineering problems, a relatively unharvested field of research. Based on recommendations from several colleagues, I found that the best tool for the job would be Google’s TensorFlow with Python.

TensorFlow logo (the library includes neural network tools)

Confession: I had not worked with Python before, and I work on a Windows machine.

However, learning Python proved to be relatively easy for an experienced programmer like myself (once you get used to all of the indentation instead of brackets), and setting it up on a Windows machine took a few hours (TensorFlow currently supports only Python 3.5 on Windows, so I had to create a virtual environment with Anaconda).
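
For reference, the Windows setup I ended up with looked roughly like this (a rough sketch, assuming Anaconda is already installed; the environment name is arbitrary):

# create a Python 3.5 environment for TensorFlow (the name is arbitrary)
conda create -n tensorflow python=3.5
# activate it (on Windows the command is simply "activate")
activate tensorflow
# install the CPU-only TensorFlow build with pip
pip install tensorflow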

After doing all of this it was time to start programming!

I recommend starting with the TensorFlow Tutorials. They are very well explained and easy for a beginner to follow.

I started with the MNIST For ML Beginners tutorial.

Here is my final code (detailed explanations are given in the tutorial):

import tensorflow as tf

# Get the data (labels are returned as one-hot 10-dimensional vectors)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
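
# A quick sanity check (my own addition, not part of the tutorial): each image
# arrives as a flattened 28x28 = 784-dimensional row, each label as a one-hot vector
print(mnist.train.images.shape)   # should be (55000, 784)
print(mnist.train.labels.shape)   # should be (55000, 10)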

# Each input image is a flattened 28x28 = 784-dimensional vector
x = tf.placeholder(tf.float32, [None, 784])

# Model parameters for a single softmax (multinomial logistic regression) layer
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

logits = tf.matmul(x, W) + b
y = tf.nn.softmax(logits)

# Cross-entropy loss, computed from the raw logits for numerical stability
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))

# One training step of plain gradient descent with learning rate 0.5
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)

# Train for 1000 steps on mini-batches of 100 images
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Accuracy: the fraction of images whose most likely predicted digit matches the label
correct_predictions = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))

test_accuracy = sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
print(test_accuracy)
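
For reference, the tutorial also writes the cross-entropy out by hand on the softmax output. It is mathematically equivalent to the softmax_cross_entropy_with_logits call above, just less numerically stable, so I stuck with the built-in version (a sketch for comparison):

# "Textbook" cross-entropy computed directly on the softmax probabilities y
cross_entropy_manual = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), axis=1))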

It gets approximately 92% accuracy, which is pretty bad for MNIST, but we used a very simple model.

In the next tutorial (and post) I’m going to use a deep convolutional network to improve this result.