Install the newest release of TensorFlow 1.4 on the operating system of your choice. Use the installation instructions on https://www.tensorflow.org and on https://github.com/tensorflow/tensorflow. If you know what you are doing, install TensorFlow for GPU; otherwise, install TensorFlow for CPU. Use the attached Jupyter notebook 0_test_install.ipynb to demonstrate that TensorFlow is properly installed. Please document all installation steps, including the version of Python you are using.
I'm using Python 2.7 and TensorFlow 1.4 for the following analysis. The software is installed in a virtual environment. The following commands install the software on my EC2 instance:
Create environment
conda create -n tensorflow python=2.7 anaconda
source activate tensorflow
Install pip
sudo apt install -y python-pip
Install TensorFlow and pandas
pip install --ignore-installed --upgrade tensorflow
pip install pandas
Install conda packages
conda install matplotlib
conda install nb_conda
import tensorflow as tf
hello = tf.constant('It works!')
sess = tf.Session()
print(sess.run(hello))
# Check that you have a recent version of TensorFlow installed; for this assignment it should be 1.4
print("You have version %s" % tf.__version__)
%matplotlib inline
import pylab
import numpy as np
# create some data using numpy. y = x * 0.1 + 0.3 + noise
x_train = np.random.rand(100).astype(np.float32)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
# plot it
g = pylab.plot(x_train, y_train, '.')
Construct a simple neural network (a network of logistic units) which implements the (X1 XOR X2) AND X3 function. Choose the weights (θ_i's) of all dendritic inputs and the bias inputs. Demonstrate that your network works by presenting its truth table. Present your network as a simple graph. You can produce the graph in any way convenient, including pen and paper.
We start by solving the X1 XOR X2 sub-problem. X1 XOR X2 is true exactly when one of the two inputs is true and the other is false. The corresponding truth table looks like the following:
X1 | X2 | b(x) |
---|---|---|
0 | 0 | 0 |
0 | 1 | 1 |
1 | 0 | 1 |
1 | 1 | 0 |
A network would look like the following:
# Display the network graph (a PNG image)
from IPython.display import Image
fig = Image(filename=('img/xor.png'))
fig
$b_1 = -9 + 6 x_1 + 6 x_2$
$b_2 = -4 + 8 x_1 + 8 x_2$
For $x_1 = 0$, $x_2 = 0$:
$b_1 = -9$
$b_2 = -4$
Both pre-activations are negative, so both logistic units output 0, and the network output $b(x)$ is 0, matching the first row of the truth table.
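As a quick check of these two hidden units, the short snippet below evaluates $b_1$ and $b_2$ for all four input combinations, using a hard threshold in place of the logistic activation (the function and variable names are mine; only the weights come from the equations above). The printout shows that the first unit behaves like AND and the second like OR, so X1 XOR X2 can be read off as "OR but not AND".
def step(z):
    # Hard threshold standing in for a steep logistic unit
    return 1 if z > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        b1 = -9 + 6 * x1 + 6 * x2   # active only when x1 AND x2
        b2 = -4 + 8 * x1 + 8 * x2   # active when x1 OR x2
        print(x1, x2, step(b1), step(b2))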
We can put everything together into one network graph
The full network, including the AND with X3, would look like the following:
fig = Image(filename=('img/p2.png'))
fig
The overall truth table looks like this:
X1 | X2 | X3 | b(x) |
---|---|---|---|
0 | 0 | 0 | 0 |
0 | 0 | 1 | 0 |
0 | 1 | 0 | 0 |
0 | 1 | 1 | 1 |
1 | 0 | 0 | 0 |
1 | 0 | 1 | 1 |
1 | 1 | 0 | 0 |
1 | 1 | 1 | 0 |
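To confirm the complete network, the script below pushes every row of the table through both layers. The hidden-unit weights are the ones given above; the weights of the XOR output unit and of the final AND unit are illustrative choices of mine (the figure may use different values), so treat this as a sketch of the construction rather than a transcript of the drawn network. Its printed output reproduces the b(x) column of the table.
def step(z):
    # Hard threshold standing in for a steep logistic unit
    return 1 if z > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            h1 = step(-9 + 6 * x1 + 6 * x2)       # AND-like hidden unit (weights from the text)
            h2 = step(-4 + 8 * x1 + 8 * x2)       # OR-like hidden unit (weights from the text)
            xor = step(-5 + 10 * h2 - 10 * h1)    # x1 XOR x2 (assumed weights)
            out = step(-15 + 10 * xor + 10 * x3)  # (x1 XOR x2) AND x3 (assumed weights)
            print(x1, x2, x3, out)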
Determine the value of the number e = 2.7183… to 6 decimal places using the Taylor expansion. Export the TensorBoard graph of your process. Perform a similar calculation using the expression for e as $\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^{n}$. Again export the TensorBoard graph of your process. Provide working code for both approaches.
Note: Alas, I first worked on the problem set uploaded to Canvas (the one with the Fibonacci sequence) and only realized too late that the problem sent by email was different. Could you please send us a short info email if such a thing happens in the future? I lost quite some time on the wrong document.
We start with the first approach, the Taylor series $e = \sum_{k=0}^{\infty} \frac{1}{k!}$. As stopping criterion we keep adding terms until the next term drops below $10^{-6}$, so the sum is accurate to 6 decimal places. Before working in TensorFlow we prototype the loop in plain Python.
# Load libraries
import tensorflow as tf
import numpy as np
import matplotlib.pylab as plt
# Parameters
i = 0                 # index of the current Taylor term
term = 1.0            # current term 1/i!
tay_sum = 0.0         # running partial sum
break_value = 100     # safety limit on the number of iterations
# While loop: e = sum over i of 1/i!
while True:
    tay_sum += term
    # Print output
    print('i: {}, term: {}, partial sum: {:.10f}'.format(i, term, tay_sum))
    i += 1
    term = term / i
    # Stop condition: the next term can no longer change the 6th decimal place
    if term < 1e-6:
        break
    # Safety break
    if i > break_value:
        break
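The assignment also asks for a TensorBoard graph of this approach. Below is a minimal TensorFlow sketch of the same Taylor sum, my own translation of the Python loop above (the op names and the log directory 'p3_taylor' are assumptions, not part of any provided notebook). Building the sum as a chain of tf.add ops makes the structure of the computation visible in TensorBoard.
import math
tf.reset_default_graph()
# Build e ~= sum_{k=0}^{9} 1/k! as a chain of tf.add ops
e_taylor = tf.constant(0.0, dtype=tf.float64, name='taylor_start')
for k in range(10):
    term = tf.constant(1.0 / math.factorial(k), dtype=tf.float64, name='term_{}'.format(k))
    e_taylor = tf.add(e_taylor, term, name='partial_sum_{}'.format(k))
with tf.Session() as sess:
    print('Taylor estimate of e: %.6f' % sess.run(e_taylor))
    # Export the graph for TensorBoard
    tf.summary.FileWriter('p3_taylor', sess.graph)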
Next we implement the second approach, the limit $\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^{n}$, in TensorFlow, and export its graph for TensorBoard.
# Parameters
tol = 1e-6            # successive estimates must agree to 6 decimal places
break_value = 30      # safety limit on the number of iterations
# Function: count the digits after the decimal point (used only for inspection)
def after_point(x):
    s = str(x)
    if '.' not in s:
        return 0
    return len(s) - s.index('.') - 1
# Build the TensorFlow graph for (1 + 1/n)^n
tf.reset_default_graph()
n = tf.placeholder(tf.float64, name='n')
e_limit = tf.pow(1.0 + 1.0 / n, n, name='e_limit')
# Series: increase n until the estimate stabilizes to 6 decimal places
with tf.Session() as sess:
    previous = 0.0
    n_value = 1.0
    for i in range(break_value):
        current = sess.run(e_limit, feed_dict={n: n_value})
        print('n: {:.0f}, estimate: {:.10f}, digits after point: {}'.format(
            n_value, current, after_point(current)))
        if abs(current - previous) < tol:
            break
        previous = current
        n_value *= 10.0
    # Output to TensorBoard
    tf.summary.FileWriter('p3', sess.graph)
fig = Image(filename=('/home/tim/img/p3.jpg'))
fig
When I tried running the code on page 63 of my notes for lecture 10, the resulting TensorBoard graph was not identical to the graph on page 64. Please fix the code on page 63 so that it produces a graph identical to the one on page 64.
Below you can find the fixed code:
import tensorflow as tf

with tf.name_scope("Scope_A"):
    a = tf.add(1, 2, name="A_add")
    b = tf.multiply(a, 3, name="A_mul")

with tf.name_scope("Scope_B"):
    c = tf.add(4, 5, name="B_add")
    d = tf.multiply(c, 6, name="B_mul")

with tf.name_scope("Output"):
    e = tf.add(b, d, name="output")

writer = tf.summary.FileWriter('p4', graph=tf.get_default_graph())
writer.close()
fig = Image(filename=('/home/tim/img/p4.png'))
fig
Please examine the attached Jupyter notebook 2_linear_regression.ipynb. As you run its cells, the notebook will complain about non-existent API calls. This notebook was written against an earlier version of the TensorFlow API, and some calls have changed their names. Fix all code by replacing the older calls with their TF 1.4 equivalents. Uncomment all optional (print) lines. Provide a copy of this notebook with all intermediate results and the image of the TensorFlow graph as captured by TensorBoard.
Below is the fixed code:
# Import tensorflow and other libraries.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
import math
%matplotlib inline
import pylab
sess = None
def resetSession():
    tf.reset_default_graph()
    global sess
    if sess is not None: sess.close()
    sess = tf.InteractiveSession()
resetSession()
print(tf.__version__)
# Create input data using NumPy. y = x * 0.1 + 0.3 + noise
x_train = np.random.rand(100).astype(np.float32)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
# Uncomment the following line to plot our input data.
pylab.plot(x_train, y_train, '.')
# Create some fake evaluation data
x_eval = np.random.rand(len(x_train)).astype(np.float32)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_eval = x_eval * 0.1 + 0.3 + noise
# Build inference graph.
# Create Variables W and b that compute y_data = W * x_data + b
W = tf.Variable(tf.random_normal([1]), name='weights')
b = tf.Variable(tf.random_normal([1]), name='bias')
# Uncomment the following lines to see what W and b are.
print(W)
print(b)
# Create a placeholder we'll use later to feed x's into the graph for training and eval.
# shape=[None] means we can put in any number of examples.
# This is used for minibatch training, and to evaluate a lot of examples at once.
x = tf.placeholder(shape=[None], dtype=tf.float32, name='x')
# Uncomment this line to see what x is
print(x)
# This is the same as tf.add(tf.mul(W, x), b), but looks nicer
y = W * x + b
At this point, we have defined the complete inference graph: the variables W and b, the placeholder x, and the output y = W * x + b.
# Write the graph so we can look at it in TensorBoard
# https://www.tensorflow.org/versions/r0.12/how_tos/summaries_and_tensorboard/index.html
#sw = tf.train.SummaryWriter('summaries/', graph=tf.get_default_graph())
sw = tf.summary.FileWriter('summaries/', graph=tf.get_default_graph())
fig = Image(filename=('/home/tim/img/p5.png'))
fig
# Create a placeholder we'll use later to feed the correct y value into the graph
y_label = tf.placeholder(shape=[None], dtype=tf.float32, name='y_label')
print (y_label)
# Build training graph.
loss = tf.reduce_mean(tf.square(y - y_label)) # Create an operation that calculates loss.
optimizer = tf.train.GradientDescentOptimizer(0.5) # Create an optimizer.
train = optimizer.minimize(loss) # Create an operation that minimizes loss.
# Uncomment the following 3 lines to see what 'loss', 'optimizer' and 'train' are.
print("loss:", loss)
print("optimizer:", optimizer)
print("train:", train)
# Create an operation to initialize all the variables.
#init = tf.initialize_all_variables()
init = tf.global_variables_initializer()
print(init)
sess.run(init)
# Uncomment the following line to see the initial W and b values.
print(sess.run([W, b]))
# Uncomment these lines to test that we can compute a y from an x (without having trained anything).
# x must be a vector, hence [3] not just 3.
x_in = [3]
sess.run(y, feed_dict={x: x_in})
# Calculate loss on the evaluation data before training
def eval_loss():
    return sess.run(loss, feed_dict={x: x_eval, y_label: y_eval})
eval_loss()
# Keep track of how the loss changes, so we can visualize it in TensorBoard
#tf.scalar_summary('loss', loss)
tf.summary.scalar('loss', loss)
#summary_op = tf.merge_all_summaries()
summary_op = tf.summary.merge_all()
# Perform training.
for step in range(201):
    # Run the training op; feed the training data into the graph
    summary_str, _ = sess.run([summary_op, train], feed_dict={x: x_train, y_label: y_train})
    sw.add_summary(summary_str, step)
    # Uncomment the following two lines to watch training happen in real time.
    if step % 20 == 0:
        print(step, sess.run([W, b]))
# Uncomment the following lines to plot the predicted values
pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train, sess.run(y, feed_dict={x: x_train, y_label: y_train}), label="predicted")
pylab.legend()
# Check accuracy on eval data after training
eval_loss()
fig = Image(filename=('/home/tim/img/p5_2.png'))
fig
def predict(x_in): return sess.run(y, feed_dict={x: [x_in]})
# Save the model
saver = tf.train.Saver()
saver.save(sess, '/home/tim/my_checkpoint.ckpt')
# Current prediction
predict(3)
# Reset the model by running the init op again
sess.run(init)
# Prediction after variables reinitialized
predict(3)
saver.restore(sess, '/home/tim/my_checkpoint.ckpt')
# Predictions after variables restored
predict(3)
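As a small sanity check (my addition, not part of the original notebook), we can confirm that the restored variables reproduce the prediction when the computation is done by hand in NumPy:
# Compare the graph's prediction with a manual computation from the restored W and b
w_restored, b_restored = sess.run([W, b])
manual = w_restored * 3 + b_restored
print(predict(3), manual, np.allclose(predict(3), manual))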