1. Hello, Colab!#
1.1. Pre-reading#
None.
1.2. Objective#
Quickly explore our Google Colab environment!
Colab Notebook#
This is just a Jupyter Notebook, intended to be opened in Google Colab.
Jupyter Notebooks mix Markdown and executable Python in the same document. This GitHub Pages website is static, meaning it cannot run code, but you can open this Notebook in Google Colab and run it for free!
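For instance, the cell below is executable Python; once the Notebook is open in Colab, you can run it with Shift+Enter:
print("Hello, Colab!")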
Open in Colab#
From this website, you can click the launch button 🚀 at the top right of the page.
Otherwise, do one of the following:
Link your GitHub account and browse to this file from within Colab
Install the Open in Colab Chrome extension
Change the URL, replacing github.com/ with githubtocolab.com/
Download the file and then upload it to Colab
1.3. Platform and Hardware#
First, let’s check out what operating system our Colab instance is using.
We’ll then step through the hardware.
# Report the OS and kernel the Colab VM is running
import platform
print(platform.platform())
# Report the Python interpreter version
import sys
print("Python version:", sys.version)
CPU#
import os
# Number of CPU cores available to this VM
cpu_cores = os.cpu_count()
print("Number of cores:", cpu_cores)
GPU#
We have to enable the GPU first:
Navigate to Edit → Notebook Settings
Select GPU from the Hardware Accelerator drop-down
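Once a GPU runtime is attached, a quick sanity check (before installing anything) is NVIDIA’s nvidia-smi tool, which ships with Colab’s GPU runtimes; the ! prefix runs it as a shell command:
# Shows the attached GPU model, driver version, and current memory usage
!nvidia-smi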
# The %pip magic installs the package into the notebook kernel's Python environment
%pip install GPUtil
import GPUtil

# Report free, used, and total memory for each attached GPU
gpus = GPUtil.getGPUs()
for gpu in gpus:
    print(
        "GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
            gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil * 100, gpu.memoryTotal
        )
    )
1.4. TensorFlow#
Let’s confirm that we can import TensorFlow and that it can find the GPU.
If this doesn’t work, revisit the previous step and enable the GPU via Edit → Notebook Settings.
import tensorflow as tf

print("Running TensorFlow version", tf.__version__)

device_name = tf.test.gpu_device_name()
if device_name != "/device:GPU:0":
    raise SystemError("GPU device not found")
print("Found GPU at: {}".format(device_name))
TensorFlow speedup on GPU relative to CPU#
This example constructs a typical convolutional neural network layer over a random batch of images and manually places the resulting ops on either the CPU or the GPU to compare execution speed.
import tensorflow as tf
import timeit

device_name = tf.test.gpu_device_name()
if device_name != "/device:GPU:0":
    print(
        "\n\nThis error most likely means that this notebook is not "
        "configured to use a GPU. Change this in Notebook Settings via the "
        "command palette (cmd/ctrl-shift-P) or the Edit menu.\n\n"
    )
    raise SystemError("GPU device not found")
def cpu():
    with tf.device("/cpu:0"):
        random_image_cpu = tf.random.normal((100, 100, 100, 3))
        net_cpu = tf.keras.layers.Conv2D(32, 7)(random_image_cpu)
        return tf.math.reduce_sum(net_cpu)


def gpu():
    with tf.device("/device:GPU:0"):
        random_image_gpu = tf.random.normal((100, 100, 100, 3))
        net_gpu = tf.keras.layers.Conv2D(32, 7)(random_image_gpu)
        return tf.math.reduce_sum(net_gpu)
# We run each op once to warm up; see: https://stackoverflow.com/a/45067900
cpu()
gpu()
# Run each op ten times and compare total wall-clock time.
print(
    "Time (s) to convolve 32x7x7x3 filter over random 100x100x100x3 images "
    "(batch x height x width x channel). Sum of ten runs."
)
print("CPU (s):")
cpu_time = timeit.timeit("cpu()", number=10, setup="from __main__ import cpu")
print(cpu_time)
print("GPU (s):")
gpu_time = timeit.timeit("gpu()", number=10, setup="from __main__ import gpu")
print(gpu_time)
print("GPU speedup over CPU: {}x".format(int(cpu_time / gpu_time)))