This lesson is being piloted (Beta version)

Machine Learning on GPU

Key Points

Introduction
  • GPUs are great for highly-parallel processing.

  • CPUs are more flexible than GPUs.

  • GPUs are most useful for neural network applications in machine learning.

Is a GPU available?
  • Check that a GPU is available before attempting to use it.

  • Not all GPUs are the same.
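These checks can be sketched in PyTorch; this is a minimal example, and the fallback to the CPU means it runs even on machines without a GPU.

```python
import torch

# Fall back to the CPU when no GPU is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# Not all GPUs are the same: inspect the one you were given.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    props = torch.cuda.get_device_properties(0)
    print(f"Total memory: {props.total_memory / 1e9:.1f} GB")
```

Selecting the device once, up front, lets the rest of your code stay identical whether or not a GPU is available.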

Using the GPU
  • Both the model and the data must be moved onto the GPU for training.

  • Data should be moved onto the GPU in batches.
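A minimal sketch of both points, using a small hypothetical `nn.Linear` model: the model's parameters are moved to the device once, while the data is moved batch by batch rather than all at once.

```python
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move the model's parameters onto the device once.
model = nn.Linear(8, 2).to(device)

# Hypothetical dataset: move each batch as you iterate,
# not the whole dataset at once (it may not fit in GPU memory).
data = torch.randn(100, 8)
labels = torch.randint(0, 2, (100,))
batch_size = 32
for i in range(0, len(data), batch_size):
    x = data[i:i + batch_size].to(device)
    y = labels[i:i + batch_size].to(device)
    out = model(x)
```

Moving data in batches also matches how training loops work anyway, so the `.to(device)` calls fit naturally inside the loop.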

Comfort break!
  • You’ll be back.

  • Squirrels may mock you.

Run time comparisons
  • Using a GPU will not improve your model’s predictive performance (e.g. accuracy); it can only affect run time.

  • Using a GPU will improve your run time only under certain circumstances.

  • GPU processes are asynchronous.
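Because GPU kernels launch asynchronously, naive timing can measure only the launch, not the work. A minimal sketch of a fair timing, assuming PyTorch, synchronizes before reading the clock:

```python
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)

start = time.perf_counter()
y = x @ x  # on a GPU, this kernel launch returns immediately
if device.type == "cuda":
    torch.cuda.synchronize()  # wait for the GPU to actually finish
elapsed = time.perf_counter() - start
print(f"matmul took {elapsed * 1e3:.2f} ms")
```

Without the `synchronize()` call, the measured time on a GPU can be misleadingly small.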

Memory considerations
  • GPU memory is not the only consideration when setting the batch size.

  • Memory limits will depend on both allocated and reserved memory.
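The allocated/reserved distinction can be inspected directly; this sketch prints both figures when a GPU is present and falls back to zeros on a CPU-only machine, where CUDA statistics do not apply.

```python
import torch

# allocated: bytes occupied by live tensors
# reserved: bytes held by PyTorch's caching allocator (always >= allocated)
if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    allocated = torch.cuda.memory_allocated()
    reserved = torch.cuda.memory_reserved()
else:
    allocated = reserved = 0  # no CUDA device: nothing allocated or reserved
print(f"allocated: {allocated / 1e6:.1f} MB, reserved: {reserved / 1e6:.1f} MB")
```

Because reserved memory stays held by the caching allocator even after tensors are freed, an out-of-memory error can occur before allocated memory reaches the card's total capacity.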

Comfort break!
  • You’ll be back.

  • They’re the Jedi of the sea.

OPTIONAL: Going Parallel
  • Using multiple GPUs with PyTorch is straightforward.

  • Multi-GPU training is subject to processing bottlenecks.
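As a minimal sketch, wrapping a hypothetical model in `nn.DataParallel` is a one-line change; the guard means the code still runs unchanged on a single GPU or on the CPU.

```python
import torch
from torch import nn

model = nn.Linear(8, 2)  # hypothetical small model

# One line enables data parallelism: each batch is split across GPUs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
out = model(torch.randn(16, 8, device=device))
```

Note that `DataParallel` scatters inputs from and gathers outputs back to a single device every step, which is one of the processing bottlenecks mentioned above; PyTorch's documentation recommends `DistributedDataParallel` for serious multi-GPU training.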