Zafer Arıcan

Transfer Learning Training on Jetson Nano with PyTorch

Updated: Jul 31, 2019

Jetson Nano is a CUDA-capable Single Board Computer (SBC) from Nvidia. It is designed to perform fast deep learning inference on a small-form-factor board. Although it is primarily intended as an edge device for running already-trained models, it is also possible to perform training on a Jetson Nano.


In this post, I explain how to set up a Jetson Nano to perform transfer learning training using PyTorch. I follow the official PyTorch Transfer Learning Tutorial.


I assume that you already have the OS installed and ready to run, and that an internet connection is available, either through an Ethernet cable or via a USB Wi-Fi dongle or an M.2 Wi-Fi card. It is recommended to flash the OS onto a fast microSD card.


We first prepare the OS with enough swap space to avoid memory problems during processing. (Without a swap area, training fails with an out-of-memory error.)


It is possible to add swap space using a swap file. I follow the instructions given here. To create an 8 GB swap area:


sudo dd if=/dev/zero of=/swap bs=1024 count=8M

sudo chmod 600 /swap

sudo mkswap /swap

sudo swapon /swap


To make the swap area persistent across reboots, we add it to /etc/fstab. Using your favourite editor, add the following line to /etc/fstab, then save and exit.


/swap swap swap defaults 0 0


To check whether the swap is active, you can run the command free. It should show the swap area.
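
For example, the following should show a Swap row with roughly 8 GB of total space:

free -h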



Once the swap area is ready, we install the packages required to install PyTorch.


sudo apt install python3-pip

sudo apt install libjpeg-dev (This one is necessary to install Pillow)

pip3 install pillow==5.4.1

pip3 install numpy


❗We install an older version of Pillow because Pillow 6.0 causes an error when performing transfer learning (see this issue).
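
To confirm that the pinned version is in place, you can print it from Python; it should show 5.4.1:

python3 -c "import PIL; print(PIL.__version__)"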



After Pillow is installed, we download and install the PyTorch build specific to the Jetson Nano, using the info given here.


wget https://nvidia.box.com/shared/static/veo87trfaawj5pfwuqvhl6mzc5b55fbj.whl -O torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl

pip3 install numpy torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl


To install torchvision:


git clone https://github.com/pytorch/vision

cd vision

sudo python3 setup.py install


You can check that it is installed and that CUDA is available by running

python3


and run


>>> import torch

>>> print(torch.cuda.is_available())


It should return True.
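
You can also print the name of the GPU that PyTorch detects; on the Jetson Nano this should report the board's integrated Tegra GPU:

>>> print(torch.cuda.get_device_name(0))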



Now that PyTorch is installed, it is time to install Jupyter to run the transfer learning tutorial.


To install Jupyter,


sudo apt install libfreetype6-dev pkg-config libpng-dev (These are installed for matplotlib)

pip3 install jupyter

pip3 install matplotlib


After installation, we need the transfer learning tutorial files. We download the .ipynb file from here.

We also need the data files. We download them from here.

We create a data folder in the same directory as the downloaded notebook and extract hymenoptera_data.zip into it.
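
As a reference, the commands below sketch this step, assuming the hymenoptera_data.zip URL used by the official tutorial; run them from the directory containing the notebook:

mkdir data
cd data
wget https://download.pytorch.org/tutorial/hymenoptera_data.zip
unzip hymenoptera_data.zip
cd ..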



Now we configure Jupyter so that we can access it from another machine.


❗Unfortunately, running the notebook locally and displaying on a monitor connected via HDMI causes a crash during training. (Most likely, either GPU memory or processing power is not enough to support both the display and the training.)


To configure:

jupyter notebook --generate-config

(generates a default config file; this is a one-time step)


jupyter notebook password

(creates a password; this is also a one-time step)


jupyter notebook --ip=0.0.0.0

(although not the most secure way, it will let you connect to the Jupyter server running on the Jetson Nano)



Using a web browser on another machine, visit the Jupyter server.


http://<IP address of the Jetson Nano>:8888


It will ask for the password. Provide the password you set when configuring Jupyter.



Find the .ipynb file you downloaded and open it.


❗❗Before executing, unplug the HDMI cable if it is connected. It causes a crash during training.


❗❗❗Training with CUDA on the Jetson Nano requires a significant amount of power. You should choose a proper power adapter and cable. If the Jetson Nano shuts down (the power LED turns off) during training, it is probably due to an insufficient power supply.


Run each cell.


When executing the cell with

model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=25)

it will train the model. On my Jetson Nano, it took 12 minutes 48 seconds using CUDA. For such a compact machine, I think it is really impressive.


Note that the notebook does not save the retrained model. To save it, you can use


torch.save(model_ft.state_dict(), 'model_retrained')
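
To use the saved weights later, load them back into a model with the same architecture. The snippet below is a minimal sketch, assuming model_ft was the tutorial's fine-tuned ResNet-18 with a two-class (ants/bees) output layer:

import torch
import torch.nn as nn
from torchvision import models

# Rebuild the architecture used in the tutorial: ResNet-18 with a two-class final layer
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)

# Load the saved weights and switch to inference mode
model.load_state_dict(torch.load('model_retrained'))
model.eval()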


This concludes the setup to perform transfer learning training on the Jetson Nano using PyTorch. It also shows that this SBC can be used to try out prototypes before training on larger datasets on more powerful machines.










