
You can follow these steps to make a Linux system ready for AI from scratch.
You need CUDA to use NVIDIA GPUs for general-purpose processing. As NVIDIA puts it: “CUDA is a parallel computing platform and programming model that makes using a GPU for general purpose computing simple and elegant.” More information can be found in NVIDIA’s CUDA documentation.
CUDA
Let’s first check whether CUDA is installed and, if so, which version. We can do that with:
nvcc --version
If nvcc is not available, you can install it with the following command:
sudo apt install nvidia-cuda-toolkit
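Independently of the CUDA toolkit, it is worth confirming that the NVIDIA driver itself can see your GPU. The nvidia-smi utility, which ships with the driver, reports the driver version and the detected GPUs:
nvidia-smi
If this command fails, the driver is not installed correctly, and no CUDA version will be usable until that is fixed.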
If CUDA is not installed, or you want to install a newer version, you can follow the steps on the NVIDIA website.

For example, here are the instructions for installing CUDA Toolkit 11.0, assuming Ubuntu 18.04 on the x86_64 architecture and the runfile (local) installation method. You can find further instructions and options on the CUDA Toolkit download page.
wget http://developer.download.nvidia.com/compute/cuda/11.0.2/local_installers/cuda_11.0.2_450.51.05_linux.run
sudo sh cuda_11.0.2_450.51.05_linux.run
If you would like to install CUDA 10.1 instead, the instructions can be found on the same site. I have found the runfile (local) installation method to be less error-prone.
The last step is to make sure that the path to CUDA is correct. Let’s quickly check the CUDA version again by running the following command:
nvcc --version
If the version is what you expected, we are good to go. Otherwise, let’s fix the path. You can list the installed CUDA versions under the /usr/local/ folder by running the following command:
ls /usr/local/
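On a machine with more than one toolkit installed, the output might look something like this (the folder names below are only an illustration; yours depend on the versions you actually have):
cuda  cuda-10.1  cuda-11.0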
Assuming that you want to use cuda-11.0, you can run the following commands to fix the path:
sudo rm -rf /usr/local/cuda
sudo ln -sf /usr/local/cuda-11.0 /usr/local/cuda
echo 'export CUDA_HOME=/usr/local/cuda' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64' >> ~/.bashrc
echo 'export PATH=$CUDA_HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
Here we remove the existing /usr/local/cuda entry (usually a symlink) and symlink the newly installed cuda-11.0 in its place. We then add the relevant paths to the .bashrc file (assuming you are using bash).
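To confirm that the switch worked, you can check where the symlink points and which nvcc the shell now picks up:
ls -l /usr/local/cuda
which nvcc
nvcc --version
The reported release should now match the version you linked (11.0 in this example).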
Anaconda
To have independent environments for different projects, we can use Anaconda. It is especially useful when you are working on multiple projects with different dependencies and you want to switch between them quickly.
If you do not have pip installed for the system Python 3, you can install it via:
sudo apt install python3-pip
To install Anaconda, you can follow the steps in the official installation guide. For example, to install the 2020.07 distribution for the x86_64 architecture, you can run the following commands:
sudo apt-get install libgl1-mesa-glx libegl1-mesa libxrandr2 libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2 libxi6 libxtst6
wget https://repo.anaconda.com/archive/Anaconda3-2020.07-Linux-x86_64.sh
bash Anaconda3-2020.07-Linux-x86_64.sh
You can choose a different Anaconda release from the same archive repository (repo.anaconda.com/archive).
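Once the installer finishes and you have opened a new terminal (or sourced ~/.bashrc so that conda is on your PATH), you can verify the installation with:
conda --version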
Dependencies
The last step is to create a conda environment and install the dependencies in it.
Let’s open a new terminal and create and activate a new conda environment, named ai, with the following commands:
conda create --name ai python=3.7 --yes
conda activate ai
Please note that you can edit the choice of name and Python version for the environment.
PyTorch can be installed by following the instructions on the official website. For example, assuming CUDA 11.0, you can install the latest version of PyTorch with the following command:
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
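Once the install finishes, a quick way to confirm that PyTorch can see the GPU and was built against the expected CUDA version is:
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
This should print True followed by the CUDA version PyTorch was built with (11.0 here); False usually points back to a driver or path problem.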
TensorFlow can be installed by following the instructions on the official website, simply by using the command:
pip install tensorflow
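You can run a similar check for TensorFlow (this assumes the TensorFlow 2.x package that pip installs by default):
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
An empty list means TensorFlow cannot see the GPU; each TensorFlow release expects specific CUDA and cuDNN versions, so check the GPU support notes on the TensorFlow website if that happens.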
Other dependencies can be installed as follows:
# FastAI
conda install -c fastai fastai
# OpenCV
pip install opencv-python
# Jupyter Notebook
conda install -c conda-forge notebook
# Transformers
pip install transformers
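As a quick sanity check that these packages landed in the ai environment, you can import them and print their versions (a minimal check, assuming the commands above completed without errors):
python -c "import fastai, cv2, transformers; print(fastai.__version__, cv2.__version__, transformers.__version__)"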
You can install further dependencies in the same way, using either conda or pip inside the environment.
To deactivate the current conda environment, you can run:
conda deactivate
Note that conda deactivate takes no arguments; it simply deactivates whichever environment is active.
Then you can switch to another environment, say ai2, with the following command:
conda activate ai2
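If you lose track of which environments you have created, you can list them all (the active one is marked with an asterisk):
conda env list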
Please take a moment to leave a comment or clap to let me know if this article was helpful.
What extra information would have been helpful? What else would you like to learn about?



