1. Preparation:
- JetPack 3.2
- The JetPack 3.2 installer file (download it from the JetPack 3.2 page).
- Java
- Bazel
2. Installation Steps:
Step 1: Install JetPack 3.2
- Download the installer and install it by following the guide at the linked URL.
Step 2: Install Java
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
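You can optionally confirm the JDK is installed and on the PATH before continuing:
java -version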
Step 3: Install the remaining build dependencies
# For Python2
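As an assumption (the exact list depends on your setup and Python version), the packages typically installed at this step before building TensorFlow on Ubuntu are:
sudo apt-get install python-pip python-dev python-numpy python-wheel      # Python 2 (assumed list)
sudo apt-get install python3-pip python3-dev python3-numpy python3-wheel  # Python 3 (assumed list)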
Step 4: Install Bazel
Download the bazel-0.11.1-dist.zip archive
Unzip the package
# Change to the directory where you downloaded the file.
sudo unzip ./bazel-0.11.1-dist.zip -d ./bazel-0.11.1-dist
Enter the bazel-0.11.1-dist directory
cd ./bazel-0.11.1-dist
Start the compilation process by issuing
./compile.sh
If you get the error : Closure Rules requires Bazel >=0.4.5 but was 0.11.1
, see bazel#4834. In every file the error points at, change the version 0.4.5 to 0.0.0, and the build will then succeed.
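One way to make that edit is with sed on each file named in the error output (the path below is only a placeholder; use the file from your own error message):
sed -i 's/0\.4\.5/0.0.0/g' <file-named-in-the-error>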
If you get the error : /home/nvidia/.cache/bazel/_bazel_nvidia/d2751a49dacf4cb14a513ec663770624/external/io_bazel_rules_closure/closure/stylesheets/closure_css_library.bzl:27:13: The function 'set' has been removed in favor of 'depset', please use the latter. You can temporarily refer to the old 'set' constructor from unexecuted code by using --incompatible_disallow_uncalled_set_constructor=false
, see issue bazel#4828. Run the build again and it should continue past this point.
- Copy the built binary to your system bin folder
sudo cp output/bazel /usr/local/bin
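To confirm the freshly built Bazel is the one on your PATH, run:
bazel version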
Step 5: Configure Tensorflow and build
Download TensorFlow 1.5.0
# Download from https://github.com/tensorflow/tensorflow.git
git clone https://github.com/tensorflow/tensorflow.git
cd ./tensorflow
git checkout v1.5.0
# Change some code for the Jetson TX2
sudo gedit tensorflow/stream_executor/cuda/cuda_gpu_executor.cc
# In the function static int TryToReadNumaNode(const string &pci_bus_id, int device_ordinal),
# add the following two lines at the very start of the function body:
LOG(INFO) << "ARM has no NUMA node, hardcoding to return zero";
return 0;
# Then copy cudnn.h
sudo mkdir /usr/lib/aarch64-linux-gnu/include/
sudo cp /usr/include/cudnn.h /usr/lib/aarch64-linux-gnu/include/cudnn.h
Configure Tensorflow
./configure
# Then answer the prompts as you need. These were my options:
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3
Please specify optimization flags to use during compilation [Default is -march=native]:
Do you wish to use jemalloc as the malloc implementation? (Linux only) [Y/n] y
jemalloc enabled on Linux
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] n
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] n
XLA JIT support will be enabled for TensorFlow
Found possible Python library paths:
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python3/dist-packages]
Using python library path: /usr/local/lib/python2.7/dist-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] n
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] y
CUDA support will be enabled for TensorFlow
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 7.0.5
Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the Cudnn version you want to use. [Leave empty to use system default]:
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/lib/aarch64-linux-gnu
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
Extracting Bazel installation...
.......................
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes.
.......................
INFO: All external dependencies fetched successfully.
Configuration finished
Once your configuration is done, run like this:
bazel build -c opt --local_resources 3072,4.0,1.0 --verbose_failures --config=cuda //tensorflow/tools/pip_package:build_pip_package
(--local_resources 3072,4.0,1.0 limits the build to roughly 3 GB of RAM and 4 CPU cores so the TX2 does not run out of memory during compilation.)
Once Tensorflow is compiled, build the pip package:
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
Move the pip wheel from the tmp directory if you want to save it:
mv /tmp/tensorflow_pkg/tensorflow-1.5.0-cp35-cp35mu-linux_aarch64.whl $HOME
Install the pip wheel:
sudo pip3 install $HOME/tensorflow-1.5.0-cp35-cp35mu-linux_aarch64.whl
3. Test TensorFlow
python3
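Inside the interpreter you can import tensorflow and check the version. Equivalently, a quick smoke test from the shell (tf.__version__, tf.Session, and device_lib.list_local_devices() are standard TensorFlow 1.x APIs) is:
python3 -c "import tensorflow as tf; print(tf.__version__)"
python3 -c "import tensorflow as tf; print(tf.Session().run(tf.constant('Hello, TensorFlow')))"
python3 -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"   # should list a GPU device if the CUDA build worked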