NVIDIA provides precompiled TensorFlow pip wheel packages for Jetson devices; they can be found at the Download Center. Although releases come out monthly, a build of the very latest TensorFlow version might not be available yet, in which case we need to build it ourselves. I needed TensorFlow 2.0 and, at the time, no such NVIDIA build was available.
I put here the terminal output from my journey, but the TL;DR version is:
- check first whether a TensorFlow wheel of the desired version is already available at the Download Center [https://developer.download.nvidia.com/compute/redist/jp/v42/tensorflow-gpu/]; if so, use it and save yourself some time and hair pulling
- use the recommended Bazel version: 0.26.1 or lower
- set the proper CUDA compute capabilities when configuring the TF build
There are a couple of online articles which describe the process of building TF on a Jetson:
compile deeplearning libraries for jetson nano
I am posting here the terminal output from my own journey. I made mistakes and learned from them. I hope my experience helps some of you avoid the same mistakes and saves you some time.
So here we go. First steps were these:
$ sudo apt update
$ sudo apt upgrade
$ sudo apt install -y python3-pip python3-venv
$ python3 -m venv ~/python-envs/tensorflow-demo
$ source ~/python-envs/tensorflow-demo/bin/activate
$ pip3 install wheel
$ pip install -U pip six numpy wheel setuptools mock
$ pip install -U keras_applications keras_preprocessing --no-deps
Compiling Bazel (1st round... with the wrong Bazel version)
What is Bazel in TensorFlow? When do I need to build again? - Stack Overflow
gensoft.pasteur.fr/docs/bazel/0.3.0/bazel-user-manual.html
Initially, I downloaded and installed the latest version (but that proved to be a mistake, so don't do it!):
$ wget https://github.com/bazelbuild/bazel/releases/download/1.2.1/bazel-1.2.1-dist.zip
$ unzip bazel-1.2.1-dist.zip -d bazel-1.2.1-dist
$ cd bazel-1.2.1-dist/
$ env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
🍃 Building Bazel from scratch......
🍃 Building Bazel with Bazel.
.WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/tmp/bazel_bf35kYvZ/archive/libblaze.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
DEBUG: /tmp/bazel_bf35kYvZ/out/external/bazel_toolchains/rules/rbe_repo/version_check.bzl:59:9:
Current running Bazel is not a release version and one was not defined explicitly in rbe_autoconfig target. Falling back to '0.28.1'
DEBUG: /tmp/bazel_bf35kYvZ/out/external/bazel_toolchains/rules/rbe_repo/checked_in.bzl:103:9: rbe_ubuntu1804_java11 not using checked in configs as detect_java_home was set to True
DEBUG: /tmp/bazel_bf35kYvZ/out/external/bazel_toolchains/rules/rbe_repo/version_check.bzl:59:9:
Current running Bazel is not a release version and one was not defined explicitly in rbe_autoconfig target. Falling back to '0.28.1'
DEBUG: /tmp/bazel_bf35kYvZ/out/external/bazel_toolchains/rules/rbe_repo/checked_in.bzl:103:9: rbe_ubuntu1604_java8 not using checked in configs as detect_java_home was set to True
DEBUG: /tmp/bazel_bf35kYvZ/out/external/build_bazel_rules_nodejs/internal/common/check_bazel_version.bzl:49:5:
Current Bazel is not a release version, cannot check for compatibility.
DEBUG: /tmp/bazel_bf35kYvZ/out/external/build_bazel_rules_nodejs/internal/common/check_bazel_version.bzl:51:5: Make sure that you are running at least Bazel 0.17.1.
INFO: Analyzed target //src:bazel_nojdk (235 packages loaded, 10473 targets configured).
INFO: Found 1 target...
INFO: From Compiling src/main/cpp/blaze_util_posix.cc:
src/main/cpp/blaze_util_posix.cc: In function 'uint64_t blaze::AcquireLock(const blaze_util::Path&, bool, bool, blaze::BlazeLock*)':
src/main/cpp/blaze_util_posix.cc:680:3: warning: ignoring return value of 'int ftruncate(int, __off_t)', declared with attribute warn_unused_result [-Wunused-result]
(void) ftruncate(lockfd, 0);
^~~~~~~~~~~~~~~~~~~~~~~~~~~
INFO: From Compiling src/main/tools/daemonize.c:
src/main/tools/daemonize.c: In function 'WritePidFile':
src/main/tools/daemonize.c:95:3: warning: ignoring return value of 'write', declared with attribute warn_unused_result [-Wunused-result]
write(pid_done_fd, &dummy, sizeof(dummy));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
INFO: From Generating Java (Immutable) proto_library //src/main/protobuf:execution_statistics_proto:
[libprotobuf WARNING external/com_google_protobuf/src/google/protobuf/compiler/java/java_file.cc:228] The optimize_for = LITE_RUNTIME option is no longer supported by protobuf Java code generator and may generate broken code. It will be ignored by protoc in the future and protoc will always generate full runtime code for Java. To use Java Lite runtime, users should use the Java Lite plugin instead. See:
https://github.com/google/protobuf/blob/master/java/lite.md
INFO: From JavacBootstrap src/main/java/com/google/devtools/build/lib/shell/libshell-skylark.jar [for host]:
warning: Implicitly compiled files were not subject to annotation processing.
Use -proc:none to disable annotation processing or -implicit to specify a policy for implicit compilation.
1 warning
INFO: From JavacBootstrap src/java_tools/singlejar/java/com/google/devtools/build/singlejar/libbootstrap.jar [for host]:
warning: Implicitly compiled files were not subject to annotation processing.
Use -proc:none to disable annotation processing or -implicit to specify a policy for implicit compilation.
1 warning
INFO: From JavacBootstrap src/java_tools/buildjar/java/com/google/devtools/build/buildjar/libskylark-deps.jar [for host]:
warning: Implicitly compiled files were not subject to annotation processing.
Use -proc:none to disable annotation processing or -implicit to specify a policy for implicit compilation.
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning
INFO: From JavacBootstrap src/java_tools/buildjar/java/com/google/devtools/build/buildjar/libbootstrap_VanillaJavaBuilder.jar [for host]:
warning: Implicitly compiled files were not subject to annotation processing.
Use -proc:none to disable annotation processing or -implicit to specify a policy for implicit compilation.
Note: src/java_tools/buildjar/java/com/google/devtools/build/buildjar/VanillaJavaBuilder.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning
Target //src:bazel_nojdk up-to-date:
bazel-bin/src/bazel_nojdk
INFO: Elapsed time: 1662.452s, Critical Path: 339.20s
INFO: 1771 processes: 1524 local, 247 worker.
INFO: Build completed successfully, 1821 total actions
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/tmp/bazel_bf35kYvZ/archive/libblaze.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
DEBUG: /tmp/bazel_bf35kYvZ/out/external/bazel_toolchains/rules/rbe_repo/version_check.bzl:59:9:
Current running Bazel is not a release version and one was not defined explicitly in rbe_autoconfig target. Falling back to '0.28.1'
DEBUG: /tmp/bazel_bf35kYvZ/out/external/bazel_toolchains/rules/rbe_repo/checked_in.bzl:103:9: rbe_ubuntu1804_java11 not using checked in configs as detect_java_home was set to True
DEBUG: /tmp/bazel_bf35kYvZ/out/external/bazel_toolchains/rules/rbe_repo/version_check.bzl:59:9:
Current running Bazel is not a release version and one was not defined explicitly in rbe_autoconfig target. Falling back to '0.28.1'
DEBUG: /tmp/bazel_bf35kYvZ/out/external/bazel_toolchains/rules/rbe_repo/checked_in.bzl:103:9: rbe_ubuntu1604_java8 not using checked in configs as detect_java_home was set to True
DEBUG: /tmp/bazel_bf35kYvZ/out/external/build_bazel_rules_nodejs/internal/common/check_bazel_version.bzl:49:5:
Current Bazel is not a release version, cannot check for compatibility.
DEBUG: /tmp/bazel_bf35kYvZ/out/external/build_bazel_rules_nodejs/internal/common/check_bazel_version.bzl:51:5: Make sure that you are running at least Bazel 0.17.1.
Build successful! Binary is here: /home/nvidia/bazel-1.2.1-dist/output/bazel
(tensorflow-demo) nvidia@nvidia-nano:~/bazel-1.2.1-dist$ sudo cp output/bazel /usr/local/bin
(tensorflow-demo) nvidia@nvidia-nano:~/bazel-1.2.1-dist$ cd output
(tensorflow-demo) nvidia@nvidia-nano:~/bazel-1.2.1-dist/output$ ls
bazel
(tensorflow-demo) nvidia@nvidia-nano:~/bazel-1.2.1-dist/output$ cd ..
(tensorflow-demo) nvidia@nvidia-nano:~/bazel-1.2.1-dist$ bazel version
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
Build label: 1.2.1- (@non-git)
Build target: bazel-out/aarch64-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Sun Dec 8 00:34:35 2019 (1575765275)
Build timestamp: 1575765275
Build timestamp as int: 1575765275
I then downloaded TensorFlow 2.0 and tried to configure its build:
$ wget https://github.com/tensorflow/tensorflow/archive/v2.0.0.zip -O tensorflow-v2.0.0.zip
$ unzip tensorflow-v2.0.0.zip
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads$ cd tensorflow-2.0.0/
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ ls
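This is where my first mistake surfaced: TensorFlow 2.0.0's configure script pins a supported Bazel range (0.24.1 through 0.26.1 in that release, if I remember correctly) and refuses anything newer, so my freshly built 1.2.1 was useless. A rough pre-flight version check along these lines would have saved me the first Bazel build (a sketch; the version numbers are the ones relevant to my setup):

```shell
# Compare an installed Bazel version against TF 2.0's supported range.
# ver_num turns "x.y.z" into a plain integer so versions compare numerically.
ver_num() { echo "$1" | awk -F. '{ print $1*1000000 + $2*1000 + $3 }'; }

installed="1.2.1"            # what 'bazel version' reported after my first build
min="0.24.1"; max="0.26.1"   # range accepted by TF 2.0.0's configure script

if [ "$(ver_num "$installed")" -ge "$(ver_num "$min")" ] && \
   [ "$(ver_num "$installed")" -le "$(ver_num "$max")" ]; then
  echo "bazel $installed is OK for TF 2.0"
else
  echo "bazel $installed is outside the supported range $min..$max"
fi
```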
The configure script rejected Bazel 1.2.1, so I had to compile the recommended version 0.26.1 instead:
$ cd bazel-0.26.1-dist/
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/bazel-0.26.1-dist$ env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/bazel-0.26.1-dist$ sudo cp output/bazel /usr/local/bin
I then ran the TensorFlow build configuration again (but I missed one crucial setting; more on that later):
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ pip install numpy
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ python
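For reference, this is the step I got wrong: ./configure asks "Please specify a list of comma-separated CUDA compute capabilities you want to build with" and defaults to 3.5,7.0, while on a Jetson Nano (Tegra X1) the answer must be 5.3. The prompts can also be answered up front via environment variables; a sketch (the path variables follow the guides of that era and my JetPack layout, so they may differ on your setup or on newer TF versions, which use TF_CUDA_PATHS instead):

```shell
# Answer the relevant ./configure prompts via environment variables before
# running it. TF_CUDA_COMPUTE_CAPABILITIES is the setting I missed.
export TF_NEED_CUDA=1
export TF_CUDA_COMPUTE_CAPABILITIES=5.3             # Tegra X1 on the Jetson Nano
export CUDA_TOOLKIT_PATH=/usr/local/cuda            # JetPack's CUDA location (assumed)
export CUDNN_INSTALL_PATH=/usr/lib/aarch64-linux-gnu
echo "building for compute capability $TF_CUDA_COMPUTE_CAPABILITIES"
```

With these exported, running ./configure in the TensorFlow source tree picks up the values instead of prompting for them.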
Finally, building the TF pip package was successful:
Activating environment:
Checking wheel location:
Installing h5py:
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ pip install --no-cache-dir h5py
If that doesn't help, another suggestion is to install these packages [source]:
$ sudo apt-get install libhdf5-serial-dev hdf5-tools
Installing TensorFlow wheel:
Testing importing Python TensorFlow module:
The build and installation of TensorFlow 2.0 on the Jetson Nano was successful. But all this work was rendered useless by one small but very important detail I had missed: as the logs below show, the Jetson Nano uses the NVIDIA Tegra X1 chip, which has CUDA compute capability 5.3, but I had left the default values of 3.5,7.0 when configuring the TensorFlow build. This made the TF installation unusable on the Jetson Nano.
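The failure mode makes sense: a TF build configured for 3.5,7.0 ships GPU kernels only for those architectures, and a 5.3 device has nothing it can execute. A toy sketch of the check that is effectively happening (my own simplification, not TF code; real CUDA can also JIT forward-compatible PTX, but not from a newer architecture like 7.0 down to 5.3):

```shell
# Does the list of compute capabilities baked into the build contain the
# device's capability?
build_caps="3.5,7.0"   # what I configured (the defaults)
device_cap="5.3"       # Tegra X1, from the logs below
case ",$build_caps," in
  *",$device_cap,"*) echo "compatible" ;;
  *) echo "incompatible: rebuild with TF_CUDA_COMPUTE_CAPABILITIES=$device_cap" ;;
esac
```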
For example, I tried to run my TensorFlow demo, which uses TF2 and OpenCV to perform inference on a webcam stream. Initially, I hit an issue caused by lack of RAM:
(tensorflow-demo) nvidia@nvidia-nano:~/dev/github/tensorflow-demo$ python3 src/tf-demo.py
WARNING:tensorflow:From /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages/tensorflow_core/python/compat/v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
cwd = /home/nvidia/dev/github/tensorflow-demo
ssd_inception_v2_coco_2017_11_17
ssd_inception_v2_coco_2017_11_17/model.ckpt.index
ssd_inception_v2_coco_2017_11_17/model.ckpt.meta
ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb
ssd_inception_v2_coco_2017_11_17/model.ckpt.data-00000-of-00001
ssd_inception_v2_coco_2017_11_17/saved_model
ssd_inception_v2_coco_2017_11_17/saved_model/saved_model.pb
ssd_inception_v2_coco_2017_11_17/saved_model/variables
ssd_inception_v2_coco_2017_11_17/checkpoint
2019-12-20 02:01:47.933465: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2019-12-20 02:01:47.990591: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:47.990764: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2019-12-20 02:01:47.997118: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 02:01:47.997336: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-20 02:01:47.997454: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-12-20 02:01:48.106108: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-12-20 02:01:48.241369: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-12-20 02:01:48.315745: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-12-20 02:01:48.316148: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-20 02:01:48.316895: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.317712: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.317948: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-20 02:01:48.342062: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-12-20 02:01:48.342623: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2e9105f0 executing computations on platform Host. Devices:
2019-12-20 02:01:48.342708: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
2019-12-20 02:01:48.455941: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.456237: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2e8b5740 executing computations on platform CUDA. Devices:
2019-12-20 02:01:48.456292: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-12-20 02:01:48.456869: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.456991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2019-12-20 02:01:48.457236: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 02:01:48.457317: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-20 02:01:48.457375: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-12-20 02:01:48.457462: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-12-20 02:01:48.457543: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-12-20 02:01:48.457621: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-12-20 02:01:48.457676: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-20 02:01:48.457925: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.458199: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.458278: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-20 02:01:48.458442: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 02:01:48.460445: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-20 02:01:48.460499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2019-12-20 02:01:48.460529: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2019-12-20 02:01:48.461862: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.462230: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.462432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 150 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2019-12-20 02:01:50.693255: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 20127744 exceeds 10% of system memory.
2019-12-20 02:01:50.702178: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 10063872 exceeds 10% of system memory.
2019-12-20 02:01:51.519460: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 20127744 exceeds 10% of system memory.
2019-12-20 02:01:51.528819: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 10063872 exceeds 10% of system memory.
2019-12-20 02:01:52.341950: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 20127744 exceeds 10% of system memory.
^C^Z
[1]+ Stopped python3 src/tf-demo.py
Nvidia Jetson Nano: Custom Object Detection from scratch using Tensorflow and OpenCV
I then changed the underlying model to the lighter ssdlite_mobilenet_v2 and tried again:
(tensorflow-demo) nvidia@nvidia-nano:~/dev/github/tensorflow-demo$ python3 src/tf-demo.py
WARNING:tensorflow:From /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages/tensorflow_core/python/compat/v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
cwd = /home/nvidia/dev/github/tensorflow-demo
ssdlite_mobilenet_v2_coco_2018_05_09/checkpoint
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.data-00000-of-00001
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.meta
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.index
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model/saved_model.pb
ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config
ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb
ssdlite_mobilenet_v2_coco_2018_05_09
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model/variables
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model
2019-12-20 12:15:04.017619: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2019-12-20 12:15:04.079977: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.080137: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2019-12-20 12:15:04.089744: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 12:15:04.090144: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-20 12:15:04.090346: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-12-20 12:15:04.199821: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-12-20 12:15:04.319588: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-12-20 12:15:04.387395: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-12-20 12:15:04.388051: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-20 12:15:04.388775: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.389548: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.389765: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-20 12:15:04.416122: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-12-20 12:15:04.416643: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x62e4ae0 executing computations on platform Host. Devices:
2019-12-20 12:15:04.416701: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
2019-12-20 12:15:04.533731: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.534034: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x62b6580 executing computations on platform CUDA. Devices:
2019-12-20 12:15:04.534090: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-12-20 12:15:04.534772: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.534908: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2019-12-20 12:15:04.535168: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 12:15:04.535268: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-20 12:15:04.535337: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-12-20 12:15:04.535429: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-12-20 12:15:04.535521: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-12-20 12:15:04.535613: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-12-20 12:15:04.535714: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-20 12:15:04.536014: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.536334: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.536425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-20 12:15:04.536676: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 12:15:04.538900: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-20 12:15:04.538957: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2019-12-20 12:15:04.538987: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2019-12-20 12:15:04.541158: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.541517: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.541671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 70 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2019-12-20 12:18:51.429526: F tensorflow/stream_executor/cuda/cuda_driver.cc:175] Check failed: err == cudaSuccess || err == cudaErrorInvalidValue Unexpected CUDA error: unknown error
Aborted (core dumped)
I should have built TF with compute capability 5.3 only; I later found a couple of examples online which confirm this.
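The correct value can be read straight off the device-probe log lines above ("name: NVIDIA Tegra X1 major: 5 minor: 3 ..."); a small sketch that extracts it:

```shell
# Extract "major: X minor: Y" from a TF device-probe log line and turn it
# into the value to pass to the build configuration.
log_line="name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216"
cap=$(echo "$log_line" | sed -n 's/.*major: \([0-9]*\) minor: \([0-9]*\).*/\1.\2/p')
echo "TF_CUDA_COMPUTE_CAPABILITIES=$cap"
```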
Build time: Sun Dec 8 00:34:35 2019 (1575765275)
Build timestamp: 1575765275
Build timestamp as int: 1575765275
Configuring the TensorFlow Build (1st round, with the wrong Bazel version...)
I downloaded TensorFlow 2.0 and tried to configure its build:
$ wget https://github.com/tensorflow/tensorflow/archive/v2.0.0.zip -O tensorflow-v2.0.0.zip
$ unzip tensorflow-v2.0.0.zip
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads$ ls
tensorflow-2.0.0 tensorflow-v2.0.0.zip
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads$ cd tensorflow-2.0.0/
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ ls
ACKNOWLEDGMENTS arm_compiler.BUILD BUILD CODEOWNERS configure.cmd CONTRIBUTING.md ISSUE_TEMPLATE.md models.BUILD RELEASE.md tensorflow tools
ADOPTERS.md AUTHORS CODE_OF_CONDUCT.md configure configure.py ISSUES.md LICENSE README.md SECURITY.md third_party WORKSPACE
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ ./configure
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 1.2.1- (@non-git) installed.
Please downgrade your bazel installation to version 0.26.1 or lower to build TensorFlow! To downgrade: download the installer for the old version (from https://github.com/bazelbuild/bazel/releases) then run the installer.
This is why we should not use the latest Bazel version but 0.26.1 or earlier. So I downloaded and built the correct version.
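For the record, the accepted Bazel range is encoded in TF's configure script, so it can be checked before spending half an hour compiling Bazel from source. A sketch (the exact function or variable name may differ between TF releases; the path assumes the source tree from the download step):

```shell
# Show where the TF 2.0.0 source tree pins its accepted Bazel versions
# (the check that produced the "Please downgrade" message above).
grep -n -i "bazel_version" tensorflow-2.0.0/configure.py
```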
Compiling Bazel (2nd round, with the correct Bazel version...)
$ wget https://github.com/bazelbuild/bazel/releases/download/0.26.1/bazel-0.26.1-dist.zip
$ unzip bazel-0.26.1-dist.zip -d bazel-0.26.1-dist
$ cd bazel-0.26.1-dist/
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads$ cd bazel-0.26.1-dist/
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/bazel-0.26.1-dist$ env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
🍃 Building Bazel from scratch......
🍃 Building Bazel with Bazel.
.WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/tmp/bazel_tZK4adoz/archive/libblaze.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
DEBUG: /tmp/bazel_tZK4adoz/out/external/build_bazel_rules_nodejs/internal/common/check_bazel_version.bzl:49:5:
Current Bazel is not a release version, cannot check for compatibility.
DEBUG: /tmp/bazel_tZK4adoz/out/external/build_bazel_rules_nodejs/internal/common/check_bazel_version.bzl:51:5: Make sure that you are running at least Bazel 0.17.1.
INFO: Analyzed target //src:bazel_nojdk (211 packages loaded, 10327 targets configured).
INFO: Found 1 target...
INFO: From Compiling src/main/cpp/blaze_util_posix.cc:
src/main/cpp/blaze_util_posix.cc: In function 'uint64_t blaze::AcquireLock(const string&, bool, bool, blaze::BlazeLock*)':
src/main/cpp/blaze_util_posix.cc:650:3: warning: ignoring return value of 'int ftruncate(int, __off_t)', declared with attribute warn_unused_result [-Wunused-result]
(void) ftruncate(lockfd, 0);
^~~~~~~~~~~~~~~~~~~~~~~~~~~
INFO: From Generating Java (Immutable) proto_library //src/main/protobuf:execution_statistics_proto:
[libprotobuf WARNING external/com_google_protobuf/src/google/protobuf/compiler/java/java_file.cc:228] The optimize_for = LITE_RUNTIME option is no longer supported by protobuf Java code generator and may generate broken code. It will be ignored by protoc in the future and protoc will always generate full runtime code for Java. To use Java Lite runtime, users should use the Java Lite plugin instead. See:
https://github.com/google/protobuf/blob/master/java/lite.md
INFO: From JavacBootstrap src/main/java/com/google/devtools/build/lib/shell/libshell-skylark.jar [for host]:
warning: Implicitly compiled files were not subject to annotation processing.
Use -proc:none to disable annotation processing or -implicit to specify a policy for implicit compilation.
1 warning
INFO: From Compiling src/main/tools/daemonize.c:
src/main/tools/daemonize.c: In function 'WritePidFile':
src/main/tools/daemonize.c:95:3: warning: ignoring return value of 'write', declared with attribute warn_unused_result [-Wunused-result]
write(pid_done_fd, &dummy, sizeof(dummy));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
INFO: From JavacBootstrap src/java_tools/singlejar/java/com/google/devtools/build/singlejar/libbootstrap.jar [for host]:
warning: Implicitly compiled files were not subject to annotation processing.
Use -proc:none to disable annotation processing or -implicit to specify a policy for implicit compilation.
1 warning
INFO: From JavacBootstrap src/java_tools/buildjar/java/com/google/devtools/build/buildjar/libskylark-deps.jar [for host]:
warning: Implicitly compiled files were not subject to annotation processing.
Use -proc:none to disable annotation processing or -implicit to specify a policy for implicit compilation.
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning
INFO: From JavacBootstrap src/java_tools/buildjar/java/com/google/devtools/build/buildjar/libbootstrap_VanillaJavaBuilder.jar [for host]:
warning: Implicitly compiled files were not subject to annotation processing.
Use -proc:none to disable annotation processing or -implicit to specify a policy for implicit compilation.
Note: src/java_tools/buildjar/java/com/google/devtools/build/buildjar/VanillaJavaBuilder.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning
Target //src:bazel_nojdk up-to-date:
bazel-bin/src/bazel_nojdk
INFO: Elapsed time: 1667.082s, Critical Path: 321.41s
INFO: 1751 processes: 1516 local, 235 worker.
INFO: Build completed successfully, 1794 total actions
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/tmp/bazel_tZK4adoz/archive/libblaze.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
DEBUG: /tmp/bazel_tZK4adoz/out/external/build_bazel_rules_nodejs/internal/common/check_bazel_version.bzl:49:5:
Current Bazel is not a release version, cannot check for compatibility.
DEBUG: /tmp/bazel_tZK4adoz/out/external/build_bazel_rules_nodejs/internal/common/check_bazel_version.bzl:51:5: Make sure that you are running at least Bazel 0.17.1.
Build successful! Binary is here: /home/nvidia/Downloads/bazel-0.26.1-dist/output/bazel
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/bazel-0.26.1-dist$ sudo cp output/bazel /usr/local/bin
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/bazel-0.26.1-dist$ bazel version
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
Build label: 0.26.1- (@non-git)
Build target: bazel-out/aarch64-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Sun Dec 8 23:18:31 2019 (1575847111)
Build timestamp: 1575847111
Build timestamp as int: 1575847111
Configuring the TensorFlow Build (2nd round, with the correct Bazel version but with the default CUDA compute capabilities, which are NOT compatible with the Jetson Nano...)
I then ran the TensorFlow build configuration again (but neglected to override the default CUDA compute capabilities):
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ ./configure
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/home/nvidia/.cache/bazel/_bazel_nvidia/install/2987a38454f9fc9ae002dc0ee19a4d18/_embedded_binaries/A-server.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.26.1- (@non-git) installed.
Please specify the location of python. [Default is /home/nvidia/python-envs/tensorflow-demo/bin/python]:
Found possible Python library paths:
/home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages
Please input the desired Python library path to use. Default is [/home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages]
Do you wish to build TensorFlow with XLA JIT support? [Y/n]: y
XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.
Do you wish to build TensorFlow with TensorRT support? [y/N]: y
TensorRT support will be enabled for TensorFlow.
Found CUDA 10.0 in:
/usr/local/cuda/lib64
/usr/local/cuda/include
Found cuDNN 7 in:
/usr/lib/aarch64-linux-gnu
/usr/include
Found TensorRT 5 in:
/usr/lib/aarch64-linux-gnu
/usr/include/aarch64-linux-gnu
Please specify a list of comma-separated CUDA compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 3.5,7.0]:
Do you want to use clang as CUDA compiler? [y/N]: n
nvcc will be used as CUDA compiler.
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:
Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=gdr # Build with GDR support.
--config=verbs # Build with libverbs support.
--config=ngraph # Build with Intel nGraph support.
--config=numa # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v2 # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=noignite # Disable Apache Ignite support.
--config=nokafka # Disable Apache Kafka support.
--config=nonccl # Disable NVIDIA NCCL support.
Configuration finished
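A note on the compute-capability prompt above: I accepted the default 3.5,7.0, which does not match the Nano. The Nano's Maxwell GPU is compute capability 5.3, and configure also picks the value up from an environment variable, so it can be set before running ./configure (a sketch; TF_CUDA_COMPUTE_CAPABILITIES is the variable TF's configure.py reads):

```shell
# The Jetson Nano's GPU is compute capability 5.3; exporting this
# pre-answers the configure prompt instead of accepting 3.5,7.0.
export TF_CUDA_COMPUTE_CAPABILITIES=5.3
echo "$TF_CUDA_COMPUTE_CAPABILITIES"
```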
bazel build \
--local_ram_resources=2048 \
--config=opt \
--config=cuda \
--config=noignite \
--config=nokafka \
--config=noaws \
--config=nohdfs \
--config=nonccl \
//tensorflow/tools/pip_package:build_pip_package
Before starting the build, I checked the Python environment and found that NumPy was not installed yet:
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ python
Python 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> include numpy
File "<stdin>", line 1
include numpy
^
SyntaxError: invalid syntax
>>> import numpy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'numpy'
>>> quit()
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ pip install numpy
Collecting numpy
Installing collected packages: numpy
Successfully installed numpy-1.17.4
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ python
Python 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.__file__
'/home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages/numpy/__init__.py'
>>> quit()
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ bazel build --local_ram_resources=2048 --config=opt --config=cuda --config=noignite --config=nokafka --config=noaws --config=nohdfs --config=nonccl --action_env PYTHON_BIN_PATH=/home/nvidia/python-envs/tensorflow-demo/bin/python //tensorflow/tools/pip_package:build_pip_package
...
external/com_google_absl/absl/strings/string_view.h(495): warning: expression has no effect
...
ERROR: /home/nvidia/Downloads/tensorflow-2.0.0/tensorflow/core/kernels/BUILD:3717:1: C++ compilation of rule '//tensorflow/core/kernels:cwise_op' failed (Exit 4)
gcc: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 8816.556s, Critical Path: 493.61s
INFO: 3103 processes: 3103 local.
FAILED: Build did NOT complete successfully
The compiler was killed because it ran out of memory. A note I found online (it is about building TDLib, but the gcc memory problem is the same): "gcc needs at least 6 GB of RAM to compile TDLib. You can use clang instead, which needs less than 2 GB. Please note that you need to delete the whole build directory and recreate it to be able to change compiler." I chose to keep gcc and retry with Bazel's resource limits lowered instead.
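Another option on the 4 GB Nano, rather than throttling the build, is to give gcc swap space. A sketch (the path and the 4G size are my suggestions, not something from the original session; swap on an SD card is slow, but it can let the compile finish):

```shell
# Add a swap file so cc1plus is not OOM-killed on the 4 GB Nano.
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h    # the Swap row should now show the extra 4 GB
```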
$ bazel build --local_ram_resources=2048 --config=opt --config=cuda --config=noignite --config=nokafka --config=noaws --config=nohdfs --config=nonccl --action_env PYTHON_BIN_PATH=/home/nvidia/python-envs/tensorflow-demo/bin/python --local_resources 1024.0,0.5,0.5 //tensorflow/tools/pip_package:build_pip_package
...
ERROR: /home/nvidia/Downloads/tensorflow-2.0.0/tensorflow/python/keras/api/BUILD:29:1: Executing genrule //tensorflow/python/keras/api:keras_python_api_gen_compat_v1 failed (Exit 1)
Traceback (most recent call last):
File "/home/nvidia/.cache/bazel/_bazel_nvidia/0c75cc683915a1db7f4f8a4da90fb148/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/python/keras/api/create_tensorflow.python_api_1_keras_python_api_gen_compat_v1.runfiles/org_tensorflow/tensorflow/python/tools/api/generator/create_python_api.py", line 27, in <module>
from tensorflow.python.tools.api.generator import doc_srcs
File "/home/nvidia/.cache/bazel/_bazel_nvidia/0c75cc683915a1db7f4f8a4da90fb148/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/python/keras/api/create_tensorflow.python_api_1_keras_python_api_gen_compat_v1.runfiles/org_tensorflow/tensorflow/python/__init__.py", line 83, in <module>
from tensorflow.python import keras
File "/home/nvidia/.cache/bazel/_bazel_nvidia/0c75cc683915a1db7f4f8a4da90fb148/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/python/keras/api/create_tensorflow.python_api_1_keras_python_api_gen_compat_v1.runfiles/org_tensorflow/tensorflow/python/keras/__init__.py", line 32, in <module>
from tensorflow.python.keras import datasets
File "/home/nvidia/.cache/bazel/_bazel_nvidia/0c75cc683915a1db7f4f8a4da90fb148/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/python/keras/api/create_tensorflow.python_api_1_keras_python_api_gen_compat_v1.runfiles/org_tensorflow/tensorflow/python/keras/datasets/__init__.py", line 25, in <module>
from tensorflow.python.keras.datasets import imdb
File "/home/nvidia/.cache/bazel/_bazel_nvidia/0c75cc683915a1db7f4f8a4da90fb148/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/python/keras/api/create_tensorflow.python_api_1_keras_python_api_gen_compat_v1.runfiles/org_tensorflow/tensorflow/python/keras/datasets/imdb.py", line 25, in <module>
from tensorflow.python.keras.preprocessing.sequence import _remove_long_seq
File "/home/nvidia/.cache/bazel/_bazel_nvidia/0c75cc683915a1db7f4f8a4da90fb148/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/python/keras/api/create_tensorflow.python_api_1_keras_python_api_gen_compat_v1.runfiles/org_tensorflow/tensorflow/python/keras/preprocessing/__init__.py", line 21, in <module>
import keras_preprocessing
ModuleNotFoundError: No module named 'keras_preprocessing'
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 245049.805s, Critical Path: 1606.15s
INFO: 16019 processes: 16019 local.
FAILED: Build did NOT complete successfully
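This failure surfaced only after the build had run for almost three days (245049 s), and the cause was a missing pure-Python package. A pre-flight import check would have caught it immediately; a minimal sketch, with the module list inferred from the tracebacks above:

```shell
# Pre-flight check: try importing each package the TF build's API
# generator needs before committing to a multi-day build.
for m in numpy six keras_preprocessing; do
  python3 -c "import $m" 2>/dev/null && echo "$m: ok" || echo "$m: MISSING"
done
```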
I tried to fix this error with:
nvidia@nvidia-nano:~/python-envs/tensorflow-demo/bin$ source ~/python-envs/tensorflow-demo/bin/activate
(tensorflow-demo) nvidia@nvidia-nano:~$ pip --version
pip 9.0.1 from /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (python 3.6)
(tensorflow-demo) nvidia@nvidia-nano:~$ pip3 --version
pip 9.0.1 from /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (python 3.6)
(tensorflow-demo) nvidia@nvidia-nano:~$ pip list
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
numpy (1.17.4)
pip (9.0.1)
pkg-resources (0.0.0)
setuptools (39.0.1)
wheel (0.33.6)
(tensorflow-demo) nvidia@nvidia-nano:~$ pip install keras_preprocessing
Collecting keras_preprocessing
Using cached https://files.pythonhosted.org/packages/28/6a/8c1f62c37212d9fc441a7e26736df51ce6f0e38455816445471f10da4f0a/Keras_Preprocessing-1.1.0-py2.py3-none-any.whl
Collecting six>=1.9.0 (from keras_preprocessing)
Using cached https://files.pythonhosted.org/packages/65/26/32b8464df2a97e6dd1b656ed26b2c194606c16fe163c695a992b36c11cdf/six-1.13.0-py2.py3-none-any.whl
Requirement already satisfied: numpy>=1.9.1 in ./python-envs/tensorflow-demo/lib/python3.6/site-packages (from keras_preprocessing)
Installing collected packages: six, keras-preprocessing
Successfully installed keras-preprocessing-1.1.0 six-1.13.0
(tensorflow-demo) nvidia@nvidia-nano:~$ pip list
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
Keras-Preprocessing (1.1.0)
numpy (1.17.4)
pip (9.0.1)
pkg-resources (0.0.0)
setuptools (39.0.1)
six (1.13.0)
wheel (0.33.6)
Finally, building the TF pip package was successful:
nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ bazel build --local_ram_resources=2048 --config=opt --config=cuda --config=noignite --config=nokafka --config=noaws --config=nohdfs --config=nonccl --action_env PYTHON_BIN_PATH=/home/nvidia/python-envs/tensorflow-demo/bin/python --local_resources 2048.0,1,1 //tensorflow/tools/pip_package:build_pip_package
...
INFO: From Executing genrule //tensorflow/lite/python/testdata:gather_string:
2019-12-16 22:53:45.890906: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1 operators, 4 arrays (0 quantized)
2019-12-16 22:53:45.891223: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1 operators, 4 arrays (0 quantized)
2019-12-16 22:53:45.891336: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 1 operators, 3 arrays (0 quantized)
2019-12-16 22:53:45.891404: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 1 operators, 3 arrays (0 quantized)
2019-12-16 22:53:45.891448: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 1 operators, 3 arrays (0 quantized)
2019-12-16 22:53:45.891495: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 0 bytes, theoretical optimal value: 0 bytes.
2019-12-16 22:53:45.891549: I tensorflow/lite/toco/toco_tooling.cc:436] Estimated count of arithmetic ops: 0 billion (note that a multiply-add is counted as 2 ops).
Target //tensorflow/tools/pip_package:build_pip_package up-to-date:
bazel-bin/tensorflow/tools/pip_package/build_pip_package
INFO: Elapsed time: 1906.713s, Critical Path: 137.05s
INFO: 316 processes: 316 local.
INFO: Build completed successfully, 330 total actions
nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ ls
ACKNOWLEDGMENTS bazel-bin bazel-testlogs configure ISSUES.md README.md third_party
ADOPTERS.md bazel-genfiles BUILD configure.cmd ISSUE_TEMPLATE.md RELEASE.md tools
arm_compiler.BUILD bazel-out CODE_OF_CONDUCT.md configure.py LICENSE SECURITY.md WORKSPACE
AUTHORS bazel-tensorflow-2.0.0 CODEOWNERS CONTRIBUTING.md models.BUILD tensorflow
Building TF wheel:
nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ ./bazel-bin/tensorflow/tools/pip_package/build_pip_package --gpu /tmp/tensorflow_pkg
Tue 17 Dec 00:54:41 GMT 2019 : === Preparing sources in dir: /tmp/tmp.4a99vToIPm
~/Downloads/tensorflow-2.0.0 ~/Downloads/tensorflow-2.0.0
~/Downloads/tensorflow-2.0.0
/tmp/tmp.4a99vToIPm/tensorflow/include ~/Downloads/tensorflow-2.0.0
~/Downloads/tensorflow-2.0.0
Tue 17 Dec 00:55:16 GMT 2019 : === Building wheel
warning: no files found matching 'README'
warning: no files found matching '*.pyd' under directory '*'
warning: no files found matching '*.pd' under directory '*'
warning: no files found matching '*.dylib' under directory '*'
warning: no files found matching '*.dll' under directory '*'
warning: no files found matching '*.lib' under directory '*'
warning: no files found matching '*.csv' under directory '*'
warning: no files found matching '*.h' under directory 'tensorflow_core/include/tensorflow'
warning: no files found matching '*' under directory 'tensorflow_core/include/third_party'
Tue 17 Dec 00:59:16 GMT 2019 : === Output wheel file is in: /tmp/tensorflow_pkg
Installing the TensorFlow wheel
pip was not available outside the environment:
nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ pip --version
Traceback (most recent call last):
File "/home/nvidia/.local/bin/pip", line 7, in <module>
from pip._internal.main import main
ModuleNotFoundError: No module named 'pip._internal'
Activating environment:
nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ source ~/python-envs/tensorflow-demo/bin/activate
Checking pip version:
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ pip --version
pip 9.0.1 from /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (python 3.6)
Checking wheel location:
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ ls -la /tmp/tensorflow_pkg/
total 172116
drwxr-xr-x 2 nvidia nvidia 4096 Dec 17 00:59 .
drwxrwxrwt 19 root root 16384 Dec 17 01:03 ..
-rw-r--r-- 1 nvidia nvidia 176218283 Dec 17 00:59 tensorflow_gpu-2.0.0-cp36-cp36m-linux_aarch64.whl
Installing TF wheel (1st attempt):
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ pip install --user /tmp/tensorflow_pkg/tensorflow_gpu-2.0.0-cp36-cp36m-linux_aarch64.whl
Processing /tmp/tensorflow_pkg/tensorflow_gpu-2.0.0-cp36-cp36m-linux_aarch64.whl
Collecting protobuf>=3.6.1 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/4e/26/1517e42a81de4f28ed0f3cdadb628c1b72f3a28f38323a05e251f5df0a29/protobuf-3.11.1-py2.py3-none-any.whl (434kB)
100% |████████████████████████████████| 440kB 895kB/s
Collecting grpcio>=1.8.6 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/e4/60/40c4d2b61d9e4349bc89445deb8d04cc000b10a63446c42d311e0d21d127/grpcio-1.25.0.tar.gz (15.4MB)
100% |████████████████████████████████| 15.4MB 32kB/s
Collecting tensorflow-estimator<2.1.0,>=2.0.0 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/fc/08/8b927337b7019c374719145d1dceba21a8bb909b93b1ad6f8fb7d22c1ca1/tensorflow_estimator-2.0.1-py2.py3-none-any.whl (449kB)
100% |████████████████████████████████| 450kB 804kB/s
Collecting keras-applications>=1.0.8 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/71/e3/19762fdfc62877ae9102edf6342d71b28fbfd9dea3d2f96a882ce099b03f/Keras_Applications-1.0.8-py3-none-any.whl
Requirement already satisfied: keras-preprocessing>=1.0.5 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting astor>=0.6.0 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/c3/88/97eef84f48fa04fbd6750e62dcceafba6c63c81b7ac1420856c8dcc0a3f9/astor-0.8.1-py2.py3-none-any.whl
Collecting wrapt>=1.11.1 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/23/84/323c2415280bc4fc880ac5050dddfb3c8062c2552b34c2e512eb4aa68f79/wrapt-1.11.2.tar.gz
Requirement already satisfied: six>=1.10.0 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting termcolor>=1.1.0 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/8a/48/a76be51647d0eb9f10e2a4511bf3ffb8cc1e6b14e9e4fab46173aa79f981/termcolor-1.1.0.tar.gz
Requirement already satisfied: numpy<2.0,>=1.16.0 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting opt-einsum>=2.3.2 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/b8/83/755bd5324777875e9dff19c2e59daec837d0378c09196634524a3d7269ac/opt_einsum-3.1.0.tar.gz (69kB)
100% |████████████████████████████████| 71kB 2.1MB/s
Collecting tensorboard<2.1.0,>=2.0.0 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/76/54/99b9d5d52d5cb732f099baaaf7740403e83fe6b0cedde940fabd2b13d75a/tensorboard-2.0.2-py3-none-any.whl (3.8MB)
100% |████████████████████████████████| 3.8MB 121kB/s
Requirement already satisfied: wheel>=0.26 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting google-pasta>=0.1.6 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/c3/fd/1e86bc4837cc9a3a5faf3db9b1854aa04ad35b5f381f9648fbe81a6f94e4/google_pasta-0.1.8-py3-none-any.whl (57kB)
100% |████████████████████████████████| 61kB 2.3MB/s
Collecting absl-py>=0.7.0 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/3b/72/e6e483e2db953c11efa44ee21c5fdb6505c4dffa447b4263ca8af6676b62/absl-py-0.8.1.tar.gz (103kB)
100% |████████████████████████████████| 112kB 1.4MB/s
Collecting gast==0.2.2 (from tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/4e/35/11749bf99b2d4e3cceb4d55ca22590b0d7c2c62b9de38ac4a4a7f4687421/gast-0.2.2.tar.gz
Requirement already satisfied: setuptools in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from protobuf>=3.6.1->tensorflow-gpu==2.0.0)
Collecting h5py (from keras-applications>=1.0.8->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/5f/97/a58afbcf40e8abecededd9512978b4e4915374e5b80049af082f49cebe9a/h5py-2.10.0.tar.gz (301kB)
100% |████████████████████████████████| 307kB 1.1MB/s
Collecting google-auth-oauthlib<0.5,>=0.4.1 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/7b/b8/88def36e74bee9fce511c9519571f4e485e890093ab7442284f4ffaef60b/google_auth_oauthlib-0.4.1-py2.py3-none-any.whl
Collecting werkzeug>=0.11.15 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/ce/42/3aeda98f96e85fd26180534d36570e4d18108d62ae36f87694b476b83d6f/Werkzeug-0.16.0-py2.py3-none-any.whl (327kB)
100% |████████████████████████████████| 327kB 1.1MB/s
Collecting requests<3,>=2.21.0 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/51/bd/23c926cd341ea6b7dd0b2a00aba99ae0f828be89d72b2190f27c11d4b7fb/requests-2.22.0-py2.py3-none-any.whl (57kB)
100% |████████████████████████████████| 61kB 3.4MB/s
Collecting markdown>=2.6.8 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/c0/4e/fd492e91abdc2d2fcb70ef453064d980688762079397f779758e055f6575/Markdown-3.1.1-py2.py3-none-any.whl (87kB)
100% |████████████████████████████████| 92kB 2.3MB/s
Collecting google-auth<2,>=1.6.3 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/17/83/3cb31033e1ea0bdb8991b6ef327a5bf4960bd3dd31ff355881bfb0ddf199/google_auth-1.9.0-py2.py3-none-any.whl (75kB)
100% |████████████████████████████████| 81kB 2.5MB/s
Collecting requests-oauthlib>=0.7.0 (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/a3/12/b92740d845ab62ea4edf04d2f4164d82532b5a0b03836d4d4e71c6f3d379/requests_oauthlib-1.3.0-py2.py3-none-any.whl
Collecting certifi>=2017.4.17 (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/b9/63/df50cac98ea0d5b006c55a399c3bf1db9da7b5a24de7890bc9cfd5dd9e99/certifi-2019.11.28-py2.py3-none-any.whl (156kB)
100% |████████████████████████████████| 163kB 1.7MB/s
Collecting chardet<3.1.0,>=3.0.2 (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
100% |████████████████████████████████| 143kB 1.6MB/s
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/b4/40/a9837291310ee1ccc242ceb6ebfd9eb21539649f193a7c8c86ba15b98539/urllib3-1.25.7-py2.py3-none-any.whl (125kB)
100% |████████████████████████████████| 133kB 2.0MB/s
Collecting idna<2.9,>=2.5 (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/14/2c/cd551d81dbe15200be1cf41cd03869a46fe7226e7450af7a6545bfc474c9/idna-2.8-py2.py3-none-any.whl (58kB)
100% |████████████████████████████████| 61kB 1.7MB/s
Collecting rsa<4.1,>=3.1.4 (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/02/e5/38518af393f7c214357079ce67a317307936896e961e35450b70fad2a9cf/rsa-4.0-py2.py3-none-any.whl
Collecting pyasn1-modules>=0.2.1 (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/52/50/bb4cefca37da63a0c52218ba2cb1b1c36110d84dcbae8aa48cd67c5e95c2/pyasn1_modules-0.2.7-py2.py3-none-any.whl (131kB)
100% |████████████████████████████████| 133kB 1.7MB/s
Collecting cachetools<3.2,>=2.0.0 (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/2f/a6/30b0a0bef12283e83e58c1d6e7b5aabc7acfc4110df81a4471655d33e704/cachetools-3.1.1-py2.py3-none-any.whl
Collecting oauthlib>=3.0.0 (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/05/57/ce2e7a8fa7c0afb54a0581b14a65b56e62b5759dbc98e80627142b8a3704/oauthlib-3.1.0-py2.py3-none-any.whl (147kB)
100% |████████████████████████████████| 153kB 1.8MB/s
Collecting pyasn1>=0.1.3 (from rsa<4.1,>=3.1.4->google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Downloading https://files.pythonhosted.org/packages/62/1e/a94a8d635fa3ce4cfc7f506003548d0a2447ae76fd5ca53932970fe3053f/pyasn1-0.4.8-py2.py3-none-any.whl (77kB)
100% |████████████████████████████████| 81kB 1.3MB/s
Building wheels for collected packages: grpcio, wrapt, termcolor, opt-einsum, absl-py, gast, h5py
Running setup.py bdist_wheel for grpcio ... done
Stored in directory: /home/nvidia/.cache/pip/wheels/aa/e9/e5/cc06a54786ef3ad42c41c1c0b9ff2db4b51a5b1f7584810030
Running setup.py bdist_wheel for wrapt ... done
Stored in directory: /home/nvidia/.cache/pip/wheels/d7/de/2e/efa132238792efb6459a96e85916ef8597fcb3d2ae51590dfd
Running setup.py bdist_wheel for termcolor ... done
Stored in directory: /home/nvidia/.cache/pip/wheels/7c/06/54/bc84598ba1daf8f970247f550b175aaaee85f68b4b0c5ab2c6
Running setup.py bdist_wheel for opt-einsum ... done
Stored in directory: /home/nvidia/.cache/pip/wheels/2c/b1/94/43d03e130b929aae7ba3f8d15cbd7bc0d1cb5bb38a5c721833
Running setup.py bdist_wheel for absl-py ... done
Stored in directory: /home/nvidia/.cache/pip/wheels/a7/15/a0/0a0561549ad11cdc1bc8fa1191a353efd30facf6bfb507aefc
Running setup.py bdist_wheel for gast ... done
Stored in directory: /home/nvidia/.cache/pip/wheels/5c/2e/7e/a1d4d4fcebe6c381f378ce7743a3ced3699feb89bcfbdadadd
Running setup.py bdist_wheel for h5py ... error
Complete output from command /home/nvidia/python-envs/tensorflow-demo/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-ab3n2bu5/h5py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmp6jzfjus4pip-wheel- --python-tag cp36:
Unable to find pgen, not compiling formal grammar.
Compiling /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Plex/Scanners.py because it changed.
Compiling /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Plex/Actions.py because it changed.
Compiling /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Compiler/Scanning.py because it changed.
Compiling /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Compiler/Visitor.py because it changed.
Compiling /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Compiler/FlowControl.py because it changed.
Compiling /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Runtime/refnanny.pyx because it changed.
Compiling /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Compiler/FusedNode.py because it changed.
Compiling /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Tempita/_tempita.py because it changed.
[1/8] Cythonizing /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Compiler/FlowControl.py
[2/8] Cythonizing /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Compiler/FusedNode.py
[3/8] Cythonizing /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Compiler/Scanning.py
[4/8] Cythonizing /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Compiler/Visitor.py
[5/8] Cythonizing /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Plex/Actions.py
[6/8] Cythonizing /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Plex/Scanners.py
[7/8] Cythonizing /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Runtime/refnanny.pyx
[8/8] Cythonizing /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Tempita/_tempita.py
warning: no files found matching 'Doc/*'
warning: no files found matching '*.pyx' under directory 'Cython/Debugger/Tests'
warning: no files found matching '*.pxd' under directory 'Cython/Debugger/Tests'
warning: no files found matching '*.pxd' under directory 'Cython/Utility'
warning: no files found matching 'pyximport/README'
Installed /tmp/pip-build-ab3n2bu5/h5py/.eggs/Cython-0.29.14-py3.6-linux-aarch64.egg
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.6
creating build/lib.linux-aarch64-3.6/h5py
copying h5py/ipy_completer.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/version.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/__init__.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/h5py_warnings.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/highlevel.py -> build/lib.linux-aarch64-3.6/h5py
creating build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/compat.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/__init__.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/files.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/group.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/base.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/datatype.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/selections2.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/filters.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/selections.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/vds.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/attrs.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/dims.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/dataset.py -> build/lib.linux-aarch64-3.6/h5py/_hl
creating build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_file_image.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_slicing.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5f.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_attribute_create.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5pl.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_threads.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/__init__.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_file.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_base.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5p.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_group.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_completions.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dataset_swmr.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_objects.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dimension_scales.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dtype.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dataset_getitem.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_deprecation.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5d_direct_chunk.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_filters.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5t.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_attrs_data.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dims_dimensionproxy.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_file2.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_attrs.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dataset.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_datatype.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/common.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_selections.py -> build/lib.linux-aarch64-3.6/h5py/tests
creating build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/test_lowlevel_vds.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/__init__.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/test_highlevel_vds.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/test_virtual_source.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
running build_ext
Loading library to get version: libhdf5.so
error: libhdf5.so: cannot open shared object file: No such file or directory
----------------------------------------
Failed building wheel for h5py
Running setup.py clean for h5py
Successfully built grpcio wrapt termcolor opt-einsum absl-py gast
Failed to build h5py
Installing collected packages: protobuf, grpcio, tensorflow-estimator, h5py, keras-applications, astor, wrapt, termcolor, opt-einsum, oauthlib, certifi, chardet, urllib3, idna, requests, requests-oauthlib, pyasn1, rsa, pyasn1-modules, cachetools, google-auth, google-auth-oauthlib, werkzeug, markdown, absl-py, tensorboard, google-pasta, gast, tensorflow-gpu
Running setup.py install for h5py ... error
Complete output from command /home/nvidia/python-envs/tensorflow-demo/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-ab3n2bu5/h5py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-r7_8h_bb-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/nvidia/python-envs/tensorflow-demo/include/site/python3.6/h5py --user --prefix=:
running install
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.6
creating build/lib.linux-aarch64-3.6/h5py
copying h5py/ipy_completer.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/version.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/__init__.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/h5py_warnings.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/highlevel.py -> build/lib.linux-aarch64-3.6/h5py
creating build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/compat.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/__init__.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/files.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/group.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/base.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/datatype.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/selections2.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/filters.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/selections.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/vds.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/attrs.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/dims.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/dataset.py -> build/lib.linux-aarch64-3.6/h5py/_hl
creating build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_file_image.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_slicing.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5f.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_attribute_create.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5pl.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_threads.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/__init__.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_file.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_base.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5p.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_group.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_completions.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dataset_swmr.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_objects.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dimension_scales.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dtype.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dataset_getitem.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_deprecation.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5d_direct_chunk.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_filters.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5t.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_attrs_data.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dims_dimensionproxy.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_file2.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_attrs.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dataset.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_datatype.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/common.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_selections.py -> build/lib.linux-aarch64-3.6/h5py/tests
creating build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/test_lowlevel_vds.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/__init__.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/test_highlevel_vds.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/test_virtual_source.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
running build_ext
Loading library to get version: libhdf5.so
error: libhdf5.so: cannot open shared object file: No such file or directory
----------------------------------------
Command "/home/nvidia/python-envs/tensorflow-demo/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-ab3n2bu5/h5py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-r7_8h_bb-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/nvidia/python-envs/tensorflow-demo/include/site/python3.6/h5py --user --prefix=" failed with error code 1 in /tmp/pip-build-ab3n2bu5/h5py/
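Both the wheel build and the fallback install of h5py failed for the same root cause: on aarch64 there is no prebuilt h5py wheel, it compiles from source, and the HDF5 development library (`libhdf5.so`) was missing. A hedged fix, with package names as on Ubuntu 18.04 / JetPack and the usual Debian multiarch location for the serial HDF5 build (verify the path on your image):

```shell
# Install the HDF5 development headers and library that h5py links against.
sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev
# Debian keeps the serial HDF5 build in a multiarch subdirectory, so point
# h5py's setup at it explicitly before building:
export HDF5_DIR=/usr/lib/aarch64-linux-gnu/hdf5/serial
pip install h5py
# ...then retry the TensorFlow wheel:
pip install /tmp/tensorflow_pkg/tensorflow_gpu-2.0.0-cp36-cp36m-linux_aarch64.whl
```

One more observation: the failing command above ends in `--user --prefix=`, because `pip install --user` was used inside a virtual environment. Inside a venv, `--user` is unnecessary and can conflict with the venv's prefix, so it is safer to drop it.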
[7/8] Cythonizing /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Runtime/refnanny.pyx
[8/8] Cythonizing /tmp/easy_install-vttxj99l/Cython-0.29.14/Cython/Tempita/_tempita.py
warning: no files found matching 'Doc/*'
warning: no files found matching '*.pyx' under directory 'Cython/Debugger/Tests'
warning: no files found matching '*.pxd' under directory 'Cython/Debugger/Tests'
warning: no files found matching '*.pxd' under directory 'Cython/Utility'
warning: no files found matching 'pyximport/README'
Installed /tmp/pip-build-ab3n2bu5/h5py/.eggs/Cython-0.29.14-py3.6-linux-aarch64.egg
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.6
creating build/lib.linux-aarch64-3.6/h5py
copying h5py/ipy_completer.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/version.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/__init__.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/h5py_warnings.py -> build/lib.linux-aarch64-3.6/h5py
copying h5py/highlevel.py -> build/lib.linux-aarch64-3.6/h5py
creating build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/compat.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/__init__.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/files.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/group.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/base.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/datatype.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/selections2.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/filters.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/selections.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/vds.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/attrs.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/dims.py -> build/lib.linux-aarch64-3.6/h5py/_hl
copying h5py/_hl/dataset.py -> build/lib.linux-aarch64-3.6/h5py/_hl
creating build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_file_image.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_slicing.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5f.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_attribute_create.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5pl.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_threads.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/__init__.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_file.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_base.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5p.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_group.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_completions.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dataset_swmr.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_objects.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dimension_scales.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dtype.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dataset_getitem.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_deprecation.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5d_direct_chunk.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_filters.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_h5t.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_attrs_data.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dims_dimensionproxy.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_file2.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_attrs.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_dataset.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_datatype.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/common.py -> build/lib.linux-aarch64-3.6/h5py/tests
copying h5py/tests/test_selections.py -> build/lib.linux-aarch64-3.6/h5py/tests
creating build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/test_lowlevel_vds.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/__init__.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/test_highlevel_vds.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
copying h5py/tests/test_vds/test_virtual_source.py -> build/lib.linux-aarch64-3.6/h5py/tests/test_vds
running build_ext
Loading library to get version: libhdf5.so
error: libhdf5.so: cannot open shared object file: No such file or directory
----------------------------------------
Failed building wheel for h5py
Running setup.py clean for h5py
Successfully built grpcio wrapt termcolor opt-einsum absl-py gast
Failed to build h5py
Installing collected packages: protobuf, grpcio, tensorflow-estimator, h5py, keras-applications, astor, wrapt, termcolor, opt-einsum, oauthlib, certifi, chardet, urllib3, idna, requests, requests-oauthlib, pyasn1, rsa, pyasn1-modules, cachetools, google-auth, google-auth-oauthlib, werkzeug, markdown, absl-py, tensorboard, google-pasta, gast, tensorflow-gpu
Running setup.py install for h5py ... error
Complete output from command /home/nvidia/python-envs/tensorflow-demo/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-ab3n2bu5/h5py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-r7_8h_bb-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/nvidia/python-envs/tensorflow-demo/include/site/python3.6/h5py --user --prefix=:
running install
running build
running build_py
[... same "creating"/"copying" build_py output as in the bdist_wheel attempt above ...]
running build_ext
Loading library to get version: libhdf5.so
error: libhdf5.so: cannot open shared object file: No such file or directory
----------------------------------------
Command "/home/nvidia/python-envs/tensorflow-demo/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-ab3n2bu5/h5py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-r7_8h_bb-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/nvidia/python-envs/tensorflow-demo/include/site/python3.6/h5py --user --prefix=" failed with error code 1 in /tmp/pip-build-ab3n2bu5/h5py/
These GitHub issues discuss the same libhdf5 error:
https://github.com/lhelontra/tensorflow-on-arm/issues/40
https://github.com/tensorflow/tensorflow/issues/23967
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ sudo apt-get install python-h5py
...this didn't help; the same error appeared. (python-h5py installs a prebuilt package for the system Python, but pip's source build inside the virtualenv still needs the libhdf5.so development symlink, which only libhdf5-dev provides.)
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ pip install h5py
Collecting h5py
Using cached https://files.pythonhosted.org/packages/5f/97/a58afbcf40e8abecededd9512978b4e4915374e5b80049af082f49cebe9a/h5py-2.10.0.tar.gz
Requirement already satisfied: numpy>=1.7 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from h5py)
Requirement already satisfied: six in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from h5py)
Building wheels for collected packages: h5py
Running setup.py bdist_wheel for h5py ... error
Complete output from command /home/nvidia/python-envs/tensorflow-demo/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-t4s7mh8t/h5py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpq8wtc61mpip-wheel- --python-tag cp36:
[... same Cython bootstrap and h5py "copying" output as in the first attempt above ...]
running build_ext
Loading library to get version: libhdf5.so
error: libhdf5.so: cannot open shared object file: No such file or directory
----------------------------------------
Failed building wheel for h5py
Running setup.py clean for h5py
Failed to build h5py
Installing collected packages: h5py
Running setup.py install for h5py ... error
Complete output from command /home/nvidia/python-envs/tensorflow-demo/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-t4s7mh8t/h5py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-9yf5wx07-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/nvidia/python-envs/tensorflow-demo/include/site/python3.6/h5py:
running install
running build
running build_py
[... same "creating"/"copying" build_py output as in the first attempt above ...]
running build_ext
Loading library to get version: libhdf5.so
error: libhdf5.so: cannot open shared object file: No such file or directory
----------------------------------------
Command "/home/nvidia/python-envs/tensorflow-demo/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-t4s7mh8t/h5py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-9yf5wx07-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/nvidia/python-envs/tensorflow-demo/include/site/python3.6/h5py" failed with error code 1 in /tmp/pip-build-t4s7mh8t/h5py/
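Before reaching for the fix, it is worth confirming that the dynamic linker really cannot see libhdf5, since that is exactly what the `cannot open shared object file` error means. A minimal stdlib sketch (the `has_shared_lib` helper is my own, not part of h5py or its build):

```python
import ctypes.util

def has_shared_lib(name):
    """Return True if the dynamic linker can locate lib<name>.so."""
    return ctypes.util.find_library(name) is not None

# h5py's setup.py fails with "cannot open shared object file"
# precisely when this check comes back False for "hdf5":
print(has_shared_lib("hdf5"))
```

If it prints `False`, installing the HDF5 development package (as below) should supply the missing `libhdf5.so` symlink.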
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ sudo apt-get install libhdf5-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
apt-clone archdetect-deb bogl-bterm busybox-static cryptsetup-bin dpkg-repack gir1.2-timezonemap-1.0 gir1.2-xkl-1.0 grub-common
kde-window-manager kinit kio kpackagetool5 kwayland-data kwin-common kwin-data kwin-x11 libdebian-installer4 libkdecorations2-5v5
libkdecorations2private5v5 libkf5activities5 libkf5attica5 libkf5completion-data libkf5completion5 libkf5declarative-data
libkf5declarative5 libkf5doctools5 libkf5globalaccel-data libkf5globalaccel5 libkf5globalaccelprivate5 libkf5idletime5
libkf5jobwidgets-data libkf5jobwidgets5 libkf5kcmutils-data libkf5kcmutils5 libkf5kiocore5 libkf5kiontlm5 libkf5kiowidgets5
libkf5newstuff-data libkf5newstuff5 libkf5newstuffcore5 libkf5package-data libkf5package5 libkf5plasma5 libkf5quickaddons5 libkf5solid5
libkf5solid5-data libkf5sonnet5-data libkf5sonnetcore5 libkf5sonnetui5 libkf5textwidgets-data libkf5textwidgets5 libkf5waylandclient5
libkf5waylandserver5 libkf5xmlgui-bin libkf5xmlgui-data libkf5xmlgui5 libkscreenlocker5 libkwin4-effect-builtins1 libkwineffects11
libkwinglutils11 libkwinxrenderutils11 libqgsttools-p1 libqt5designer5 libqt5help5 libqt5multimedia5 libqt5multimedia5-plugins
libqt5multimediaquick-p5 libqt5multimediawidgets5 libqt5positioning5 libqt5qml5 libqt5quick5 libqt5quickwidgets5 libqt5sensors5
libqt5webchannel5 libqt5webkit5 libxcb-composite0 libxcb-cursor0 libxcb-damage0 os-prober python3-dbus.mainloop.pyqt5 python3-icu
python3-pam python3-pyqt5 python3-pyqt5.qtsvg python3-pyqt5.qtwebkit python3-sip qml-module-org-kde-kquickcontrolsaddons
qml-module-qtmultimedia qml-module-qtquick2 rdate tasksel tasksel-data
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
hdf5-helpers libaec-dev libhdf5-cpp-100
Suggested packages:
libhdf5-doc
The following NEW packages will be installed
hdf5-helpers libaec-dev libhdf5-cpp-100 libhdf5-dev
0 to upgrade, 4 to newly install, 0 to remove and 2 not to upgrade.
Need to get 2,631 kB of archives.
After this operation, 14.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 hdf5-helpers arm64 1.10.0-patch1+docs-4 [12.3 kB]
Get:2 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 libhdf5-cpp-100 arm64 1.10.0-patch1+docs-4 [102 kB]
Get:3 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 libaec-dev arm64 0.3.2-2 [14.8 kB]
Get:4 http://ports.ubuntu.com/ubuntu-ports bionic/universe arm64 libhdf5-dev arm64 1.10.0-patch1+docs-4 [2,502 kB]
Fetched 2,631 kB in 1s (2,601 kB/s)
debconf: Delaying package configuration, since apt-utils is not installed.
Selecting previously unselected package hdf5-helpers.
(Reading database ... 143693 files and directories currently installed.)
Preparing to unpack .../hdf5-helpers_1.10.0-patch1+docs-4_arm64.deb ...
Unpacking hdf5-helpers (1.10.0-patch1+docs-4) ...
Selecting previously unselected package libhdf5-cpp-100:arm64.
Preparing to unpack .../libhdf5-cpp-100_1.10.0-patch1+docs-4_arm64.deb ...
Unpacking libhdf5-cpp-100:arm64 (1.10.0-patch1+docs-4) ...
Selecting previously unselected package libaec-dev:arm64.
Preparing to unpack .../libaec-dev_0.3.2-2_arm64.deb ...
Unpacking libaec-dev:arm64 (0.3.2-2) ...
Selecting previously unselected package libhdf5-dev.
Preparing to unpack .../libhdf5-dev_1.10.0-patch1+docs-4_arm64.deb ...
Unpacking libhdf5-dev (1.10.0-patch1+docs-4) ...
Setting up libhdf5-cpp-100:arm64 (1.10.0-patch1+docs-4) ...
Setting up libaec-dev:arm64 (0.3.2-2) ...
Setting up hdf5-helpers (1.10.0-patch1+docs-4) ...
Setting up libhdf5-dev (1.10.0-patch1+docs-4) ...
update-alternatives: using /usr/lib/aarch64-linux-gnu/pkgconfig/hdf5-serial.pc to provide /usr/lib/aarch64-linux-gnu/pkgconfig/hdf5.pc (hdf5.pc) in auto mode
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
With libhdf5-dev in place, installing h5py now succeeds:
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ pip install --no-cache-dir h5py
Collecting h5py
Downloading https://files.pythonhosted.org/packages/5f/97/a58afbcf40e8abecededd9512978b4e4915374e5b80049af082f49cebe9a/h5py-2.10.0.tar.gz (301kB)
100% |████████████████████████████████| 307kB 2.7MB/s
Requirement already satisfied: numpy>=1.7 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from h5py)
Requirement already satisfied: six in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from h5py)
Installing collected packages: h5py
Running setup.py install for h5py ... done
Successfully installed h5py-2.10.0
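A quick way to double-check that the package is now visible to the virtualenv's interpreter, without triggering a full import (a small stdlib sketch; `is_installed` is a helper name I made up):

```python
import importlib.util

def is_installed(package):
    """Return True if the interpreter can locate the given top-level package."""
    return importlib.util.find_spec(package) is not None

print(is_installed("h5py"))
```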
If this doesn't help, another suggestion is to install these packages [source]:
$ sudo apt-get install libhdf5-serial-dev hdf5-tools
Finally, installing the TensorFlow wheel we built earlier:
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ pip install --user /tmp/tensorflow_pkg/tensorflow_gpu-2.0.0-cp36-cp36m-linux_aarch64.whl
Processing /tmp/tensorflow_pkg/tensorflow_gpu-2.0.0-cp36-cp36m-linux_aarch64.whl
Collecting astor>=0.6.0 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/c3/88/97eef84f48fa04fbd6750e62dcceafba6c63c81b7ac1420856c8dcc0a3f9/astor-0.8.1-py2.py3-none-any.whl
Requirement already satisfied: six>=1.10.0 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting grpcio>=1.8.6 (from tensorflow-gpu==2.0.0)
Collecting google-pasta>=0.1.6 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/c3/fd/1e86bc4837cc9a3a5faf3db9b1854aa04ad35b5f381f9648fbe81a6f94e4/google_pasta-0.1.8-py3-none-any.whl
Collecting absl-py>=0.7.0 (from tensorflow-gpu==2.0.0)
Collecting opt-einsum>=2.3.2 (from tensorflow-gpu==2.0.0)
Collecting tensorboard<2.1.0,>=2.0.0 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/76/54/99b9d5d52d5cb732f099baaaf7740403e83fe6b0cedde940fabd2b13d75a/tensorboard-2.0.2-py3-none-any.whl
Collecting termcolor>=1.1.0 (from tensorflow-gpu==2.0.0)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Requirement already satisfied: wheel>=0.26 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting keras-applications>=1.0.8 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/71/e3/19762fdfc62877ae9102edf6342d71b28fbfd9dea3d2f96a882ce099b03f/Keras_Applications-1.0.8-py3-none-any.whl
Requirement already satisfied: numpy<2.0,>=1.16.0 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting wrapt>=1.11.1 (from tensorflow-gpu==2.0.0)
Collecting gast==0.2.2 (from tensorflow-gpu==2.0.0)
Collecting tensorflow-estimator<2.1.0,>=2.0.0 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/fc/08/8b927337b7019c374719145d1dceba21a8bb909b93b1ad6f8fb7d22c1ca1/tensorflow_estimator-2.0.1-py2.py3-none-any.whl
Collecting protobuf>=3.6.1 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/4e/26/1517e42a81de4f28ed0f3cdadb628c1b72f3a28f38323a05e251f5df0a29/protobuf-3.11.1-py2.py3-none-any.whl
Collecting markdown>=2.6.8 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/c0/4e/fd492e91abdc2d2fcb70ef453064d980688762079397f779758e055f6575/Markdown-3.1.1-py2.py3-none-any.whl
Collecting werkzeug>=0.11.15 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/ce/42/3aeda98f96e85fd26180534d36570e4d18108d62ae36f87694b476b83d6f/Werkzeug-0.16.0-py2.py3-none-any.whl
Collecting google-auth<2,>=1.6.3 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/17/83/3cb31033e1ea0bdb8991b6ef327a5bf4960bd3dd31ff355881bfb0ddf199/google_auth-1.9.0-py2.py3-none-any.whl
Collecting requests<3,>=2.21.0 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/51/bd/23c926cd341ea6b7dd0b2a00aba99ae0f828be89d72b2190f27c11d4b7fb/requests-2.22.0-py2.py3-none-any.whl
Will not install to the user site because it will lack sys.path precedence to setuptools in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages
Installing the TensorFlow wheel (3rd attempt, successful, this time without --user):
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ pip install /tmp/tensorflow_pkg/tensorflow_gpu-2.0.0-cp36-cp36m-linux_aarch64.whl
Processing /tmp/tensorflow_pkg/tensorflow_gpu-2.0.0-cp36-cp36m-linux_aarch64.whl
Collecting gast==0.2.2 (from tensorflow-gpu==2.0.0)
Collecting tensorflow-estimator<2.1.0,>=2.0.0 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/fc/08/8b927337b7019c374719145d1dceba21a8bb909b93b1ad6f8fb7d22c1ca1/tensorflow_estimator-2.0.1-py2.py3-none-any.whl
Requirement already satisfied: six>=1.10.0 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting tensorboard<2.1.0,>=2.0.0 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/76/54/99b9d5d52d5cb732f099baaaf7740403e83fe6b0cedde940fabd2b13d75a/tensorboard-2.0.2-py3-none-any.whl
Collecting absl-py>=0.7.0 (from tensorflow-gpu==2.0.0)
Collecting opt-einsum>=2.3.2 (from tensorflow-gpu==2.0.0)
Requirement already satisfied: wheel>=0.26 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting protobuf>=3.6.1 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/4e/26/1517e42a81de4f28ed0f3cdadb628c1b72f3a28f38323a05e251f5df0a29/protobuf-3.11.1-py2.py3-none-any.whl
Collecting keras-applications>=1.0.8 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/71/e3/19762fdfc62877ae9102edf6342d71b28fbfd9dea3d2f96a882ce099b03f/Keras_Applications-1.0.8-py3-none-any.whl
Collecting wrapt>=1.11.1 (from tensorflow-gpu==2.0.0)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting termcolor>=1.1.0 (from tensorflow-gpu==2.0.0)
Requirement already satisfied: numpy<2.0,>=1.16.0 in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from tensorflow-gpu==2.0.0)
Collecting google-pasta>=0.1.6 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/c3/fd/1e86bc4837cc9a3a5faf3db9b1854aa04ad35b5f381f9648fbe81a6f94e4/google_pasta-0.1.8-py3-none-any.whl
Collecting astor>=0.6.0 (from tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/c3/88/97eef84f48fa04fbd6750e62dcceafba6c63c81b7ac1420856c8dcc0a3f9/astor-0.8.1-py2.py3-none-any.whl
Collecting grpcio>=1.8.6 (from tensorflow-gpu==2.0.0)
Collecting requests<3,>=2.21.0 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/51/bd/23c926cd341ea6b7dd0b2a00aba99ae0f828be89d72b2190f27c11d4b7fb/requests-2.22.0-py2.py3-none-any.whl
Collecting google-auth-oauthlib<0.5,>=0.4.1 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/7b/b8/88def36e74bee9fce511c9519571f4e485e890093ab7442284f4ffaef60b/google_auth_oauthlib-0.4.1-py2.py3-none-any.whl
Collecting werkzeug>=0.11.15 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/ce/42/3aeda98f96e85fd26180534d36570e4d18108d62ae36f87694b476b83d6f/Werkzeug-0.16.0-py2.py3-none-any.whl
Collecting markdown>=2.6.8 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/c0/4e/fd492e91abdc2d2fcb70ef453064d980688762079397f779758e055f6575/Markdown-3.1.1-py2.py3-none-any.whl
Collecting setuptools>=41.0.0 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/54/28/c45d8b54c1339f9644b87663945e54a8503cfef59cf0f65b3ff5dd17cf64/setuptools-42.0.2-py2.py3-none-any.whl
Collecting google-auth<2,>=1.6.3 (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/17/83/3cb31033e1ea0bdb8991b6ef327a5bf4960bd3dd31ff355881bfb0ddf199/google_auth-1.9.0-py2.py3-none-any.whl
Requirement already satisfied: h5py in /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages (from keras-applications>=1.0.8->tensorflow-gpu==2.0.0)
Collecting certifi>=2017.4.17 (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/b9/63/df50cac98ea0d5b006c55a399c3bf1db9da7b5a24de7890bc9cfd5dd9e99/certifi-2019.11.28-py2.py3-none-any.whl
Collecting chardet<3.1.0,>=3.0.2 (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl
Collecting idna<2.9,>=2.5 (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/14/2c/cd551d81dbe15200be1cf41cd03869a46fe7226e7450af7a6545bfc474c9/idna-2.8-py2.py3-none-any.whl
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests<3,>=2.21.0->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/b4/40/a9837291310ee1ccc242ceb6ebfd9eb21539649f193a7c8c86ba15b98539/urllib3-1.25.7-py2.py3-none-any.whl
Collecting requests-oauthlib>=0.7.0 (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/a3/12/b92740d845ab62ea4edf04d2f4164d82532b5a0b03836d4d4e71c6f3d379/requests_oauthlib-1.3.0-py2.py3-none-any.whl
Collecting cachetools<3.2,>=2.0.0 (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/2f/a6/30b0a0bef12283e83e58c1d6e7b5aabc7acfc4110df81a4471655d33e704/cachetools-3.1.1-py2.py3-none-any.whl
Collecting pyasn1-modules>=0.2.1 (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/52/50/bb4cefca37da63a0c52218ba2cb1b1c36110d84dcbae8aa48cd67c5e95c2/pyasn1_modules-0.2.7-py2.py3-none-any.whl
Collecting rsa<4.1,>=3.1.4 (from google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/02/e5/38518af393f7c214357079ce67a317307936896e961e35450b70fad2a9cf/rsa-4.0-py2.py3-none-any.whl
Collecting oauthlib>=3.0.0 (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/05/57/ce2e7a8fa7c0afb54a0581b14a65b56e62b5759dbc98e80627142b8a3704/oauthlib-3.1.0-py2.py3-none-any.whl
Collecting pyasn1<0.5.0,>=0.4.6 (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.1.0,>=2.0.0->tensorflow-gpu==2.0.0)
Using cached https://files.pythonhosted.org/packages/62/1e/a94a8d635fa3ce4cfc7f506003548d0a2447ae76fd5ca53932970fe3053f/pyasn1-0.4.8-py2.py3-none-any.whl
Installing collected packages: gast, tensorflow-estimator, absl-py, certifi, chardet, idna, urllib3, requests, setuptools, cachetools, pyasn1, pyasn1-modules, rsa, google-auth, oauthlib, requests-oauthlib, google-auth-oauthlib, werkzeug, markdown, grpcio, protobuf, tensorboard, opt-einsum, keras-applications, wrapt, termcolor, google-pasta, astor, tensorflow-gpu
Found existing installation: setuptools 39.0.1
Uninstalling setuptools-39.0.1:
Successfully uninstalled setuptools-39.0.1
Successfully installed absl-py-0.8.1 astor-0.8.1 cachetools-3.1.1 certifi-2019.11.28 chardet-3.0.4 gast-0.2.2 google-auth-1.9.0 google-auth-oauthlib-0.4.1 google-pasta-0.1.8 grpcio-1.25.0 idna-2.8 keras-applications-1.0.8 markdown-3.1.1 oauthlib-3.1.0 opt-einsum-3.1.0 protobuf-3.11.1 pyasn1-0.4.8 pyasn1-modules-0.2.7 requests-2.22.0 requests-oauthlib-1.3.0 rsa-4.0 setuptools-42.0.2 tensorboard-2.0.2 tensorflow-estimator-2.0.1 tensorflow-gpu-2.0.0 termcolor-1.1.0 urllib3-1.25.7 werkzeug-0.16.0 wrapt-1.11.2
(tensorflow-demo) nvidia@nvidia-nano:~/Downloads/tensorflow-2.0.0$ python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
Traceback (most recent call last):
File "/home/nvidia/Downloads/tensorflow-2.0.0/tensorflow/python/platform/self_check.py", line 25, in <module>
from tensorflow.python.platform import build_info
ImportError: cannot import name 'build_info'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/nvidia/Downloads/tensorflow-2.0.0/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "/home/nvidia/Downloads/tensorflow-2.0.0/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/home/nvidia/Downloads/tensorflow-2.0.0/tensorflow/python/pywrap_tensorflow.py", line 25, in <module>
from tensorflow.python.platform import self_check
File "/home/nvidia/Downloads/tensorflow-2.0.0/tensorflow/python/platform/self_check.py", line 27, in <module>
raise ImportError("Could not import tensorflow. Do not import tensorflow "
ImportError: Could not import tensorflow. Do not import tensorflow from its source directory; change directory to outside the TensorFlow source tree, and relaunch your Python interpreter from there.
Testing the TensorFlow module import again, this time from outside the source tree:
(tensorflow-demo) nvidia@nvidia-nano:~$ python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
2019-12-17 08:32:42.533728: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-12-17 08:32:42.534829: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x19783840 executing computations on platform Host. Devices:
2019-12-17 08:32:42.534896: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
2019-12-17 08:32:42.654341: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2019-12-17 08:32:42.828922: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-17 08:32:42.829220: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x18315c30 executing computations on platform CUDA. Devices:
2019-12-17 08:32:42.829303: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-12-17 08:32:42.829859: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-17 08:32:42.829999: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2019-12-17 08:32:42.850630: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-17 08:32:42.938075: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-17 08:32:43.022347: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-12-17 08:32:43.135783: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-12-17 08:32:43.258571: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-12-17 08:32:43.329671: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-12-17 08:32:43.556961: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-17 08:32:43.557371: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-17 08:32:43.557769: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-17 08:32:43.557932: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-17 08:32:43.558165: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-17 08:32:43.559929: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-17 08:32:43.559980: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2019-12-17 08:32:43.560009: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2019-12-17 08:32:43.562075: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-17 08:32:43.562319: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-17 08:32:43.562498: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:0 with 546 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
True
(tensorflow-demo)
The build and installation of TensorFlow 2.0 on the Jetson Nano were successful. But all this work was rendered useless by one small but very important detail I missed: as the log above shows, the Jetson Nano uses the NVIDIA Tegra X1 chip, which has CUDA compute capability 5.3, yet I left the default values of 3.5 and 7.0 when configuring the TensorFlow build. This made the installation unusable on the Jetson Nano.
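For reference, the compute-capability answer can also be supplied non-interactively before running ./configure, via the TF_CUDA_COMPUTE_CAPABILITIES environment variable (a sketch; any prompts whose TF_* variables are not set are still asked interactively):

```shell
# Sketch: configure the TF 2.0 build for the Nano's Tegra X1 (compute capability 5.3)
# instead of the defaults (3.5,7.0). TF_CUDA_COMPUTE_CAPABILITIES is read by ./configure.
cd ~/Downloads/tensorflow-2.0.0
TF_CUDA_COMPUTE_CAPABILITIES=5.3 ./configure
```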
For example, I tried to run my TensorFlow demo, which uses TF2 and OpenCV to perform inference on a webcam stream. Initially, I ran into an issue caused by a lack of RAM:
(tensorflow-demo) nvidia@nvidia-nano:~/dev/github/tensorflow-demo$ python3 src/tf-demo.py
WARNING:tensorflow:From /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages/tensorflow_core/python/compat/v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
cwd = /home/nvidia/dev/github/tensorflow-demo
ssd_inception_v2_coco_2017_11_17
ssd_inception_v2_coco_2017_11_17/model.ckpt.index
ssd_inception_v2_coco_2017_11_17/model.ckpt.meta
ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb
ssd_inception_v2_coco_2017_11_17/model.ckpt.data-00000-of-00001
ssd_inception_v2_coco_2017_11_17/saved_model
ssd_inception_v2_coco_2017_11_17/saved_model/saved_model.pb
ssd_inception_v2_coco_2017_11_17/saved_model/variables
ssd_inception_v2_coco_2017_11_17/checkpoint
2019-12-20 02:01:47.933465: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2019-12-20 02:01:47.990591: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:47.990764: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2019-12-20 02:01:47.997118: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 02:01:47.997336: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-20 02:01:47.997454: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-12-20 02:01:48.106108: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-12-20 02:01:48.241369: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-12-20 02:01:48.315745: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-12-20 02:01:48.316148: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-20 02:01:48.316895: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.317712: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.317948: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-20 02:01:48.342062: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-12-20 02:01:48.342623: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2e9105f0 executing computations on platform Host. Devices:
2019-12-20 02:01:48.342708: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
2019-12-20 02:01:48.455941: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.456237: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2e8b5740 executing computations on platform CUDA. Devices:
2019-12-20 02:01:48.456292: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-12-20 02:01:48.456869: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.456991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2019-12-20 02:01:48.457236: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 02:01:48.457317: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-20 02:01:48.457375: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-12-20 02:01:48.457462: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-12-20 02:01:48.457543: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-12-20 02:01:48.457621: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-12-20 02:01:48.457676: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-20 02:01:48.457925: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.458199: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.458278: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-20 02:01:48.458442: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 02:01:48.460445: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-20 02:01:48.460499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2019-12-20 02:01:48.460529: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2019-12-20 02:01:48.461862: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.462230: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 02:01:48.462432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 150 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2019-12-20 02:01:50.693255: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 20127744 exceeds 10% of system memory.
2019-12-20 02:01:50.702178: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 10063872 exceeds 10% of system memory.
2019-12-20 02:01:51.519460: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 20127744 exceeds 10% of system memory.
2019-12-20 02:01:51.528819: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 10063872 exceeds 10% of system memory.
2019-12-20 02:01:52.341950: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 20127744 exceeds 10% of system memory.
^C^Z
[1]+ Stopped python3 src/tf-demo.py
Nvidia Jetson Nano: Custom Object Detection from scratch using Tensorflow and OpenCV
In our experience, the Jetson Nano could not even run the SSD Inception V2 model as it crashed upon running out of memory most of the times we tried it out.
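One common mitigation on the 4 GB Nano is to add a swap file before running anything memory-hungry (a sketch; the size and path here are arbitrary choices, not requirements):

```shell
# Create and enable a 4 GB swap file (path and size are illustrative).
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Swap on the Nano's SD card is slow, so this only prevents hard crashes; it does not make large models fast.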
I then changed the underlying model to the lighter ssdlite_mobilenet_v2 and tried again:
(tensorflow-demo) nvidia@nvidia-nano:~/dev/github/tensorflow-demo$ python3 src/tf-demo.py
WARNING:tensorflow:From /home/nvidia/python-envs/tensorflow-demo/lib/python3.6/site-packages/tensorflow_core/python/compat/v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
cwd = /home/nvidia/dev/github/tensorflow-demo
ssdlite_mobilenet_v2_coco_2018_05_09/checkpoint
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.data-00000-of-00001
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.meta
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.index
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model/saved_model.pb
ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config
ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb
ssdlite_mobilenet_v2_coco_2018_05_09
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model/variables
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model
2019-12-20 12:15:04.017619: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2019-12-20 12:15:04.079977: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.080137: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2019-12-20 12:15:04.089744: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 12:15:04.090144: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-20 12:15:04.090346: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-12-20 12:15:04.199821: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-12-20 12:15:04.319588: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-12-20 12:15:04.387395: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-12-20 12:15:04.388051: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-20 12:15:04.388775: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.389548: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.389765: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-20 12:15:04.416122: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-12-20 12:15:04.416643: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x62e4ae0 executing computations on platform Host. Devices:
2019-12-20 12:15:04.416701: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
2019-12-20 12:15:04.533731: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.534034: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x62b6580 executing computations on platform CUDA. Devices:
2019-12-20 12:15:04.534090: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-12-20 12:15:04.534772: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.534908: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2019-12-20 12:15:04.535168: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 12:15:04.535268: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-20 12:15:04.535337: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-12-20 12:15:04.535429: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-12-20 12:15:04.535521: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-12-20 12:15:04.535613: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-12-20 12:15:04.535714: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-20 12:15:04.536014: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.536334: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.536425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-20 12:15:04.536676: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-20 12:15:04.538900: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-20 12:15:04.538957: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2019-12-20 12:15:04.538987: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2019-12-20 12:15:04.541158: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.541517: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-20 12:15:04.541671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 70 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2019-12-20 12:18:51.429526: F tensorflow/stream_executor/cuda/cuda_driver.cc:175] Check failed: err == cudaSuccess || err == cudaErrorInvalidValue Unexpected CUDA error: unknown error
Aborted (core dumped)
Memory was not an issue here:
nvidia@nvidia-nano:~/dev/github/tensorflow-demo$ tegrastats --help
Usage: tegrastats [-option]
Options:
--help : print this help screen
--interval <millisec> : sample the information in <milliseconds>
--logfile <filename> : dump the output of tegrastats to <filename>
--load_cfg <filename> : load the information from <filename>
--save_cfg <filename> : save the information to <filename>
--start : run tegrastats as a daemon process in the background
--stop : stop any running instances of tegrastats
--verbose : print verbose message
nvidia@nvidia-nano:~/dev/github/tensorflow-demo$ tegrastats --interval 5000
RAM 3776/3963MB (lfb 9x2MB) SWAP 2822/3963MB (cached 1MB) CPU [50%@1428,6%@1428,3%@1428,56%@1428] EMC_FREQ 0% GR3D_FREQ 29% PLL@30C CPU@35C PMIC@100C GPU@32C AO@39.5C thermal@33.25C POM_5V_IN 3330/3330 POM_5V_GPU 120/120 POM_5V_CPU 1163/1163
RAM 3770/3963MB (lfb 9x2MB) SWAP 2831/3963MB (cached 1MB) CPU [100%@1428,5%@1428,7%@1428,7%@1428] EMC_FREQ 0% GR3D_FREQ 0% PLL@29C CPU@34C PMIC@100C GPU@32C AO@39.5C thermal@33.25C POM_5V_IN 2629/2979 POM_5V_GPU 40/80 POM_5V_CPU 726/944
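To watch memory during a run, the RAM and SWAP figures can be pulled out of a tegrastats line with a small parser (a sketch; the regex assumes the line format shown above):

```python
import re

def parse_tegrastats(line):
    """Extract used/total RAM and SWAP (in MB) from one tegrastats output line."""
    ram = re.search(r"RAM (\d+)/(\d+)MB", line)
    swap = re.search(r"SWAP (\d+)/(\d+)MB", line)
    return {
        "ram_used_mb": int(ram.group(1)),
        "ram_total_mb": int(ram.group(2)),
        "swap_used_mb": int(swap.group(1)),
        "swap_total_mb": int(swap.group(2)),
    }

# One of the lines captured above:
sample = ("RAM 3776/3963MB (lfb 9x2MB) SWAP 2822/3963MB (cached 1MB) "
          "CPU [50%@1428,6%@1428,3%@1428,56%@1428]")
stats = parse_tegrastats(sample)
print(stats["ram_used_mb"], stats["swap_used_mb"])  # 3776 2822
```

Piping `tegrastats --interval 1000` through such a parser makes it easy to log memory pressure alongside the demo's own output.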
Remember to specify the Nano's GPU compute capability when configuring the build. The relevant configure prompt looks like this:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 5.3,7.2
GPU: Jetson Nano
Compute capability: 5.3
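As a quick reference, the compute capabilities of common Jetson modules can be kept in a small lookup (values taken from NVIDIA's CUDA GPUs page; the helper itself is just an illustration):

```python
# Compute capabilities of common Jetson modules (per NVIDIA's CUDA GPUs page).
JETSON_COMPUTE_CAPABILITY = {
    "Jetson Nano": "5.3",       # Tegra X1
    "Jetson TX1": "5.3",
    "Jetson TX2": "6.2",
    "Jetson AGX Xavier": "7.2",
}

def configure_answer(model):
    """Return the string to enter at the TF configure compute-capability prompt."""
    return JETSON_COMPUTE_CAPABILITY[model]

print(configure_answer("Jetson Nano"))  # 5.3
```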
I should have built TF with compute capability 5.3 only. I found a couple of examples that confirm this:
The SDK is built on OpenCV 3.4.2 and CUDA Compute Capability 5.3/7.2/6.2.