Friday, 15 January 2021

Using Python Virtual Environments

Different Python projects depend on different packages. To keep those dependencies separated and avoid polluting the global site-packages with per-project installations, we can use virtual environments (a quick sketch of the isolation idea follows the list below). There are several ways to work with virtual environments in Python:
  • virtualenv
  • virtualenvwrapper
  • Conda (part of Anaconda)
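
Whichever tool we pick, the core idea is the same: each project gets its own interpreter and its own site-packages. A hypothetical sketch of the isolation (the paths, package and versions are arbitrary):

$ virtualenv ~/envs/project-a
$ ~/envs/project-a/bin/pip install requests==2.24.0
$ virtualenv ~/envs/project-b
$ ~/envs/project-b/bin/pip install requests==2.25.1
$ ~/envs/project-a/bin/python -c "import requests; print(requests.__version__)"
2.24.0
$ ~/envs/project-b/bin/python -c "import requests; print(requests.__version__)"
2.25.1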

virtualenv



To use virtualenv we first need to install all necessary components:

$ sudo apt update
$ sudo apt install python3-dev python3-pip
$ sudo pip3 install -U virtualenv  # system-wide install

Verification:

$ python3 --version
$ pip3 --version
$ virtualenv --version

or

$ virtualenv
usage: virtualenv [--version] [--with-traceback] [-v | -q] [--read-only-app-data] [--app-data APP_DATA] [--reset-app-data] [--upgrade-embed-wheels] [--discovery {builtin}] [-p py] [--try-first-with py_exe]
                  [--creator {builtin,cpython3-posix,venv}] [--seeder {app-data,pip}] [--no-seed] [--activators comma_sep_list] [--clear] [--no-vcs-ignore] [--system-site-packages] [--symlinks | --copies] [--no-download | --download]
                  [--extra-search-dir d [d ...]] [--pip version] [--setuptools version] [--wheel version] [--no-pip] [--no-setuptools] [--no-wheel] [--no-periodic-update] [--symlink-app-data] [--prompt prompt] [-h]
                  dest
virtualenv: error: the following arguments are required: dest
SystemExit: 2

or

$ virtualenv --help
usage: virtualenv [--version] [--with-traceback] [-v | -q] [--read-only-app-data] [--app-data APP_DATA] [--reset-app-data] [--upgrade-embed-wheels] [--discovery {builtin}] [-p py] [--try-first-with py_exe]
                  [--creator {builtin,cpython3-posix,venv}] [--seeder {app-data,pip}] [--no-seed] [--activators comma_sep_list] [--clear] [--no-vcs-ignore] [--system-site-packages] [--symlinks | --copies] [--no-download | --download]
                  [--extra-search-dir d [d ...]] [--pip version] [--setuptools version] [--wheel version] [--no-pip] [--no-setuptools] [--no-wheel] [--no-periodic-update] [--symlink-app-data] [--prompt prompt] [-h]
                  dest

optional arguments:
  --version                     display the version of the virtualenv package and its location, then exit
  --with-traceback              on failure also display the stacktrace internals of virtualenv (default: False)
  --read-only-app-data          use app data folder in read-only mode (write operations will fail with error) (default: False)
  --app-data APP_DATA           a data folder used as cache by the virtualenv (default: /home/bojan/.local/share/virtualenv)
  --reset-app-data              start with empty app data folder (default: False)
  --upgrade-embed-wheels        trigger a manual update of the embedded wheels (default: False)
  -h, --help                    show this help message and exit

verbosity:
  verbosity = verbose - quiet, default INFO, mapping => CRITICAL=0, ERROR=1, WARNING=2, INFO=3, DEBUG=4, NOTSET=5

  -v, --verbose                 increase verbosity (default: 2)
  -q, --quiet                   decrease verbosity (default: 0)

discovery:
  discover and provide a target interpreter

  --discovery {builtin}         interpreter discovery method (default: builtin)
  -p py, --python py            interpreter based on what to create environment (path/identifier) - by default use the interpreter where the tool is installed - first found wins (default: [])
  --try-first-with py_exe       try first these interpreters before starting the discovery (default: [])

creator:
  options for creator builtin

  --creator {builtin,cpython3-posix,venv}
                                create environment via (builtin = cpython3-posix) (default: builtin)
  dest                          directory to create virtualenv at
  --clear                       remove the destination directory if exist before starting (will overwrite files otherwise) (default: False)
  --no-vcs-ignore               don't create VCS ignore directive in the destination directory (default: False)
  --system-site-packages        give the virtual environment access to the system site-packages dir (default: False)
  --symlinks                    try to use symlinks rather than copies, when symlinks are not the default for the platform (default: True)
  --copies, --always-copy       try to use copies rather than symlinks, even when symlinks are the default for the platform (default: False)

seeder:
  options for seeder app-data

  --seeder {app-data,pip}       seed packages install method (default: app-data)
  --no-seed, --without-pip      do not install seed packages (default: False)
  --no-download, --never-download
                                pass to disable download of the latest pip/setuptools/wheel from PyPI (default: True)
  --download                    pass to enable download of the latest pip/setuptools/wheel from PyPI (default: False)
  --extra-search-dir d [d ...]  a path containing wheels to extend the internal wheel list (can be set 1+ times) (default: [])
  --pip version                 version of pip to install as seed: embed, bundle or exact version (default: bundle)
  --setuptools version          version of setuptools to install as seed: embed, bundle or exact version (default: bundle)
  --wheel version               version of wheel to install as seed: embed, bundle or exact version (default: bundle)
  --no-pip                      do not install pip (default: False)
  --no-setuptools               do not install setuptools (default: False)
  --no-wheel                    do not install wheel (default: False)
  --no-periodic-update          disable the periodic (once every 14 days) update of the embedded wheels (default: False)
  --symlink-app-data            symlink the python packages from the app-data folder (requires seed pip>=19.3) (default: False)

activators:
  options for activation scripts

  --activators comma_sep_list   activators to generate - default is all supported (default: bash,cshell,fish,powershell,python,xonsh)
  --prompt prompt               provides an alternative prompt prefix for this environment (default: None)

config file /home/bojan/.config/virtualenv/virtualenv.ini missing (change via env var VIRTUALENV_CONFIG_FILE)
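
The last line above also hints that virtualenv reads settings from a config file and from environment variables. Judging by the VIRTUALENV_CONFIG_FILE variable, long options can presumably be set via VIRTUALENV_-prefixed variables as well (my assumption; check the virtualenv docs), e.g.:

$ VIRTUALENV_PROMPT="(demo) " virtualenv ./venv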


Create a virtual environment (here we choose python3 as the Python interpreter and venv_dir_name as the directory in which the virtual environment is stored):

virtualenv -p python3 ./venv_dir_name

or

virtualenv --system-site-packages -p python3 ./venv_dir_name


If you build with virtualenv --system-site-packages ENV, your virtual environment will inherit packages from /usr/lib/python3/dist-packages (or wherever your global site-packages directory is).

This can be used if you have control over the global site-packages directory, and you want to depend on the packages there. If you want isolation from the global system, do not use this flag. BK: This flag should be avoided as it defeats the entire point of virtualenvs (which is why --no-site-packages was made the default).
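
A quick way to see the difference (a sketch; it assumes some package that lives only in the global site-packages, e.g. yaml from Ubuntu's python3-yaml, is installed):

$ virtualenv -p python3 ./venv_isolated
$ ./venv_isolated/bin/python -c "import yaml"    # fails with ModuleNotFoundError
$ virtualenv --system-site-packages -p python3 ./venv_shared
$ ./venv_shared/bin/python -c "import yaml; print(yaml.__file__)"
/usr/lib/python3/dist-packages/yaml/__init__.py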

Example:

$ virtualenv -p python3 ./venv
created virtual environment CPython3.8.5.final.0-64 in 253ms
  creator CPython3Posix(dest=/home/bojan/dev/github/python-demo/venv, clear=False, no_vcs_ignore=False, global=False)
  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/bojan/.local/share/virtualenv)
    added seed packages: pip==20.3.3, setuptools==51.1.2, wheel==0.36.2
  activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
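
The created directory contains the interpreter, the seed packages and the activation scripts for the activators listed above; its bin directory looks something like this (exact contents vary with the virtualenv version):

$ ls ./venv/bin
activate  activate.csh  activate.fish  activate.ps1  activate_this.py  activate.xsh
pip  pip3  pip3.8  python  python3  python3.8  wheel  ...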


I once experienced this error when I tried to create a virtual environment:

$ virtualenv -p python3 ./venv
Traceback (most recent call last):
  File "/usr/local/bin/virtualenv", line 5, in <module>
    from virtualenv.__main__ import run_with_catch
  File "/usr/local/lib/python3.8/dist-packages/virtualenv/__init__.py", line 3, in <module>
    from .run import cli_run, session_via_cli
  File "/usr/local/lib/python3.8/dist-packages/virtualenv/run/__init__.py", line 6, in <module>
    from ..app_data import make_app_data
  File "/usr/local/lib/python3.8/dist-packages/virtualenv/app_data/__init__.py", line 9, in <module>
    from appdirs import user_data_dir
ModuleNotFoundError: No module named 'appdirs'
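
In hindsight, pip's own dependency checker would have pointed at the broken installation directly; it would likely have reported something like:

$ pip3 check
virtualenv 20.3.1 requires appdirs, which is not installed.
virtualenv 20.3.1 requires filelock, which is not installed.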

I then installed the missing module via pip, but this surfaced yet another missing dependency:

$ sudo pip3 install appdirs
Collecting appdirs
  Downloading appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
ERROR: virtualenv 20.3.1 requires filelock<4,>=3.0.0, which is not installed.
Installing collected packages: appdirs
Successfully installed appdirs-1.4.4

I resolved this by re-running the virtualenv installation, which pulled in the missing filelock:

$ sudo pip3 install virtualenv
Requirement already satisfied: virtualenv in /usr/local/lib/python3.8/dist-packages (20.3.1)
Requirement already satisfied: appdirs<2,>=1.4.3 in /usr/local/lib/python3.8/dist-packages (from virtualenv) (1.4.4)
Requirement already satisfied: six<2,>=1.9.0 in /usr/lib/python3/dist-packages (from virtualenv) (1.14.0)
Requirement already satisfied: distlib<1,>=0.3.1 in /usr/local/lib/python3.8/dist-packages (from virtualenv) (0.3.1)
Collecting filelock<4,>=3.0.0
  Downloading filelock-3.0.12-py3-none-any.whl (7.6 kB)
Installing collected packages: filelock
Successfully installed filelock-3.0.12

Once the virtual environment is created, we'll have a venv/bin/activate script. Let's explore it:

$ cat venv/bin/activate
...

deactivate() {
...
}
...
VIRTUAL_ENV="/home/bojan/dev/github/tensorflow-demo/venv"
export VIRTUAL_ENV

_OLD_VIRTUAL_PATH="$PATH"
PATH="$VIRTUAL_ENV/bin:$PATH"
export PATH
...
pydoc () {
    python -m pydoc "$@"
}
...


We can see that activation sets a new environment variable, VIRTUAL_ENV, prepends the environment's bin directory to PATH, and defines a deactivate command.

To activate virtual environment:

$ source ./venv/bin/activate

We can also use a dot command:

$ . ./venv/bin/activate

When the virtual environment is active, the shell prompt is prefixed with its name, in the form (venv).

(venv) $ which pip
/home/bojan/dev/github/python-demo/venv/bin/pip

(venv) $ which pip3
/home/bojan/dev/github/python-demo/venv/bin/pip3


If we've saved a requirements.txt, we can now install all dependencies with one of the following commands (see Python Package Management with pip | My Public Notepad):

$ pip install -r requirements.txt
$ pip3 install -r requirements.txt
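
For reference, requirements.txt is just a list of package specifiers, one per line; a minimal hypothetical example:

$ cat requirements.txt
requests==2.25.1
psycopg2==2.8.6
six>=1.15.0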

It is possible to install a single new package in the virtual environment:

$ pip install package_name


Example:

(venv) $ pip3 install psycopg2
Collecting psycopg2
  Using cached psycopg2-2.8.6-cp38-cp38-linux_x86_64.whl
Installing collected packages: psycopg2
Successfully installed psycopg2-2.8.6

To verify the installation path:

(venv) $ pip3 show psycopg2
Name: psycopg2
Version: 2.8.6
Summary: psycopg2 - Python-PostgreSQL Database Adapter
Home-page: https://psycopg.org/
Author: Federico Di Gregorio
Author-email: fog@initd.org
License: LGPL with exceptions
Location: /home/user/dev/my-app/venv/lib/python3.8/site-packages
Requires: 
Required-by: 

To check the version of a package installed in the virtual environment:

(venv) $ pip3 list | grep psycopg
psycopg2          2.8.6


To exit the virtual environment:

(venv) $ deactivate

How to rename a virtual environment?

directory - How to rename a virtualenv in Python? - Stack Overflow

From inside the active virtual environment:

$ pip freeze > requirements.txt
$ deactivate

From python - Pip freeze vs. pip list - Stack Overflow:
pip list shows ALL packages.
pip freeze shows the packages YOU installed via the pip (or pipenv, if using that tool) command, in requirements format.
Also, be aware of `$ pip freeze > requirements.txt` considered harmful.
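
To illustrate the difference on a fresh environment with a single user-installed package (a hypothetical sketch; six is chosen because it has no dependencies):

(venv) $ pip install six==1.15.0
(venv) $ pip list
Package    Version
---------- -------
pip        20.3.3
setuptools 51.1.2
six        1.15.0
wheel      0.36.2
(venv) $ pip freeze
six==1.15.0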

Then delete the old environment directory, e.g.:

$ rm -r ~/python-envs/tensorrt-5.1.5.1-tensorflow-1.14.0/

...and create a new one with the correct name:

$ python3 -m virtualenv -p python3 ~/python-envs/tensorrt-5.1.5.1-tensorflow-1.15.0
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /home/nvidia/python-envs/tensorrt-5.1.5.1-tensorflow-1.15.0/bin/python3
Also creating executable in /home/nvidia/python-envs/tensorrt-5.1.5.1-tensorflow-1.15.0/bin/python
Installing setuptools, pkg_resources, pip, wheel...done.

Activate it and install all packages from the saved list:

$ source ~/python-envs/tensorrt-5.1.5.1-tensorflow-1.15.0/bin/activate
$ pip install -r requirements.txt
$ pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v43 tensorflow-gpu==1.15.0+nv19.12


virtualenvwrapper


TBD...


Conda



Troubleshooting virtualenvwrapper errors

It was my mistake that I did not immediately track down the cause of an error which one day started appearing each time I opened a new Terminal:

/usr/bin/python3: Error while finding module specification for 'virtualenvwrapper.hook_loader' (ModuleNotFoundError: No module named 'virtualenvwrapper')
virtualenvwrapper.sh: There was a problem running the initialization hooks.
If Python could not import the module virtualenvwrapper.hook_loader,
check that virtualenvwrapper has been installed for
VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
and that PATH is
set properly.

virtualenvwrapper is, as its name says, a wrapper around virtualenv.

When a new Terminal is opened, it starts a shell which sources the first of .bash_profile, .bash_login, .profile that exists and is readable [bash - Why must I source .bashrc every time I open terminal for aliases to work? - Ask Different].

In my case .bash_profile and .bash_login don't exist but .profile does and it sources ~/.bashrc:

$ cat ~/.profile 
# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
# exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.

# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022

# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
. "$HOME/.bashrc"
    fi
fi

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi


So I looked at ~/.bashrc and found that some time in the past I had changed it to include some additional environment variables:

$ cat ~/.bashrc
...
# B.Komazec added:
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
...

Indeed, the error above was coming from virtualenvwrapper.sh, from the function virtualenvwrapper_run_hook():

$ cat /usr/local/bin/virtualenvwrapper.sh
...
# Run the hooks
function virtualenvwrapper_run_hook {
    typeset hook_script
    typeset result

    hook_script="$(virtualenvwrapper_tempfile ${1}-hook)" || return 1

    # Use a subshell to run the python interpreter with hook_loader so
    # we can change the working directory. This avoids having the
    # Python 3 interpreter decide that its "prefix" is the virtualenv
    # if we happen to be inside the virtualenv when we start.
    ( \
        virtualenvwrapper_cd "$WORKON_HOME" &&
        "$VIRTUALENVWRAPPER_PYTHON" -m 'virtualenvwrapper.hook_loader' \
            ${HOOK_VERBOSE_OPTION:-} --script "$hook_script" "$@" \
    )
    result=$?

    if [ $result -eq 0 ]
    then
        if [ ! -f "$hook_script" ]
        then
            echo "ERROR: virtualenvwrapper_run_hook could not find temporary file $hook_script" 1>&2
            command \rm -f "$hook_script"
            return 2
        fi
        # cat "$hook_script"
        source "$hook_script"
    elif [ "${1}" = "initialize" ]
    then
        cat - 1>&2 <<EOF
virtualenvwrapper.sh: There was a problem running the initialization hooks.

If Python could not import the module virtualenvwrapper.hook_loader,
check that virtualenvwrapper has been installed for
VIRTUALENVWRAPPER_PYTHON=$VIRTUALENVWRAPPER_PYTHON and that PATH is
set properly.
EOF
    fi
    command \rm -f "$hook_script"
    return $result
}
...
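
The failing step from that function can be reproduced by hand with the interpreter set in VIRTUALENVWRAPPER_PYTHON; while the module is missing, it produces exactly the error shown at the top:

$ /usr/bin/python3 -m virtualenvwrapper.hook_loader
/usr/bin/python3: Error while finding module specification for 'virtualenvwrapper.hook_loader' (ModuleNotFoundError: No module named 'virtualenvwrapper')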


I checked which packages were installed for pip3:

$ pip3 list

...but neither virtualenv nor virtualenvwrapper was among them. So I installed them:

$ sudo pip3 install virtualenv virtualenvwrapper 
Collecting virtualenv
  Downloading virtualenv-20.3.1-py2.py3-none-any.whl (5.7 MB)    
Collecting virtualenvwrapper
  Downloading virtualenvwrapper-4.8.4.tar.gz (334 kB)
Requirement already satisfied: six<2,>=1.9.0 in /usr/lib/python3/dist-packages (from virtualenv) (1.14.0)
Requirement already satisfied: appdirs<2,>=1.4.3 in /usr/lib/python3/dist-packages (from virtualenv) (1.4.3)
Requirement already satisfied: filelock<4,>=3.0.0 in /usr/lib/python3/dist-packages (from virtualenv) (3.0.12)
Collecting distlib<1,>=0.3.1
  Downloading distlib-0.3.1-py2.py3-none-any.whl (335 kB)
Collecting stevedore
  Downloading stevedore-3.3.0-py3-none-any.whl (49 kB)
Collecting virtualenv-clone
  Downloading virtualenv_clone-0.5.4-py2.py3-none-any.whl (6.6 kB)
Collecting pbr!=2.1.0,>=2.0.0
  Using cached pbr-5.5.1-py2.py3-none-any.whl (106 kB)
Building wheels for collected packages: virtualenvwrapper
  Building wheel for virtualenvwrapper (setup.py) ... done
  Created wheel for virtualenvwrapper: filename=virtualenvwrapper-4.8.4-py2.py3-none-any.whl size=24833 sha256=0e77420c4c5bd24518d388768cf102170bd8b0243fd3052fdbec24b01f3bb59a
  Stored in directory: /root/.cache/pip/wheels/47/15/3d/7a26eaf92e79f80a3df3ac5f8e0f0f5b7efdf24d313c594a44
Successfully built virtualenvwrapper
Installing collected packages: distlib, virtualenv, pbr, stevedore, virtualenv-clone, virtualenvwrapper
  Attempting uninstall: distlib
    Found existing installation: distlib 0.3.0
    Not uninstalling distlib at /usr/lib/python3/dist-packages, outside environment /usr
    Can't uninstall 'distlib'. No files were found to uninstall.
Successfully installed distlib-0.3.1 pbr-5.5.1 stevedore-3.3.0 virtualenv-20.3.1 virtualenv-clone-0.5.4 virtualenvwrapper-4.8.4


To verify that the packages were indeed installed:

$ pip3 list
...
virtualenv             20.3.1              
virtualenv-clone       0.5.4               
virtualenvwrapper      4.8.4  
...

I opened a new Terminal and voilà, the error was gone!

For reference, my Python paths are:

$ which python
/usr/bin/python

$ which python3
/usr/bin/python3
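
With the wrapper working again, environments under WORKON_HOME can be created and activated with its commands (a short sketch; the environment name is made up):

$ mkvirtualenv demo-env
...
(demo-env) $ deactivate
$ workon demo-env
(demo-env) $ deactivate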


Sunday, 10 January 2021

Installing JetPack on Jetson TX2 from NVIDIA SDK Manager Docker container

For a very long time, if you wanted to flash your Jetson and install the JetPack SDK, you first had to download and install NVIDIA JetPack (and later, NVIDIA SDK Manager) on a Linux host computer. Every package installation pollutes your host machine and also takes disk space. To avoid this, NVIDIA created a Docker image with SDK Manager, so once JetPack is installed on the Jetson, the Docker image can be deleted and your Linux host remains in the same state as before. This has been available since NVIDIA SDK Manager 1.4 (December 2020).

I had one of the older versions of JetPack installed on my Jetson TX2 and wanted to install the most recent one (4.4.1 at the moment; 4.5 is announced for January 2021). One of the benefits I wanted to get is upgrading to the next JetPack release via the apt package management tool (available since JetPack 4.4).
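
Once a JetPack 4.4 or later image is on the device, such an upgrade boils down to the usual apt flow run on the Jetson itself (a sketch; it assumes the NVIDIA apt sources that JetPack 4.4+ configures by default):

$ sudo apt update
$ sudo apt dist-upgrade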

I want to share my experience with the process of running the NVIDIA SDK Manager Docker container and flashing the Jetson TX2 with it. I followed the official documentation: Docker Images :: NVIDIA SDK Manager Documentation.

I logged in to the NVIDIA Developer center and downloaded the Docker image from this URL: https://developer.nvidia.com/nvidia-sdk-manager-docker-image. The image came as a 942MB archive named sdkmanager-1.4.0.7363_docker.tar.gz.

$ docker load -i ./sdkmanager-1.4.0.7363_docker.tar.gz 
805802706667: Loading layer  65.61MB/65.61MB
3fd9df553184: Loading layer  15.87kB/15.87kB
7a694df0ad6c: Loading layer  3.072kB/3.072kB
2f694c79b042: Loading layer  148.2MB/148.2MB
26765aed7e25: Loading layer  502.3kB/502.3kB
b398b8335e67: Loading layer  6.015MB/6.015MB
08b68150484c: Loading layer  1.135MB/1.135MB
31fbfacf550e: Loading layer  1.135MB/1.135MB
84979f95b15f: Loading layer  502.3kB/502.3kB
cd9205d2e1f9: Loading layer  2.075MB/2.075MB
414d8a00c66e: Loading layer  66.07MB/66.07MB
cc3271f36011: Loading layer  84.83MB/84.83MB
5072b3ebcb77: Loading layer  2.108MB/2.108MB
38be13d541b9: Loading layer  1.781MB/1.781MB
ff63398d24ea: Loading layer  99.17MB/99.17MB
7cb4d04a8659: Loading layer [==================================================>]  462.3MB/462.3MB
cd2c9f6e22b0: Loading layer [==================================================>]  2.048kB/2.048kB
65cd593db96f: Loading layer [==================================================>]  3.584kB/3.584kB
8900dbcf5626: Loading layer [==================================================>]  3.584kB/3.584kB
9ac46abadb31: Loading layer [==================================================>]  3.072kB/3.072kB
a20c9aaeb9c3: Loading layer [==================================================>]  417.3kB/417.3kB
cae1bf65143a: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image: sdkmanager:1.4.0.7363

As this is the latest version of this Docker image, I tagged it with the latest tag:

$ docker tag sdkmanager:1.4.0.7363 sdkmanager:latest

I made sure that the image was listed among the other Docker images on my machine:

$ docker images  
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
sdkmanager              1.4.0.7363          0e9d62e318ad        2 weeks ago         913MB
sdkmanager              latest              0e9d62e318ad        2 weeks ago         913MB
...

The SDK Manager executable (sdkmanager) is the entrypoint of this Docker image, so I tested the image by running it with some of the simple CLI commands listed here.

$ docker run -it --rm sdkmanager --help

NVIDIA SDK MANAGER

  NVIDIA SDK Manager is an all-in-one tool that bundles developer software and
  provides an end-to-end development environment setup solution for NVIDIA
  SDKs.

General Options

  -h, --help                             Displays this usage guide.
  --ver                                  Output the version of the installed SDK Manager client
  --settings                             Optional. Configure SDK Manager settings in the terminal.
  --query interactive|noninteractive     Prints all options available for the user. Must be executed with the --use or --offline settings
  --showallversions                      Prints all available product versions for the user.
  --logs                                 Optional. Set this option to export the log files when the process is complete.
  --exitonfinish                         Optional. Automatically exit from SDK Manager when the install/uninstall session is finished (skip user input). Intended for scripts/automation usage.
  --user email_address                   Optional. Set the user email to login. Valid only for NVOnline login.
  --password string                      Optional. Set the user login password. Valid only for NVOnline login.
  --logintype devzone|nvonline           Optional. Login with developer.nvidia.com or partners.nvidia.com account. Default is devzone.
  --staylogin true|false                 Optional. Keep the user account logged-in for next running session.
  --logout                               Logout user account from SDK Manager.
  --offline                              Optional. Skip login to NVIDIA servers. Install SDK from pre downloaded location, used with --downloadfolder option.
  --downloadfolder string                Optional. Set the download folder for the SDK components. Used for downloading the files and for locating the SDK components when using --offline.
  --archivedversions                     Optional. Display only archived versions.
  --cli install|uninstall|downloadonly   Mandatory. Set the requested action.
  --sudopassword string                  Optional. Set the sudo password to skip the authentication prompt.
  --datacollection enable|disable        Optional. Set to enable or disalbe usage data collection.

Specific arguments for install/uninstall:

  --product product_name                 Mandatory. Set the product name.
  --version string                       Mandatory. Set the product version. Use --query to get available version values.
  --targetos target_os                   Mandatory. Set the target hardware operating system.
  --host                                 Optional. Set if host side components need to be installed.
  --target target_hardware               Optional. Set the target hardware in use. Use hardware code name.
  --flash all|a|b|ab|skip                Optional. Set the flash operation mode, which of the Tegras should be flashed.
  --additionalsdk additional_sdk_title   Optional. Specify any additional SDK to install. Multiple entries are allowed.
  --select section_or_group_title        Optional. Specify section or group to installation list. Multiple entries are allowed.
  --deselect section_or_group_title      Optional. Specify section or group to exclude from installation list. Multiple entries are allowed.
  --license accept                       Optional. Set this option to accept the terms and conditions of SDK license agreements.
  --targetimagefolder string             Optional. Set the host location of the target hardware image for flashing.
  --responsefile string                  Optional. Set the response file path. Response file samples can be found in the product folder /opt/nvidia/sdkmanager.

Example

  $ sdkmanager [--user user@user.com] [--query]
  $ sdkmanager [--cli install|uninstall|downloadonly] [cli options] ...
  $ sdkmanager [--settings]
  $ sdkmanager [--help]                                        

$ docker run -it --rm sdkmanager --ver
1.4.0.7363

I connected the Jetson TX2 to my Ubuntu host via a USB cable and put it into forced recovery mode (as described in Jetson_TX2_Developer_Kit_User_Guide.pdf).

I checked that the Jetson was listed among the USB devices:

$ lsusb
...
Bus 002 Device 004: ID 0955:7c18 NVIDIA Corp. APX
...
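
lsusb can also filter by vendor ID directly (NVIDIA's vendor ID is 0955), which narrows the output down to the entry above:

$ lsusb -d 0955:
Bus 002 Device 004: ID 0955:7c18 NVIDIA Corp. APX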

I then ran a query command on SDK Manager to get a list of the available install options:

$ docker run -it --rm sdkmanager --query
To initiate login process open https://static-login.nvidia.com/service/default/pin?user_code=36223035 in a browser (can be done on a different machine) and login with your NVIDIA Developer account. SDK Manager will start once done.
Login user code: 36223035. (valid for: 10 minutes).
? SDK Manager is waiting for you to complete login. 
  1) Generate a new login user code
  2) Cancel login
  Answer: 
Waiting for user information from NVIDIA authentication server...
Retrieving user information...
Loading and processing available products...
Login succeeded.
Loading user information...
User information loaded successfully.
Loading server data...
Server data loaded successfully.
Available options are:

 Jetson 4.4
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P2888-0001 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P2888-0004 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P2888-0006 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P2888-0060 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P3668-0000 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P3668-0001 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P3310-1000 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P3489-0080 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P3489-0888 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P3489-0000 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P2180-1000 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P3448-0000 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P3448-0002 --flash all --additionalsdk DeepStream
sdkmanager --cli install --logintype devzone --product Jetson --version 4.4 --targetos Linux --host --target P3448-0020 --flash all --additionalsdk DeepStream


Query completed.

I was not sure which target I should choose, so I used the table from NVIDIA Jetson Linux Developer Guide : Introduction | NVIDIA Docs to check the P-number of my Jetson TX2; it was P3310-1000.

It was now time for the main part of the job. As a guide, I took the docker run command example used for flashing a Jetson Nano (listed here: Docker Images :: NVIDIA SDK Manager Documentation) and modified it for the Jetson TX2:

$ docker run -it --rm --privileged -v /dev/bus/usb:/dev/bus/usb/ --name JetPack_TX2_Devkit sdkmanager --cli install --logintype devzone --product Jetson --target P3310-1000 --targetos Linux --version 4.4.1 --flash all --license accept --staylogin true --datacollection enable --exitonfinish
To initiate login process open https://static-login.nvidia.com/service/default/pin?user_code=64552553 in a browser (can be done on a different machine) and login with your NVIDIA Developer account. SDK Manager will start once done.
Login user code: 61234563. (valid for: 10 minutes).
? SDK Manager is waiting for you to complete login. 
  1) Generate a new login user code
  2) Cancel login
  Answer: 
Waiting for user information from NVIDIA authentication server...
Retrieving user information...
Loading and processing available products...
Login succeeded.
Loading user information...
User information loaded successfully.
Loading server data...
Server data loaded successfully.
Session initialized...

Installation of this software is under the terms and conditions of the license agreements located in /opt/nvidia/sdkmanager/Eula/
  ===== INSTALLATION COMPLETED SUCCESSFULLY. ===== 
      - Drivers for Jetson: Installed
      - File System and OS: Installed
      - Device Mode Host Setup in Flash: Installed
      - Flash Jetson TX2: Installed
      - Device Mode Host Setup in Target SDK: Installed
      - DateTime Target Setup: Installed
      - CUDA Toolkit for L4T: Installed
      - cuDNN on Target: Installed
      - TensorRT on Target: Installed
      - OpenCV on Target: Installed
      - VisionWorks on Target: Installed
      - VPI on Target: Installed
      - NVIDIA Container Runtime with Docker integration (Beta): Installed
      - Multimedia API: Installed

  ===== Installation completed successfully - Total 14 components =====
  ===== 14 succeeded, 0 failed, 0 up-to-date, 0 skipped =====



Here are some screenshots of SDK Manager running from within the Docker container, showing the process of downloading the packages, flashing the OS and installing JetPack on the Jetson board:

[screenshots]
Upon flashing the Jetson, the L4T (Linux for Tegra) Ubuntu-flavored setup appears:

[screenshot]

...and after some typical Ubuntu setup steps, we can see something like this:

[screenshot]

...and the final look of the desktop:

[screenshot]