Saturday, 22 April 2017

How to run .NET Core console application in VSCode on Ubuntu


In the previous article I demonstrated how to create a simple .NET Core "Hello, world!" console application, and here I want to show how we can load, run and debug that project in VSCode.

In VSCode, open the TestProject directory. All generated files are shown in the left pane. VSCode downloads and installs the required packages:




The required assets are standard VSCode JSON files, so after we answer Yes to the first prompt (which asks whether to add the missing assets needed to build and debug) the .vscode directory appears:
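The exact content of the generated files depends on the project name and target framework; a sketch of what launch.json typically wires up for this project (the paths and names here are illustrative, not the verbatim generated file) looks roughly like this:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (console)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            // Assumed path of the Debug build of this project:
            "program": "${workspaceRoot}/bin/Debug/netcoreapp1.1/HelloWorld.dll",
            "args": [],
            "cwd": "${workspaceRoot}",
            "stopAtEntry": false
        }
    ]
}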


Clicking Restore on the second prompt triggers restoring the packages:


If we hit F5, VSCode will execute the program:


We can set breakpoints as well:


How to create .NET Core Console application on Ubuntu


To create a .NET project of the desired type we can use the .NET Core command line tool (dotnet). Let's see the list of all available project types:

$ dotnet new

Template Instantiation Commands for .NET Core CLI.

Usage: dotnet new [arguments] [options]

Arguments:
template The template to instantiate.

Options:
-l|--list List templates containing the specified name.
-lang|--language Specifies the language of the template to create
-n|--name The name for the output being created. If no name is specified, the name of the current directory is used.
-o|--output Location to place the generated output.
-h|--help Displays help for this command.
-all|--show-all Shows all templates


Templates                Short Name    Language      Tags
----------------------------------------------------------------------
Console Application      console       [C#], F#      Common/Console
Class library            classlib      [C#], F#      Common/Library
Unit Test Project        mstest        [C#], F#      Test/MSTest
xUnit Test Project       xunit         [C#], F#      Test/xUnit
ASP.NET Core Empty       web           [C#]          Web/Empty
ASP.NET Core Web App     mvc           [C#], F#      Web/MVC
ASP.NET Core Web API     webapi        [C#]          Web/WebAPI
Solution File            sln                         Solution

Examples:
dotnet new mvc --auth None --framework netcoreapp1.1
dotnet new classlib
dotnet new --help

To create a console application project we have to use the console template:
$ dotnet new console -o TestProject -n HelloWorld
Content generation time: 54.4945 ms
The template "Console Application" created successfully.

This creates a TestProject directory containing the project named HelloWorld and the initial source code file:
$ ls
TestProject

$ cd TestProject/

/TestProject$ ls
HelloWorld.csproj Program.cs

HelloWorld.csproj:
/TestProject$ cat HelloWorld.csproj
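The file's content is not reproduced above; for the netcoreapp1.1 console template it is typically a minimal MSBuild project along these lines (a sketch, not the verbatim output):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

</Project>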


Program.cs:
/TestProject$ cat Program.cs
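Again the output is not reproduced above; the console template generates a minimal entry point along these lines (a sketch; the namespace follows the project name given with -n):

using System;

namespace HelloWorld
{
    class Program
    {
        // Entry point generated by the console template
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}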


Let's now restore the dependencies (NuGet packages) and tools specified in the project:
/TestProject$ dotnet restore
Restoring packages for /home/bojan/Downloads/test/TestProject/HelloWorld.csproj...
Generating MSBuild file /home/bojan/Downloads/test/TestProject/obj/HelloWorld.csproj.nuget.g.props.
Generating MSBuild file /home/bojan/Downloads/test/TestProject/obj/HelloWorld.csproj.nuget.g.targets.
Writing lock file to disk. Path: /home/bojan/Downloads/test/TestProject/obj/project.assets.json
Restore completed in 492.24 ms for /home/bojan/Downloads/test/TestProject/HelloWorld.csproj.

NuGet Config files used:
/home/bojan/.nuget/NuGet/NuGet.Config

Feeds used:
https://api.nuget.org/v3/index.json

This creates the obj directory containing various config files:
/TestProject$ ls
HelloWorld.csproj obj Program.cs

/TestProject$ cd obj/

/TestProject/obj$ ls
HelloWorld.csproj.nuget.g.props HelloWorld.csproj.nuget.g.targets project.assets.json

HelloWorld.csproj.nuget.g.props:
/TestProject/obj$ cat HelloWorld.csproj.nuget.g.props


HelloWorld.csproj.nuget.g.targets:
/TestProject/obj$ cat HelloWorld.csproj.nuget.g.targets


project.assets.json:
/TestProject/obj$ cat project.assets.json


We can now build the project and run the binary output:
/TestProject$ dotnet run
Hello World!

This command built the project and placed the binary output and other build artifacts in the newly created bin directory:
/TestProject$ ls
bin HelloWorld.csproj obj Program.cs

/TestProject$ cd bin

/TestProject/bin$ ls
Debug

/TestProject/bin$ cd Debug/

/TestProject/bin/Debug$ ls
netcoreapp1.1

/TestProject/bin/Debug$ cd netcoreapp1.1/

/TestProject/bin/Debug/netcoreapp1.1$ ls
HelloWorld.deps.json HelloWorld.dll HelloWorld.pdb HelloWorld.runtimeconfig.dev.json HelloWorld.runtimeconfig.json

.deps.json (the dependencies JSON file) lists the dependencies of the application:
/TestProject/bin/Debug/netcoreapp1.1$ cat HelloWorld.deps.json


.runtimeconfig.dev.json:
/TestProject/bin/Debug/netcoreapp1.1$ cat HelloWorld.runtimeconfig.dev.json


The .runtimeconfig.json file specifies the shared runtime and its version for the application:
/TestProject/bin/Debug/netcoreapp1.1$ cat HelloWorld.runtimeconfig.json


It might seem unexpected that the binary output is not an .exe but a .dll. This is because .NET Core's default deployment model is framework-dependent deployment, where the output assembly contains only the compiled source and third-party dependencies, but not the .NET Core libraries - the assembly assumes that the .NET Core framework and runtime are installed on the target machine. This is why we have to use the dotnet tool to run it:

/TestProject/bin/Debug/netcoreapp1.1$ dotnet HelloWorld.dll
Hello World!

The other type of deployment is self-contained deployment, in which case the published output also contains the .NET Core libraries and runtime (and, on Windows, an .exe host) - nothing else needs to be installed on the target system.
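A self-contained build is produced with dotnet publish. A sketch of the steps (the runtime identifier ubuntu.16.04-x64 and the resulting paths are illustrative; the .csproj needs a matching <RuntimeIdentifiers> entry and another restore first):

/TestProject$ dotnet restore
/TestProject$ dotnet publish -c Release -r ubuntu.16.04-x64

The published output then lands under bin/Release/netcoreapp1.1/ubuntu.16.04-x64/publish/ and includes a native HelloWorld launcher next to the application and the runtime libraries.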


References:

.NET Core application deployment
.NET Core command-line interface (CLI) tools

How to install .NET Core on Ubuntu 16.04


.NET Core Installation


It is enough to follow the .NET Core installation guide: select Linux, then the Ubuntu, Mint distro option, and follow the instructions for Ubuntu 16.04.

Add the .NET Core repository to the local repository list:
$ sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet-release/ xenial main" > /etc/apt/sources.list.d/dotnetdev.list'

Import key from the key server:
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 417A0893
Executing: /tmp/tmp.fX22g9wfIT/gpg.1.sh --keyserver
hkp://keyserver.ubuntu.com:80
--recv-keys
417A0893
gpg: requesting key 417A0893 from hkp server keyserver.ubuntu.com
gpg: key 417A0893: public key "MS Open Tech " imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)

Update package info:
$ sudo apt-get update

Install .NET Core:
$ sudo apt-get install dotnet-dev-1.0.1
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
linux-headers-4.4.0-66 linux-headers-4.4.0-66-generic linux-image-4.4.0-66-generic
linux-image-extra-4.4.0-66-generic
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
dotnet-host dotnet-hostfxr-1.0.1 dotnet-hostfxr-1.1.0 dotnet-sharedframework-microsoft.netcore.app-1.0.4
dotnet-sharedframework-microsoft.netcore.app-1.1.1 liblldb-3.6 libllvm3.6v5 liblttng-ust-ctl2 liblttng-ust0 liburcu4
The following NEW packages will be installed
dotnet-dev-1.0.1 dotnet-host dotnet-hostfxr-1.0.1 dotnet-hostfxr-1.1.0
dotnet-sharedframework-microsoft.netcore.app-1.0.4 dotnet-sharedframework-microsoft.netcore.app-1.1.1 liblldb-3.6
libllvm3.6v5 liblttng-ust-ctl2 liblttng-ust0 liburcu4
0 to upgrade, 11 to newly install, 0 to remove and 7 not to upgrade.
Need to get 113 MB of archives.
After this operation, 341 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 libllvm3.6v5 amd64 1:3.6.2-3ubuntu2 [8,075 kB]
Get:2 https://apt-mo.trafficmanager.net/repos/dotnet-release xenial/main amd64 dotnet-host amd64 1.1.0-preview1-001100-00-1 [33.7 kB]
Get:3 https://apt-mo.trafficmanager.net/repos/dotnet-release xenial/main amd64 dotnet-hostfxr-1.0.1 amd64 1.0.1-1 [123 kB]
Get:4 https://apt-mo.trafficmanager.net/repos/dotnet-release xenial/main amd64 dotnet-sharedframework-microsoft.netcore.app-1.0.4 amd64 1.0.4-1 [22.6 MB]
Get:5 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 liblldb-3.6 amd64 1:3.6.2-3ubuntu2 [7,303 kB]
Get:6 http://gb.archive.ubuntu.com/ubuntu xenial/universe amd64 liburcu4 amd64 0.9.1-3 [47.3 kB]
Get:7 http://gb.archive.ubuntu.com/ubuntu xenial/universe amd64 liblttng-ust-ctl2 amd64 2.7.1-1 [72.2 kB]
Get:8 http://gb.archive.ubuntu.com/ubuntu xenial/universe amd64 liblttng-ust0 amd64 2.7.1-1 [127 kB]
Get:9 https://apt-mo.trafficmanager.net/repos/dotnet-release xenial/main amd64 dotnet-hostfxr-1.1.0 amd64 1.1.0-1 [124 kB]
Get:10 https://apt-mo.trafficmanager.net/repos/dotnet-release xenial/main amd64 dotnet-sharedframework-microsoft.netcore.app-1.1.1 amd64 1.1.1-1 [22.9 MB]
Get:11 https://apt-mo.trafficmanager.net/repos/dotnet-release xenial/main amd64 dotnet-dev-1.0.1 amd64 1.0.1-1 [51.4 MB]
Fetched 113 MB in 13min 21s (141 kB/s)
Selecting previously unselected package libllvm3.6v5:amd64.
(Reading database ... 307849 files and directories currently installed.)
Preparing to unpack .../libllvm3.6v5_1%3a3.6.2-3ubuntu2_amd64.deb ...
Unpacking libllvm3.6v5:amd64 (1:3.6.2-3ubuntu2) ...
Selecting previously unselected package liblldb-3.6.
Preparing to unpack .../liblldb-3.6_1%3a3.6.2-3ubuntu2_amd64.deb ...
Unpacking liblldb-3.6 (1:3.6.2-3ubuntu2) ...
Selecting previously unselected package liburcu4:amd64.
Preparing to unpack .../liburcu4_0.9.1-3_amd64.deb ...
Unpacking liburcu4:amd64 (0.9.1-3) ...
Selecting previously unselected package liblttng-ust-ctl2:amd64.
Preparing to unpack .../liblttng-ust-ctl2_2.7.1-1_amd64.deb ...
Unpacking liblttng-ust-ctl2:amd64 (2.7.1-1) ...
Selecting previously unselected package liblttng-ust0:amd64.
Preparing to unpack .../liblttng-ust0_2.7.1-1_amd64.deb ...
Unpacking liblttng-ust0:amd64 (2.7.1-1) ...
Selecting previously unselected package dotnet-host.
Preparing to unpack .../dotnet-host_1.1.0-preview1-001100-00-1_amd64.deb ...
Unpacking dotnet-host (1.1.0-preview1-001100-00-1) ...
Selecting previously unselected package dotnet-hostfxr-1.0.1.
Preparing to unpack .../dotnet-hostfxr-1.0.1_1.0.1-1_amd64.deb ...
Unpacking dotnet-hostfxr-1.0.1 (1.0.1-1) ...
Selecting previously unselected package dotnet-sharedframework-microsoft.netcore.app-1.0.4.
Preparing to unpack .../dotnet-sharedframework-microsoft.netcore.app-1.0.4_1.0.4-1_amd64.deb ...
Unpacking dotnet-sharedframework-microsoft.netcore.app-1.0.4 (1.0.4-1) ...
Selecting previously unselected package dotnet-hostfxr-1.1.0.
Preparing to unpack .../dotnet-hostfxr-1.1.0_1.1.0-1_amd64.deb ...
Unpacking dotnet-hostfxr-1.1.0 (1.1.0-1) ...
Selecting previously unselected package dotnet-sharedframework-microsoft.netcore.app-1.1.1.
Preparing to unpack .../dotnet-sharedframework-microsoft.netcore.app-1.1.1_1.1.1-1_amd64.deb ...
Unpacking dotnet-sharedframework-microsoft.netcore.app-1.1.1 (1.1.1-1) ...
Selecting previously unselected package dotnet-dev-1.0.1.
Preparing to unpack .../dotnet-dev-1.0.1_1.0.1-1_amd64.deb ...
Unpacking dotnet-dev-1.0.1 (1.0.1-1) ...
Processing triggers for libc-bin (2.23-0ubuntu7) ...
/sbin/ldconfig.real: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.5 is not a symbolic link
/sbin/ldconfig.real: /usr/lib/nvidia-375/libEGL.so.1 is not a symbolic link
/sbin/ldconfig.real: /usr/lib32/nvidia-375/libEGL.so.1 is not a symbolic link
Processing triggers for man-db (2.7.5-1) ...
Setting up libllvm3.6v5:amd64 (1:3.6.2-3ubuntu2) ...
Setting up liblldb-3.6 (1:3.6.2-3ubuntu2) ...
Setting up liburcu4:amd64 (0.9.1-3) ...
Setting up liblttng-ust-ctl2:amd64 (2.7.1-1) ...
Setting up liblttng-ust0:amd64 (2.7.1-1) ...
Setting up dotnet-host (1.1.0-preview1-001100-00-1) ...
Setting up dotnet-hostfxr-1.0.1 (1.0.1-1) ...
Setting up dotnet-sharedframework-microsoft.netcore.app-1.0.4 (1.0.4-1) ...
Setting up dotnet-hostfxr-1.1.0 (1.1.0-1) ...
Setting up dotnet-sharedframework-microsoft.netcore.app-1.1.1 (1.1.1-1) ...
Setting up dotnet-dev-1.0.1 (1.0.1-1) ...
Processing triggers for libc-bin (2.23-0ubuntu7) ...
/sbin/ldconfig.real: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.5 is not a symbolic link
/sbin/ldconfig.real: /usr/lib/nvidia-375/libEGL.so.1 is not a symbolic link
/sbin/ldconfig.real: /usr/lib32/nvidia-375/libEGL.so.1 is not a symbolic link

To verify the installation, query the .NET Core version:
$ dotnet --version
1.0.1

.NET Core Uninstallation


Verify which version has been installed:
$ sudo apt --installed list | grep "dotnet-dev"
dotnet-dev-1.0.1/xenial,now 1.0.1-1 amd64 [installed]

Uninstall it:
$ sudo apt-get remove dotnet-dev-1.0.1

Thursday, 13 April 2017

Introduction to H2O with R


H2O is a scalable, open-source machine learning framework with interfaces in Python, R, Java, Scala and C++. It sits on top of other major ML frameworks (MXNet, Caffe, TensorFlow, etc.) and adds a layer of abstraction that unifies and simplifies the API for client/consumer applications. H2O can run in standalone mode, on Hadoop, or within a Spark cluster.


Prerequisites


RStudio
Installed h2o package (the R interface for H2O)

Installation in RStudio:
install.packages("h2o")


Launching


To load the h2o package and its namespace:
library(h2o)

To start an H2O instance and connect to it (running on localhost and listening on port 54321):
h2o.init()
Connection successful!

R is connected to the H2O cluster:
H2O cluster uptime: 3 days 6 hours
H2O cluster version: 3.10.3.6
H2O cluster version age: 1 month and 19 days
H2O cluster name: H2O_started_from_R_bojan_lsr768
H2O cluster total nodes: 1
H2O cluster total memory: 3.46 GB
H2O cluster total cores: 8
H2O cluster allowed cores: 2
H2O cluster healthy: TRUE
H2O Connection ip: localhost
H2O Connection port: 54321
H2O Connection proxy: NA
R Version: R version 3.2.3 (2015-12-10)

This command starts H2O with at most 2 CPUs. If we want to use all CPUs on the system, we have to set the nthreads argument to -1:
h2o.init(nthreads = -1)


Importing Data


To import data from a file into the H2O cloud we can use the h2o.importFile or h2o.uploadFile functions.

If the file resides on the server, we have to use h2o.importFile and specify the file's absolute path (on the server):
frame <- h2o.importFile(file_absolute_path)

The file can be, for example, a CSV (Comma-Separated Values) file.
The output is an instance of the H2OFrame class, which represents a table (2D array).
If the CSV file does not specify column names, H2O automatically assigns the names C1, C2, ... to such columns.

If we want to push a file from the client onto the server, we have to use h2o.uploadFile and specify the file's absolute path (on the client):
h2o.uploadFile(file_absolute_path)


Data Exploration


To get a character vector containing the column names of the H2OFrame object:
h2o.colnames(frame)
[1] "Creditability" "Account Balance" "Duration of Credit (month)"
[4] "Payment Status of Previous Credit" "Purpose" "Credit Amount"
[7] "Value Savings/Stocks" "Length of current employment" "Instalment per cent"
[10] "Sex & Marital Status" "Guarantors" "Duration in Current address"
[13] "Most valuable available asset" "Age (years)" "Concurrent Credits"
[16] "Type of apartment" "No of Credits at this Bank" "Occupation"
[19] "No of dependents" "Telephone" "Foreign Worker"

To print the first 6 rows of the H2OFrame object we can use h2o.head:
h2o.head(frame)

To get a detailed report on each column (type, number of missing values, etc.), use h2o.describe:
h2o.describe(frame)
Label Type Missing Zeros PosInf NegInf Min Max Mean Sigma Cardinality
1 Creditability enum 4 300 0 0 0 1 0.698795180722892 0.459011997978603 2
2 Account Balance enum 0 274 0 0 0 3 4
3 Duration of Credit (month) int 0 0 0 0 4 72 20.903 12.0588144527564
4 Payment Status of Previous Credit enum 0 40 0 0 0 4 5
5 Purpose enum 0 234 0 0 0 9 10
6 Credit Amount int 0 0 0 0 250 18424 3271.248 2822.75175989565
7 Value Savings/Stocks enum 0 603 0 0 0 4 5
...

h2o.summary prints information for each column and treats factor (enum) columns differently from non-enum columns. For factor columns it prints how many times each level occurs and how many values are missing (NA). For other columns it shows the minimum, maximum, median, mean, and the 1st and 3rd quartiles:
h2o.summary(frame)
Creditability Account Balance Duration of Credit (month) Payment Status of Previous Credit Purpose Credit Amount
1 :696 4:394 Min. : 4.0 2:530 3:280 Min. : 250
0 :300 1:274 1st Qu.:12.0 4:293 0:234 1st Qu.: 1359
NA: 4 2:269 Median :18.0 3: 88 2:181 Median : 2304
3: 63 Mean :20.9 1: 49 1:103 Mean : 3271
3rd Qu.:24.0 0: 40 9: 97 3rd Qu.: 3958
Max. :72.0 6: 50 Max. :18424
...

h2o.str displays the structure of an H2OFrame object:
h2o.str(frame)
Class 'H2OFrame'
- attr(*, "op")= chr ":="
- attr(*, "eval")= logi TRUE
- attr(*, "id")= chr "RTMP_sid_a051_45"
- attr(*, "nrow")= int 1000
- attr(*, "ncol")= int 21
- attr(*, "types")=List of 21
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "int"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "int"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "int"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "enum"
..$ : chr "int"
- attr(*, "data")='data.frame': 10 obs. of 21 variables:
..$ Creditability : Factor w/ 2 levels "0","1": 2 2 2 2 2 2 2 2 2 2
..$ Account Balance : Factor w/ 4 levels "1","2","3","4": 1 1 2 1 1 1 1 1 4 2
..$ Duration of Credit (month) : num 18 9 12 12 12 10 8 6 18 24
..$ Payment Status of Previous Credit: Factor w/ 5 levels "0","1","2","3",..: 5 5 3 5 5 5 5 5 5 3
..$ Purpose : Factor w/ 10 levels "0","1","2","3",..: 3 1 9 1 1 1 1 1 4 4
..$ Credit Amount : num 1049 2799 841 2122 2171 ...
..$ Value Savings/Stocks : Factor w/ 5 levels "1","2","3","4",..: 1 1 2 1 1 1 1 1 1 3
..$ Length of current employment : Factor w/ 5 levels "1","2","3","4",..: 2 3 4 3 3 2 4 2 1 1
..$ Instalment per cent : Factor w/ 4 levels "1","2","3","4": 4 2 2 3 4 1 1 2 4 1
..$ Sex & Marital Status : Factor w/ 4 levels "1","2","3","4": 2 3 2 3 3 3 3 3 2 2
..$ Guarantors : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 1 1
..$ Duration in Current address : Factor w/ 4 levels "1","2","3","4": 4 2 4 2 4 3 4 4 4 4
..$ Most valuable available asset : Factor w/ 4 levels "1","2","3","4": 2 1 1 1 2 1 1 1 3 4
..$ Age (years) : num 21 36 23 39 38 48 39 40 65 23
..$ Concurrent Credits : Factor w/ 3 levels "1","2","3": 3 3 3 3 1 3 3 3 3 3
..$ Type of apartment : Factor w/ 3 levels "1","2","3": 1 1 1 1 2 1 2 2 2 1
..$ No of Credits at this Bank : Factor w/ 4 levels "1","2","3","4": 1 2 1 2 2 2 2 1 2 1
..$ Occupation : Factor w/ 4 levels "1","2","3","4": 3 3 2 2 2 2 2 2 1 1
..$ No of dependents : Factor w/ 2 levels "1","2": 1 2 1 2 1 2 1 2 1 1
..$ Telephone : Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1
..$ Foreign Worker : num 1 1 1 2 2 2 2 2 1 1

To draw a histogram of the values in some column:
h2o.hist(data[, "Height"])

We can also use the dollar notation to specify the desired column:
h2o.hist(data$Height)

This will divide the range of all possible values in the "Height" column into 10 equal sub-ranges and, for each of them, draw a vertical bar showing the occurrence frequency. Instead of specifying the column name, we can specify the column number:
h2o.hist(data[, 3])


Data Manipulation


A factor column is one whose possible values belong to some finite set of predefined values (like an enum type in some programming languages). If we want to convert the type of some H2OFrame column i (which is, for example, of type int) into enum, we can use h2o.asfactor:
data[, i] <- h2o.asfactor(data[, i])


To split a single data set into multiple smaller ones, use h2o.splitFrame(frame, ratios, destination_frames, seed). frame is the source data set (an H2OFrame object). ratios is a scalar or a vector of split proportions; if a scalar, it denotes the proportion of the first part; if a vector, its elements must sum to no more than 1, and any remainder forms the final split. destination_frames optionally names the resulting frames, and seed seeds the random number generator so that the split is reproducible.

frame.split = h2o.splitFrame(frame.hex, 0.7)

frame.split = h2o.splitFrame(frame.hex, ratios = c(0.2, 0.5))

The result is a list of H2OFrame objects (credit_samples below is such a split result; single-bracket indexing returns a sub-list, while double brackets return the H2OFrame itself, which is internally an environment):
> typeof(credit_samples)
[1] "list"
> typeof(credit_samples[1])
[1] "list"
> typeof(credit_samples[[1]])
[1] "environment"

To extract an H2OFrame object we can use the double square bracket notation:
frame.training_set <- frame.split[[1]]
frame.test_set <- frame.split[[2]]


To create a new frame which contains rows grouped by values in some specific column we can use h2o.group_by (similar to SQL's GROUP BY):
h2o.group_by(frame, by="Creditability", nrow("Creditability"))
Creditability nrow_Creditability
1 4
2 0 300
3 1 696

h2o.group_by's arguments are: the original frame, the column whose values are used for grouping, and the aggregate function, which maps the values from the multiple rows of each group into a single aggregate value - one per group.

To calculate the natural logarithm of the values in a specific column of an H2OFrame object we can use h2o.log. The output is a new column vector with the same number of elements as the source vector:
h2o.log(data[, "Velocity"])
log(Velocity)
1 6.955593
2 7.937017
3 6.734592
4 7.660114
5 7.682943
6 7.714677

[1000 rows x 1 column]


Machine Learning Algorithms


Generalized Linear Model


For a Generalized Linear Model use h2o.glm. Arguments are:
y - the dependent (response) variable; a string naming a column in the frame.
x - the list of predictors (independent variables); a vector of strings, each naming a predictor column in the frame.
training_frame - the training data set; an H2OFrame object representing the table which contains the columns mentioned above.
family - the response's distribution family, a member of the exponential family. Supported values are: "gaussian", "poisson", "binomial", "multinomial", "gamma", "tweedie".

model <- h2o.glm(y = "VOL", x = c("AGE", "RACE", "PSA", "GLEASON"), training_frame = frame, model_id = "glm_model1", family = "binomial")
|==============================================================================================================================| 100%


The return value is a GLM model, an object of type H2OBinomialModel:

summary(model)
Model Details:
==============

H2OBinomialModel: glm
Model Key: glm_model1
GLM Model: summary
family link regularization number_of_predictors_total number_of_active_predictors
1 binomial logit Elastic Net (alpha = 0.5, lambda = 0.02103 ) 71 24
number_of_iterations training_frame
1 5 RTMP_sid_be14_25

H2OBinomialMetrics: glm
** Reported on training data. **

MSE: 0.1604356
RMSE: 0.4005442
LogLoss: 0.4897464
Mean Per-Class Error: 0.275817
AUC: 0.8095359
Gini: 0.6190719
R^2: 0.2323731
Null Deviance: 736.5862
Residual Deviance: 592.5932
AIC: 642.5932

Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
0 1 Error Rate
0 104 76 0.422222 =76/180
1 55 370 0.129412 =55/425
Totals 159 446 0.216529 =131/605

Maximum Metrics: Maximum metrics at their respective thresholds
metric threshold value idx
1 max f1 0.601181 0.849598 266
2 max f2 0.412327 0.926251 373
3 max f0point5 0.671558 0.848516 221
4 max accuracy 0.605270 0.783471 264
5 max precision 0.947226 1.000000 0
6 max recall 0.258939 1.000000 394
7 max specificity 0.947226 1.000000 0
8 max absolute_mcc 0.671558 0.470868 221
9 max min_per_class_accuracy 0.683051 0.744444 213
10 max mean_per_class_accuracy 0.671558 0.750196 221

Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)`



Scoring History:
timestamp duration iteration negative_log_likelihood objective
1 2017-04-13 08:03:21 0.000 sec 0 368.29309 0.60575
2 2017-04-13 08:03:21 0.007 sec 1 308.02649 0.54301
3 2017-04-13 08:03:21 0.010 sec 2 305.03591 0.54149
4 2017-04-13 08:03:21 0.013 sec 3 304.82054 0.54148
5 2017-04-13 08:03:21 0.023 sec 4 296.50608 0.53809
6 2017-04-13 08:03:21 0.027 sec 5 296.29658 0.53809

Variable Importances: (Extract with `h2o.varimp`)
=================================================

Standardized Coefficient Magnitudes: standardized coefficient magnitudes
names coefficients sign
1 Account Balance.4 0.669816 POS
2 Account Balance.1 0.419182 NEG
3 Duration of Credit (month) 0.294272 NEG
4 Purpose.3 0.273682 POS
5 Payment Status of Previous Credit.4 0.269195 POS

---
names coefficients sign
66 Guarantors.2 0.000000 POS
67 Guarantors.3 0.000000 POS
68 Concurrent Credits.2 0.000000 POS
69 No of dependents.1 0.000000 POS
70 No of dependents.2 0.000000 POS
71 credit_amount_trnsf 0.000000 POS


Neural networks


For (deep) neural networks use h2o.deeplearning.
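A minimal sketch of a call (assuming train_frame is an H2OFrame with the credit data and predictors is a character vector of predictor column names; the hidden-layer sizes and epoch count are illustrative):

# Two hidden layers of 64 neurons each, trained for 10 epochs
dl_model <- h2o.deeplearning(y = "Creditability",
                             x = predictors,
                             training_frame = train_frame,
                             hidden = c(64, 64),
                             epochs = 10)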


Random Forest


For a distributed Random Forest use h2o.randomForest, specifying the predictor columns (x), the response column (y), the number of trees, the maximum tree depth and the training/validation frames, for example:

rfHex <- h2o.randomForest(x=features, y="logSales", ntrees = 500, max_depth = 30, nbins_cats = 1115, training_frame=trainHex, validation_frame=validHex)




Model Analysis


Once the model is trained, we can calculate its performance on a new (unseen) dataset by using h2o.performance. This new dataset has to have the same column names and types as the data set used for training. Arguments are:
model - an H2O object representing a trained model (e.g. H2OBinomialModel)
newdata - an H2OFrame object representing the table with unseen data
train, valid, xval - logical (boolean) values indicating whether the function shall return the training, validation and cross-validation metrics (all computed during training)

The return value is an object of one of the H2O metrics types, e.g. if the model is of type H2OBinomialModel then the metrics object is of type H2OBinomialMetrics.

performance <- h2o.performance(model, newdata = test_frame)
> performance
H2OBinomialMetrics: glm

MSE: 0.1747882
RMSE: 0.4180768
LogLoss: 0.5196604
Mean Per-Class Error: 0.3554121
AUC: 0.7768143
Gini: 0.5536285
R^2: 0.1782966
Null Deviance: 482.3467
Residual Deviance: 406.3744
AIC: 456.3744

Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
0 1 Error Rate
0 44 76 0.633333 =76/120
1 21 250 0.077491 =21/271
Totals 65 326 0.248082 =97/391

Maximum Metrics: Maximum metrics at their respective thresholds
metric threshold value idx
1 max f1 0.547567 0.837521 325
2 max f2 0.283012 0.918644 390
3 max f0point5 0.613870 0.811103 278
4 max accuracy 0.558243 0.751918 319
5 max precision 0.968788 1.000000 0
6 max recall 0.283012 1.000000 390
7 max specificity 0.968788 1.000000 0
8 max absolute_mcc 0.613870 0.387921 278
9 max min_per_class_accuracy 0.692120 0.683333 223
10 max mean_per_class_accuracy 0.772244 0.705028 159

Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)`

To calculate the accuracy of the model (the only supported model type at the moment is H2OBinomialModel), we can use h2o.accuracy. Arguments are:
object - H2OModelMetrics object (H2OBinomialMetrics is currently the only one supported)
thresholds - a value or a list of values between 0.0 and 1.0

h2o.accuracy(performance, 0.95)
[[1]]
[1] 0.314578

To use the trained model on a test set in order to make predictions, we can use h2o.predict.
pred_creditability <- h2o.predict(glm_model1,credit_test)
|==============================================================================================================================| 100%
> pred_creditability
predict p0 p1
1 1 0.3593716 0.6406284
2 1 0.2807624 0.7192376
3 1 0.2209632 0.7790368
4 1 0.1332073 0.8667927
5 1 0.3779753 0.6220247
6 1 0.3902468 0.6097532

[392 rows x 3 columns]


References


https://www.rdocumentation.org/packages/h2o/versions/3.10.3.6/topics/h2o.init
https://www.rdocumentation.org/packages/h2o/versions/3.10.3.6/topics/h2o.importFile
https://www.rdocumentation.org/packages/h2o/versions/3.10.3.6/topics/h2o.str
https://www.rdocumentation.org/packages/h2o/versions/3.10.3.6/topics/h2o.group_by
https://www.rdocumentation.org/packages/h2o/versions/3.10.3.6/topics/h2o.log
https://www.rdocumentation.org/packages/h2o/versions/3.10.3.6/topics/h2o.colnames
https://rdrr.io/cran/h2o/man/h2o.splitFrame.html
http://h2o-release.s3.amazonaws.com/h2o/master/3574/docs-website/h2o-docs/data-munging/splitting-datasets.html
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/booklets/GLMBooklet.pdf
https://h2o-release.s3.amazonaws.com/h2o/rel-slater/9/docs-website/h2o-py/docs/frame.html

Introduction to R


NOTE: WORK IN PROGRESS - THIS IS AN UNFINISHED ARTICLE

Variables


To assign a value to a variable we use the assignment operator <-:
> s <- "This is some string"
> s
[1] "This is some string"

If the variable table contains a table of rows and columns (a data frame) whose columns have names like column1, column2, etc., we can access the columns as variables using the dollar sign notation:

table$column1

We can also use dollar sign notation to add a new column to the table:

table$column1_log <- apply(table[, "column1"], 1, log)





File System


~/ is the home (user) directory in Linux and expands to /home/username. This shortcut is very convenient as it hides the absolute path (and the user name).

normalizePath() converts a file path into a canonical, absolute form.
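A quick example (the "~/dev" path is arbitrary):

# Expand "~" and resolve relative components into an absolute path
normalizePath("~/dev")
# e.g. "/home/username/dev"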

String operations


paste0 concatenates strings without a separator; cat prints its arguments to the console.
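A small example of both:

s <- paste0("Hello", ", ", "world!")   # concatenation with no separator: "Hello, world!"
cat(s, "\n")                           # prints the string without quotes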


Data Exploration


print prints an object to the console.

colnames returns the column names of a matrix or data frame.

summary(data) prints summary statistics for each column:

Output:
Column_name
Min. :1.000
1st Qu.:1.000
Median :2.000
Mean :2.577
3rd Qu.:4.000
Max. :4.000
NA's :4


head returns the first rows of a data frame or vector.

nrow returns the number of rows.
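A quick example of these functions on a data frame (data here stands for any data frame, e.g. one loaded with read.csv):

colnames(data)   # names of all columns
head(data, 3)    # first 3 rows
nrow(data)       # total number of rows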

To find the elements which belong to one set but not to another we can use setdiff:
> a <- 1:5
> a
[1] 1 2 3 4 5
> b <- 3:8
> b
[1] 3 4 5 6 7 8
> setdiff(a, b)
[1] 1 2
> setdiff(b, a)
[1] 6 7 8

Data Manipulation


c - combines its arguments to form a vector

> v <- c(1, 2, 3)
> print(v)
[1] 1 2 3
> for(i in v) print(i)
[1] 1
[1] 2
[1] 3

To transform elements of a data frame or matrix in bulk (by row, by column, or element by element), use apply(X, MARGIN, FUN, ...). X is a vector, matrix or data frame, and MARGIN determines over which dimension the function FUN is applied: 1 for rows, 2 for columns, and c(1, 2) for each individual element.

apply(data[, "Credit Amount"], 1, log)
C1
1 6.955593
2 7.937017
3 6.734592
4 7.660114
5 7.682943
6 7.714677

[1000 rows x 1 column]


https://stat.ethz.ch/R-manual/R-devel/library/base/html/normalizePath.html
https://wiki.mobilizingcs.org/rstudio/examining_data
https://www.stat.berkeley.edu/~spector/R.pdf
https://stat.ethz.ch/R-manual/R-devel/library/base/html/c.html

Sunday, 9 April 2017

Building and debugging C++ code in VSCode on Ubuntu

We have a "Hello, world!" example in the main.cpp file and want to build it with the g++ compiler and debug it with the gdb debugger in VSCode on Ubuntu.

Packages


C/C++ for Visual Studio Code (ms-vscode.cpptools)
C++ Intellisense (austin.code-gnu-global) (optional)

Building


We have to create and set up a task.

Open Command Palette (CTRL+SHIFT+P).
Type in task and select Tasks: Configure Task Runner and then Others.
tasks.json gets created in the workspace's .vscode directory and shows up in the editor.

VSCode defines variables that can be used in tasks.json.

We can verify what each of them expands into if we simply change the args value for the echo command in the default version of the config file.

tasks.json:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "0.1.0",
"command": "echo",
"isShellCommand": true,
"args": ["${file}"],
"showOutput": "always"
}

If we press CTRL+SHIFT+B, the output will be something like:
/home/user/dev/cpp/my_project/.vscode/tasks.json

Similarly, ${workspaceRoot} expands to the path to the workspace's root directory so we just have to append the name of the .cpp file to get its full path:

tasks.json:
...
"args": ["${workspaceRoot}/main.cpp"],
...

Running the task now gives the following output:
/home/user/dev/cpp/my_project/main.cpp

If we change command to g++...

tasks.json:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "0.1.0",
"command": "g++",
"isShellCommand": true,
"args": ["${workspaceRoot}/main.cpp"],
"showOutput": "always"
}

...after hitting CTRL+SHIFT+B the g++ compiler compiles main.cpp and a.out appears in the my_project directory.

Running


If we hit F5, VSCode launches the program using the configuration from launch.json:

launch.json:
{
"version": "0.2.0",
"configurations": [
{
"name": "C++ Launch",
"type": "cppdbg",
"request": "launch",
"program": "${workspaceRoot}/a.out",
"args": [],
"stopAtEntry": false,
"cwd": "${workspaceRoot}",
"environment": [],
"externalConsole": true,
"linux": {
"MIMode": "gdb",
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
}
]
},
"osx": {
"MIMode": "lldb"
},
"windows": {
"MIMode": "gdb",
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
}
]
}
},
{
"name": "C++ Attach",
"type": "cppdbg",
"request": "attach",
"program": "enter program name, for example ${workspaceRoot}/a.out",
"processId": "${command:pickProcess}",
"linux": {
"MIMode": "gdb",
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
}
]
},
"osx": {
"MIMode": "lldb"
},
"windows": {
"MIMode": "gdb",
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
}
]
}
}
]
}

Debugging


In order to be able to use breakpoints, we have to enable generation of debug information when building the source code. It is enough to add -g to the g++ arguments:

tasks.json:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "0.1.0",
"command": "g++",
"isShellCommand": true,
"args": ["-g", "${workspaceRoot}/main.cpp"],
"showOutput": "always"
}

Sunday, 26 March 2017

How to solve distorted graphics around window edges on Ubuntu 16.04

I had been working in GNU Octave on my Ubuntu 16.04 machine, and a couple of times it crashed, or made all windows in the OS lose their top command bar, after I hit CTRL-C to terminate the running script.

Some time after these events, even after system updates and reboots, I noticed that the graphics were heavily distorted around the edges of child windows in any application. It would look something like this:


The solution which helped was to install CompizConfig Settings Manager:
$ sudo apt-get install compizconfig-settings-manager
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
linux-headers-4.4.0-59 linux-headers-4.4.0-59-generic linux-image-4.4.0-59-generic linux-image-extra-4.4.0-59-generic
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
python-cairo python-compizconfig python-gobject-2 python-gtk2
Suggested packages:
python-gobject-2-dbg python-gtk2-doc
The following NEW packages will be installed
compizconfig-settings-manager python-cairo python-compizconfig python-gobject-2 python-gtk2
0 to upgrade, 5 to newly install, 0 to remove and 0 not to upgrade.
Need to get 1,456 kB of archives.
After this operation, 9,461 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 python-cairo amd64 1.8.8-2 [41.3 kB]
Get:2 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 python-gobject-2 amd64 2.28.6-12ubuntu1 [181 kB]
Get:3 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 python-gtk2 amd64 2.24.0-4ubuntu1 [620 kB]
Get:4 http://gb.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python-compizconfig amd64 1:0.9.12.2+16.04.20160823-0ubuntu1 [38.2 kB]
Get:5 http://gb.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 compizconfig-settings-manager all 1:0.9.12.2+16.04.20160823-0ubuntu1 [575 kB]
Fetched 1,456 kB in 7s (190 kB/s)
Selecting previously unselected package python-cairo.
(Reading database ... 306648 files and directories currently installed.)
Preparing to unpack .../python-cairo_1.8.8-2_amd64.deb ...
Unpacking python-cairo (1.8.8-2) ...
Selecting previously unselected package python-gobject-2.
Preparing to unpack .../python-gobject-2_2.28.6-12ubuntu1_amd64.deb ...
Unpacking python-gobject-2 (2.28.6-12ubuntu1) ...
Selecting previously unselected package python-gtk2.
Preparing to unpack .../python-gtk2_2.24.0-4ubuntu1_amd64.deb ...
Unpacking python-gtk2 (2.24.0-4ubuntu1) ...
Selecting previously unselected package python-compizconfig:amd64.
Preparing to unpack .../python-compizconfig_1%3a0.9.12.2+16.04.20160823-0ubuntu1_amd64.deb ...
Unpacking python-compizconfig:amd64 (1:0.9.12.2+16.04.20160823-0ubuntu1) ...
Selecting previously unselected package compizconfig-settings-manager.
Preparing to unpack .../compizconfig-settings-manager_1%3a0.9.12.2+16.04.20160823-0ubuntu1_all.deb ...
Unpacking compizconfig-settings-manager (1:0.9.12.2+16.04.20160823-0ubuntu1) ...
Processing triggers for libc-bin (2.23-0ubuntu7) ...
/sbin/ldconfig.real: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.5 is not a symbolic link

/sbin/ldconfig.real: /usr/lib/nvidia-375/libEGL.so.1 is not a symbolic link

/sbin/ldconfig.real: /usr/lib32/nvidia-375/libEGL.so.1 is not a symbolic link

Processing triggers for hicolor-icon-theme (0.15-0ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5.1) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160824-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Setting up python-cairo (1.8.8-2) ...
Setting up python-gobject-2 (2.28.6-12ubuntu1) ...
Setting up python-gtk2 (2.24.0-4ubuntu1) ...
Setting up python-compizconfig:amd64 (1:0.9.12.2+16.04.20160823-0ubuntu1) ...
Setting up compizconfig-settings-manager (1:0.9.12.2+16.04.20160823-0ubuntu1) ...
Processing triggers for libc-bin (2.23-0ubuntu7) ...
/sbin/ldconfig.real: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.5 is not a symbolic link

/sbin/ldconfig.real: /usr/lib/nvidia-375/libEGL.so.1 is not a symbolic link

/sbin/ldconfig.real: /usr/lib32/nvidia-375/libEGL.so.1 is not a symbolic link

...then open it and turn off the Fading Windows effect:



Installing TensorFlow on Ubuntu 16.04



As per the TensorFlow installation manual, TensorFlow can be installed with either CPU or GPU support.

In order to check whether we can install TensorFlow with GPU support, we have to check if we have an NVIDIA graphics card:

$ lspci
...
01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GT 640 OEM] (rev a1)
...

We do have one, so we can go forward with installing the GPU-accelerated version of TensorFlow.

Installing TensorFlow Dependencies


1) CUDA Toolkit 8.0


If CUDA is installed on the local machine, we should be able to find the location of the NVIDIA CUDA compiler (nvcc) and check its version (which matches the version of the CUDA package):

$ which nvcc
/usr/local/cuda-8.0/bin/nvcc

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Sun_Sep__4_22:14:01_CDT_2016
Cuda compilation tools, release 8.0, V8.0.44

If CUDA is not installed, download the Ubuntu installer from the CUDA Downloads page and follow the installation instructions in the CUDA Quick Start Guide.

In our case, CUDA 8.0 was installed so no upgrade was necessary.

2) NVIDIA drivers associated with CUDA Toolkit 8.0


They get installed as part of the CUDA installation and are loaded upon the next system boot.

Installed NVIDIA driver packages have names matching the pattern nvidia-XXX, where XXX is a number, so we can use:
$ sudo apt list | grep -P 'nvidia-[0-9]+'
nvidia-304/xenial-updates,xenial-security 304.134-0ubuntu0.16.04.1 amd64
nvidia-304-dev/xenial-updates,xenial-security 304.134-0ubuntu0.16.04.1 amd64
nvidia-304-updates/xenial-updates,xenial-security 304.134-0ubuntu0.16.04.1 amd64
nvidia-304-updates-dev/xenial-updates,xenial-security 304.134-0ubuntu0.16.04.1 amd64
nvidia-331/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-331-dev/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-331-updates/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-331-updates-dev/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-331-updates-uvm/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-331-uvm/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-340/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-340-dev/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-340-updates/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-340-updates-dev/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-340-updates-uvm/xenial 340.96-0ubuntu2 amd64
nvidia-340-uvm/xenial-updates,xenial-security 340.101-0ubuntu0.16.04.1 amd64
nvidia-346/xenial 352.63-0ubuntu3 amd64
nvidia-346-dev/xenial 352.63-0ubuntu3 amd64
nvidia-346-updates/xenial 352.63-0ubuntu3 amd64
nvidia-346-updates-dev/xenial 352.63-0ubuntu3 amd64
nvidia-352/xenial 361.42-0ubuntu2 i386
nvidia-352-dev/xenial 361.42-0ubuntu2 i386
nvidia-352-updates/xenial 361.42-0ubuntu2 i386
nvidia-352-updates-dev/xenial 361.42-0ubuntu2 i386
nvidia-361/xenial-updates,xenial-security 367.57-0ubuntu0.16.04.1 amd64
nvidia-361-dev/xenial-updates,xenial-security 367.57-0ubuntu0.16.04.1 amd64
nvidia-361-updates/xenial 361.42-0ubuntu2 i386
nvidia-361-updates-dev/xenial 361.42-0ubuntu2 i386
nvidia-367/xenial-updates,xenial-security,now 367.57-0ubuntu0.16.04.1 amd64 [installed,automatic]
nvidia-367-dev/xenial-updates,xenial-security,now 367.57-0ubuntu0.16.04.1 amd64 [installed,automatic]

The driver in use is the latest one, version 367.57.

We can also run NVIDIA System Management Interface:
$ nvidia-smi
Sun Mar 19 18:24:12 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57 Driver Version: 367.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GT 640 Off | 0000:01:00.0 N/A | N/A |
| 40% 32C P8 N/A / N/A | 528MiB / 1990MiB | N/A Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+

...or query NVIDIA settings for NvidiaDriverVersion:
$ nvidia-settings -q NvidiaDriverVersion
Attribute 'NvidiaDriverVersion' (my-PC:0.0): 367.57
Attribute 'NvidiaDriverVersion' (my-PC:0[gpu:0]): 367.57

...or open the NVIDIA settings GUI (we have to open it from the terminal) and check the driver version there:
$ nvidia-settings

3) cuDNN v5.1


Installation of cuDNN is a matter of downloading its package, uncompressing it, and copying the header file and the libraries into the CUDA installation directories.
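For reference, the copy step typically looks something like this (the archive name is an assumption matching cuDNN v5.1 for CUDA 8.0; adjust it to the file actually downloaded):

$ tar -xzvf cudnn-8.0-linux-x64-v5.1.tgz
$ sudo cp cuda/include/cudnn.h /usr/local/cuda-8.0/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda-8.0/lib64
$ sudo chmod a+r /usr/local/cuda-8.0/include/cudnn.h /usr/local/cuda-8.0/lib64/libcudnn*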

We detected earlier that the CUDA root directory is /usr/local/cuda-8.0/. We have to check that the /usr/local/cuda-8.0/include and /usr/local/cuda-8.0/lib64 directories contain cuDNN. In our case they do, as I already had cuDNN installed:
/usr/local/cuda-8.0/include$ ls cudnn*
cudnn.h

/usr/local/cuda-8.0/lib64$ ls libcudnn*
libcudnn.so libcudnn.so.5 libcudnn.so.5.1.5 libcudnn_static.a

The header file contains the version information:
/usr/local/cuda-8.0/include$ cat cudnn.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 5
#define CUDNN_MINOR 1
#define CUDNN_PATCHLEVEL 5

We verified that we have cuDNN v5.1 installed.

4) GPU card with CUDA Compute Capability 3.0 or higher


Compute Capability is the version of the GPU's architecture generation.

NVIDIA's CUDA GPUs page lists two versions of GeForce GT 640:
GeForce GT 640 (GDDR5) 3.5
GeForce GT 640 (GDDR3) 2.1

We have to find out whether the GeForce GT 640 OEM has GDDR5 or GDDR3 memory.

The GeForce GT 640 OEM specification does not mention Compute Capability.

I tried various tools with no success:

$ nvidia-smi
Sun Mar 19 22:52:36 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57 Driver Version: 367.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GT 640 Off | 0000:01:00.0 N/A | N/A |
| 40% 33C P8 N/A / N/A | 721MiB / 1990MiB | N/A Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+

$ nvidia-smi --query

==============NVSMI LOG==============

Timestamp : Sun Mar 19 22:50:19 2017
Driver Version : 367.57

Attached GPUs : 1
GPU 0000:01:00.0
Product Name : GeForce GT 640
Product Brand : GeForce
Display Mode : N/A
Display Active : N/A
Persistence Mode : Disabled
Accounting Mode : N/A
Accounting Mode Buffer Size : N/A
Driver Model
Current : N/A
Pending : N/A
Serial Number : N/A
GPU UUID : GPU-f2583df9-404d-2564-d332-e7878a94d087
Minor Number : 0
VBIOS Version : 80.07.53.00.21
MultiGPU Board : N/A
Board ID : N/A
GPU Part Number : N/A
Inforom Version
Image Version : N/A
OEM Object : N/A
ECC Object : N/A
Power Management Object : N/A
GPU Operation Mode
Current : N/A
Pending : N/A
GPU Virtualization Mode
Virtualization mode : N/A
PCI
Bus : 0x01
Device : 0x00
Domain : 0x0000
Device Id : 0x0FC010DE
Bus Id : 0000:01:00.0
Sub System Id : 0x3B861642
GPU Link Info
PCIe Generation
Max : N/A
Current : N/A
Link Width
Max : N/A
Current : N/A
Bridge Chip
Type : N/A
Firmware : N/A
Replays since reset : 0
Tx Throughput : N/A
Rx Throughput : N/A
Fan Speed : 40 %
Performance State : P0
Clocks Throttle Reasons : N/A
FB Memory Usage
Total : 1990 MiB
Used : 733 MiB
Free : 1257 MiB
BAR1 Memory Usage
Total : N/A
Used : N/A
Free : N/A
Compute Mode : Default
Utilization
Gpu : N/A
Memory : N/A
Encoder : N/A
Decoder : N/A
Ecc Mode
Current : N/A
Pending : N/A
ECC Errors
Volatile
Single Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
Total : N/A
Double Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
Total : N/A
Aggregate
Single Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
Total : N/A
Double Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Texture Shared : N/A
Total : N/A
Retired Pages
Single Bit ECC : N/A
Double Bit ECC : N/A
Pending : N/A
Temperature
GPU Current Temp : 38 C
GPU Shutdown Temp : N/A
GPU Slowdown Temp : N/A
Power Readings
Power Management : N/A
Power Draw : N/A
Power Limit : N/A
Default Power Limit : N/A
Enforced Power Limit : N/A
Min Power Limit : N/A
Max Power Limit : N/A
Clocks
Graphics : N/A
SM : N/A
Memory : N/A
Video : N/A
Applications Clocks
Graphics : N/A
Memory : N/A
Default Applications Clocks
Graphics : N/A
Memory : N/A
Max Clocks
Graphics : N/A
SM : N/A
Memory : N/A
Video : N/A
Clock Policy
Auto Boost : N/A
Auto Boost Default : N/A
Processes : N/A

$ sudo dmidecode -t memory
# dmidecode 3.0
Getting SMBIOS data from sysfs.
SMBIOS 2.7 present.

Handle 0x000F, DMI type 16, 23 bytes
Physical Memory Array
Location: System Board Or Motherboard
Use: System Memory
Error Correction Type: None
Maximum Capacity: 32 GB
Error Information Handle: Not Provided
Number Of Devices: 4

Handle 0x0011, DMI type 17, 34 bytes
Memory Device
Array Handle: 0x000F
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 4096 MB
Form Factor: DIMM
Set: None
Locator: ChannelA-DIMM0
Bank Locator: BANK 0
Type: DDR3
Type Detail: Synchronous
Speed: 1600 MHz
Manufacturer: 0443
Serial Number: 42718838
Asset Tag: 9876543210
Part Number: RMR5040ED58E9W1600
Rank: 2
Configured Clock Speed: 1600 MHz

Handle 0x0013, DMI type 17, 34 bytes
Memory Device
Array Handle: 0x000F
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 4096 MB
Form Factor: DIMM
Set: None
Locator: ChannelA-DIMM1
Bank Locator: BANK 1
Type: DDR3
Type Detail: Synchronous
Speed: 1600 MHz
Manufacturer: 0443
Serial Number: 42218738
Asset Tag: 9876543210
Part Number: RMR5040ED58E9W1600
Rank: 2
Configured Clock Speed: 1600 MHz

Handle 0x0015, DMI type 17, 34 bytes
Memory Device
Array Handle: 0x000F
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 4096 MB
Form Factor: DIMM
Set: None
Locator: ChannelB-DIMM0
Bank Locator: BANK 2
Type: DDR3
Type Detail: Synchronous
Speed: 1600 MHz
Manufacturer: 0443
Serial Number: 42508938
Asset Tag: 9876543210
Part Number: RMR5040ED58E9W1600
Rank: 2
Configured Clock Speed: 1600 MHz

Handle 0x0017, DMI type 17, 34 bytes
Memory Device
Array Handle: 0x000F
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 4096 MB
Form Factor: DIMM
Set: None
Locator: ChannelB-DIMM1
Bank Locator: BANK 3
Type: DDR3
Type Detail: Synchronous
Speed: 1600 MHz
Manufacturer: 0443
Serial Number: 42B3C138
Asset Tag: 9876543210
Part Number: RMR5040ED58E9W1600
Rank: 2
Configured Clock Speed: 1600 MHz

$ sudo lshw -short -C memory
H/W path Device Class Description
====================================================================
/0/0 memory 64KiB BIOS
/0/c memory 1MiB L2 cache
/0/d memory 256KiB L1 cache
/0/e memory 8MiB L3 cache
/0/f memory 16GiB System Memory
/0/f/0 memory 4GiB DIMM DDR3 Synchron
/0/f/1 memory 4GiB DIMM DDR3 Synchron
/0/f/2 memory 4GiB DIMM DDR3 Synchron
/0/f/3 memory 4GiB DIMM DDR3 Synchron

$ sudo lshw -C video
*-display
description: VGA compatible controller
product: GK107 [GeForce GT 640 OEM]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:31 memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:e000(size=128) memory:f7000000-f707ffff

$ lspci
...
01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GT 640 OEM] (rev a1)
...

$ sudo lspci -v -s 01:00.0
01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GT 640 OEM] (rev a1) (prog-if 00 [VGA controller])
Subsystem: Bitland(ShenZhen) Information Technology Co., Ltd. GK107 [GeForce GT 640 OEM]
Flags: bus master, fast devsel, latency 0, IRQ 31
Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
Memory at e0000000 (64-bit, prefetchable) [size=256M]
Memory at f0000000 (64-bit, prefetchable) [size=32M]
I/O ports at e000 [size=128]
[virtual] Expansion ROM at f7000000 [disabled] [size=512K]
Capabilities: [60] Power Management version 3
Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [78] Express Endpoint, MSI 00
Capabilities: [b4] Vendor Specific Information: Len=14
Capabilities: [100] Virtual Channel
Capabilities: [128] Power Budgeting
Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024
Capabilities: [900] #19
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_367, nvidia_367_drm

None of these tools gave me the answer. I found on Wikipedia's page List of Nvidia graphics processing units that "The GeForce GT 640 (OEM) card is a rebranded GeForce GT 545 (DDR3)." That would mean that Compute Capability is 2.1.

Nevertheless, I decided to try the CUDA API cudaGetDeviceProperties, which populates the cudaDeviceProp structure. This structure has the fields major and minor, which denote the Compute Capability. I compiled and ran a small device-query program along those lines (a sketch is shown below) and got:
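A minimal sketch of such a device query (compile it with nvcc; the field names come from the CUDA runtime API, and the derived bandwidth formula matches the numbers printed below):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device Number: %d\n", i);
        printf("  Device name: %s\n", prop.name);
        printf("  Memory Clock Rate (KHz): %d\n", prop.memoryClockRate);
        printf("  Memory Bus Width (bits): %d\n", prop.memoryBusWidth);
        // Peak bandwidth: DDR transfers twice per clock, bus width in bytes, result in GB/s
        printf("  Peak Memory Bandwidth (GB/s): %f\n",
               2.0 * prop.memoryClockRate * (prop.memoryBusWidth / 8) / 1.0e6);
        printf("  Compute Capability: %d.%d\n", prop.major, prop.minor);
    }
    return 0;
}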

$ ./CudaDeviceInfo.out
Device Number: 0
Device name: GeForce GT 640
Memory Clock Rate (KHz): 891000
Memory Bus Width (bits): 128
Peak Memory Bandwidth (GB/s): 28.512000
Compute Capability: 3.0

Hooray! Compute Capability is 3.0, we can use this GPU!

5) libcupti-dev library


This library is the NVIDIA CUDA Profiling Tools Interface (CUPTI). It can be installed as follows:
$ sudo apt-get install libcupti-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
linux-headers-4.4.0-31 linux-headers-4.4.0-31-generic linux-headers-4.4.0-62 linux-headers-4.4.0-62-generic linux-image-4.4.0-31-generic
linux-image-4.4.0-62-generic linux-image-extra-4.4.0-31-generic linux-image-extra-4.4.0-62-generic
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
libcupti-doc libcupti7.5
The following NEW packages will be installed
libcupti-dev libcupti-doc libcupti7.5
0 to upgrade, 3 to newly install, 0 to remove and 131 not to upgrade.
Need to get 1,113 kB of archives.
After this operation, 4,915 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://gb.archive.ubuntu.com/ubuntu xenial/multiverse amd64 libcupti7.5 amd64 7.5.18-0ubuntu1 [993 kB]
Get:2 http://gb.archive.ubuntu.com/ubuntu xenial/multiverse amd64 libcupti-dev amd64 7.5.18-0ubuntu1 [65.3 kB]
Get:3 http://gb.archive.ubuntu.com/ubuntu xenial/multiverse amd64 libcupti-doc all 7.5.18-0ubuntu1 [55.2 kB]
Fetched 1,113 kB in 4s (257 kB/s)
Selecting previously unselected package libcupti7.5:amd64.
(Reading database ... 369041 files and directories currently installed.)
Preparing to unpack .../libcupti7.5_7.5.18-0ubuntu1_amd64.deb ...
Unpacking libcupti7.5:amd64 (7.5.18-0ubuntu1) ...
Selecting previously unselected package libcupti-dev:amd64.
Preparing to unpack .../libcupti-dev_7.5.18-0ubuntu1_amd64.deb ...
Unpacking libcupti-dev:amd64 (7.5.18-0ubuntu1) ...
Selecting previously unselected package libcupti-doc.
Preparing to unpack .../libcupti-doc_7.5.18-0ubuntu1_all.deb ...
Unpacking libcupti-doc (7.5.18-0ubuntu1) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
/sbin/ldconfig.real: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.5 is not a symbolic link

Setting up libcupti7.5:amd64 (7.5.18-0ubuntu1) ...
Setting up libcupti-dev:amd64 (7.5.18-0ubuntu1) ...
Setting up libcupti-doc (7.5.18-0ubuntu1) ...
Processing triggers for libc-bin (2.23-0ubuntu5) ...
/sbin/ldconfig.real: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.5 is not a symbolic link

TensorFlow Installation


There are four options for installing TensorFlow: using virtualenv, native pip, Docker, or Anaconda. The official documentation recommends virtualenv, so we'll go that way.

I haven't used virtualenv before, so let's install it (verifying at the same time that the other required packages are present):
$ sudo apt-get install python-pip python-dev python-virtualenv

Reading package lists... Done
Building dependency tree
Reading state information... Done
python-dev is already the newest version (2.7.11-1).
python-dev set to manually installed.
The following additional packages will be installed:
libpython-all-dev python-all python-all-dev python-setuptools python-wheel
python3-virtualenv virtualenv
Suggested packages:
python-setuptools-doc
The following NEW packages will be installed
libpython-all-dev python-all python-all-dev python-pip python-setuptools
python-virtualenv python-wheel python3-virtualenv virtualenv
0 to upgrade, 9 to newly install, 0 to remove and 0 not to upgrade.
Need to get 459 kB of archives.
After this operation, 1,711 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 libpython-all-dev amd64 2.7.11-1 [992 B]
Get:2 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 python-all amd64 2.7.11-1 [978 B]
Get:3 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 python-all-dev amd64 2.7.11-1 [1,000 B]
Get:4 http://gb.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python-pip all 8.1.1-2ubuntu0.4 [144 kB]
Get:5 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 python-setuptools all 20.7.0-1 [169 kB]
Get:6 http://gb.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python-virtualenv all 15.0.1+ds-3ubuntu1 [46.6 kB]
Get:7 http://gb.archive.ubuntu.com/ubuntu xenial/universe amd64 python-wheel all 0.29.0-1 [48.0 kB]
Get:8 http://gb.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 python3-virtualenv all 15.0.1+ds-3ubuntu1 [43.2 kB]
Get:9 http://gb.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 virtualenv all 15.0.1+ds-3ubuntu1 [4,342 B]
Fetched 459 kB in 21s (20.9 kB/s)
Selecting previously unselected package libpython-all-dev:amd64.
(Reading database ... 304681 files and directories currently installed.)
Preparing to unpack .../libpython-all-dev_2.7.11-1_amd64.deb ...
Unpacking libpython-all-dev:amd64 (2.7.11-1) ...
Selecting previously unselected package python-all.
Preparing to unpack .../python-all_2.7.11-1_amd64.deb ...
Unpacking python-all (2.7.11-1) ...
Selecting previously unselected package python-all-dev.
Preparing to unpack .../python-all-dev_2.7.11-1_amd64.deb ...
Unpacking python-all-dev (2.7.11-1) ...
Selecting previously unselected package python-pip.
Preparing to unpack .../python-pip_8.1.1-2ubuntu0.4_all.deb ...
Unpacking python-pip (8.1.1-2ubuntu0.4) ...
Selecting previously unselected package python-setuptools.
Preparing to unpack .../python-setuptools_20.7.0-1_all.deb ...
Unpacking python-setuptools (20.7.0-1) ...
Selecting previously unselected package python-virtualenv.
Preparing to unpack .../python-virtualenv_15.0.1+ds-3ubuntu1_all.deb ...
Unpacking python-virtualenv (15.0.1+ds-3ubuntu1) ...
Selecting previously unselected package python-wheel.
Preparing to unpack .../python-wheel_0.29.0-1_all.deb ...
Unpacking python-wheel (0.29.0-1) ...
Selecting previously unselected package python3-virtualenv.
Preparing to unpack .../python3-virtualenv_15.0.1+ds-3ubuntu1_all.deb ...
Unpacking python3-virtualenv (15.0.1+ds-3ubuntu1) ...
Selecting previously unselected package virtualenv.
Preparing to unpack .../virtualenv_15.0.1+ds-3ubuntu1_all.deb ...
Unpacking virtualenv (15.0.1+ds-3ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up libpython-all-dev:amd64 (2.7.11-1) ...
Setting up python-all (2.7.11-1) ...
Setting up python-all-dev (2.7.11-1) ...
Setting up python-pip (8.1.1-2ubuntu0.4) ...
Setting up python-setuptools (20.7.0-1) ...
Setting up python-virtualenv (15.0.1+ds-3ubuntu1) ...
Setting up python-wheel (0.29.0-1) ...
Setting up python3-virtualenv (15.0.1+ds-3ubuntu1) ...
Setting up virtualenv (15.0.1+ds-3ubuntu1) ...

We can verify that virtualenv has been properly installed by querying its version:
$ virtualenv --version
15.0.1

or simply call it to get usage help:
$ virtualenv

Running virtualenv with interpreter /usr/bin/python2
You must provide a DEST_DIR
Usage: virtualenv.py [OPTIONS] DEST_DIR

Options:
--version show program's version number and exit
-h, --help show this help message and exit
-v, --verbose Increase verbosity.
-q, --quiet Decrease verbosity.
-p PYTHON_EXE, --python=PYTHON_EXE
The Python interpreter to use, e.g.,
--python=python2.5 will use the python2.5 interpreter
to create the new environment. The default is the
python2 interpreter on your path (e.g.
/usr/bin/python2)
--clear Clear out the non-root install and start from scratch.
--no-site-packages DEPRECATED. Retained only for backward compatibility.
Not having access to global site-packages is now the
default behavior.
--system-site-packages
Give the virtual environment access to the global
site-packages.
--always-copy Always copy files rather than symlinking.
--unzip-setuptools Unzip Setuptools when installing it.
--relocatable Make an EXISTING virtualenv environment relocatable.
This fixes up scripts and makes all .pth files
relative.
--no-setuptools Do not install setuptools in the new virtualenv.
--no-pip Do not install pip in the new virtualenv.
--no-wheel Do not install wheel in the new virtualenv.
--extra-search-dir=DIR
Directory to look for setuptools/pip distributions in.
This option can be used multiple times.
--download Download preinstalled packages from PyPI.
--no-download, --never-download
Do not download preinstalled packages from PyPI.
--prompt=PROMPT Provides an alternative prompt prefix for this
environment.
--setuptools DEPRECATED. Retained only for backward compatibility.
This option has no effect.
--distribute DEPRECATED. Retained only for backward compatibility.
This option has no effect.

We can now create a virtualenv for TensorFlow in the directory "tensorflow":
$ virtualenv --python=python3 --system-site-packages tensorflow
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /home/user_name/dev/tensorflow/bin/python3
Also creating executable in /home/user_name/dev/tensorflow/bin/python
Installing setuptools, pkg_resources, pip, wheel...done.

After activating the virtual environment, the prompt is prefixed with the name of the directory in which we created it (e.g. (tensorflow)):
tensorflow/bin$ source activate
(tensorflow) user@user-machine:~/tensorflow/bin$
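
A quick, illustrative way to confirm that we are really inside the virtual environment is to ask Python which interpreter and prefix it is using (a minimal check, nothing TensorFlow-specific):

# run inside the activated environment, e.g. by starting python3
import sys

print(sys.executable)   # should point into the tensorflow/bin directory
print(sys.prefix)       # should be the tensorflow virtualenv directory, not /usr

If both paths point into the tensorflow directory, python and pip in this shell use the isolated environment rather than the system-wide installation.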

We want to install GPU-enabled TensorFlow for Python 3:
$ pip3 install --upgrade tensorflow-gpu
Collecting tensorflow-gpu
Downloading tensorflow_gpu-1.0.1-cp35-cp35m-manylinux1_x86_64.whl (94.8MB)
100% |████████████████████████████████| 94.8MB 18kB/s
Requirement already up-to-date: wheel>=0.26 in /home/bojan/dev/tensorflow/lib/python3.5/site-packages (from tensorflow-gpu)
Requirement already up-to-date: protobuf>=3.1.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow-gpu)
Collecting numpy>=1.11.0 (from tensorflow-gpu)
Downloading numpy-1.12.1-cp35-cp35m-manylinux1_x86_64.whl (16.8MB)
100% |████████████████████████████████| 16.8MB 87kB/s
Requirement already up-to-date: six>=1.10.0 in /home/bojan/dev/tensorflow/lib/python3.5/site-packages (from tensorflow-gpu)
Requirement already up-to-date: setuptools in /home/bojan/dev/tensorflow/lib/python3.5/site-packages (from protobuf>=3.1.0->tensorflow-gpu)
Requirement already up-to-date: appdirs>=1.4.0 in /home/bojan/dev/tensorflow/lib/python3.5/site-packages (from setuptools->protobuf>=3.1.0->tensorflow-gpu)
Requirement already up-to-date: packaging>=16.8 in /home/bojan/dev/tensorflow/lib/python3.5/site-packages (from setuptools->protobuf>=3.1.0->tensorflow-gpu)
Requirement already up-to-date: pyparsing in /home/bojan/dev/tensorflow/lib/python3.5/site-packages (from packaging>=16.8->setuptools->protobuf>=3.1.0->tensorflow-gpu)
Installing collected packages: numpy, tensorflow-gpu
Found existing installation: numpy 1.12.0
Not uninstalling numpy at /usr/local/lib/python3.5/dist-packages, outside environment /home/bojan/dev/tensorflow
Successfully installed numpy-1.12.1 tensorflow-gpu-1.0.1

TensorFlow Installation Verification


We can run a short program in the Python interactive shell:
$ python
Python 3.5.2 (default, Nov 17 2016, 17:05:23)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GT 640
major: 3 minor: 0 memoryClockRate (GHz) 0.797
pciBusID 0000:01:00.0
Total memory: 1.94GiB
Free memory: 1.46GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 640, pci bus id: 0000:01:00.0)
>>> print(sess.run(hello))
b'Hello, TensorFlow!'
>>>

Yay! We're ready to use TensorFlow!
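
To go beyond the interactive shell, we can put the same sanity check into a small standalone script. This is just a minimal sketch (the file name hello_tf.py is my own choice; it assumes the tensorflow virtualenv is active and tensorflow-gpu 1.0.x is installed):

# hello_tf.py - minimal TensorFlow sanity check
import tensorflow as tf

print(tf.__version__)                      # e.g. 1.0.1

hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:                 # creating the Session also logs the GPU devices found
    result = sess.run(hello)

print(result.decode())                     # decode() strips the b'...' bytes prefix seen above

Running python3 hello_tf.py should print the TensorFlow version followed by "Hello, TensorFlow!".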

Once we're done working with TensorFlow, we should deactivate its virtualenv:
$ deactivate

To remove the virtualenv completely, simply delete its directory:
$ rm -rf tensorflow

References:

TensorFlow Manual: Installing TensorFlow on Ubuntu

Friday, 3 February 2017

How to install JetPack on Jetson TX1

The limiting factors for real-time machine learning on embedded devices are both hardware and software: hardware with high processing power usually consumes a lot of energy, and machine learning software support for embedded platforms is scarce. NVIDIA provides a solution that addresses both constraints:
  • Jetson boards offer a low-power, portable hardware platform for real-time training and inference, backed by a multi-core GPU
  • JetPack is a set of tools (including ones for parallel processing and machine learning) that can be deployed onto a Jetson board and which together make up the NVIDIA Jetson Embedded Platform

Jetson


There are two Jetson Developer Kit platforms, based on the Tegra SoC (System on Chip): TK1 (based on Tegra K1) and TX1 (based on Tegra X1).

Jetson TX1 Developer Kit:

  • Tegra X1 processor (under the heat sink)
  • carrier board with various interfaces

Tegra X1:

  • NVIDIA Maxwell™ GPU with 256 NVIDIA® CUDA® Cores
    • (K1 uses NVIDIA Kepler GPU with 192 CUDA Cores)
  • Quad-core ARM® Cortex®-A57 MPCore Processor
  • 4 GB LPDDR4 Memory
  • 16 GB eMMC 5.1 Flash Storage
  • 802.11ac WiFi
  • Bluetooth
  • runs Ubuntu Linux, which includes the proprietary Linux for Tegra ("L4T") device drivers

Carrier board:

  • 6x CSI outputs for a half-dozen Raspberry Pi-style cameras
  • 2x DSI outputs 
  • 1 eDP 1.4 
  • 1 eDP 1.2
  • HDMI 2.0
  • Storage: either SD cards or SATA
  • 3x USB 3.0
  • 3x USB 2.0
  • Gigabit Ethernet
  • PCIe x1
  • PCIe x4
  • GPIOs, UARTs, SPI and I2C busses

Why Jetson?

  • low power consumption (less than 10 W), making it suitable for battery-powered, portable devices (drones, robots etc.)
  • high processing power for embedded vision, AI and GPU computing

Jetson TX1 Unboxing







JetPack


In my recent posts I wrote about installing some of NVIDIA's frameworks - CUDA (for parallel computing on the GPU) and cuDNN (a GPU-accelerated library for deep neural networks) - on a desktop machine running Ubuntu (16.04, with a CUDA-compatible graphics card). NVIDIA packaged these and some other tools together and made them available for Jetson. That package is called JetPack and can be downloaded for free from NVIDIA's website.

JetPack Installer can:
  • flash Jetson board with Ubuntu OS
    • 32-bit Ubuntu 14.04 on Jetson TK1
    • 64-bit Ubuntu 16.04 on Jetson TX1
  • install all the Jetson Embedded Platform tools (on the Jetson)
  • install various tools on the host only; it is not necessary to have a Jetson Developer Kit (TK1 or TX1 board) in order to run these tools on a PC

Installation Process

Prerequisites

We have to have:
  • Jetson TX1 (target)
    • HDMI cable - to connect Jetson board to monitor
    • Micro-USB to USB cable - to connect Jetson with PC
    • USB keyboard
    • USB mouse
  • PC or Linux VM (host). The host is required for running the JetPack installer, as it can't be run directly on the Jetson. The host also needs to be running Ubuntu 14.04.
NOTE: I tried to use a PC with Ubuntu 16.04 as the host, but installation of some packages failed, e.g. the OpenCV package compiled for 14.04 (libopencv4tegra-repo_2.4.13-17-g5317135_amd64_ubuntu-14.04.deb) and some CUDA dependencies. Don't waste time experimenting; just follow the instructions and stick to Ubuntu 14.04 on the host. I used Ubuntu 14.04 running as a VM on VirtualBox on my Windows 10 machine.

Downloading JetPack

  • Fill in the online form in order to get membership in the NVIDIA Embedded Developer Program
  • Download JetPack Installer. At the time of writing the latest version is v2.3.1 and the installer file name is JetPack-L4T-2.3.1-linux-x64.run.
  • Create a new directory to store the installation packages and place the .run installer in that directory

Hardware Setup

Host (PC/VM)

  • Connected to the router (WiFi or LAN cable)

Jetson

  • Connected to the router (WiFi or LAN cable)
  • USB Keyboard and Mouse attached 
  • Connected to the monitor via HDMI cable

Jetson comes pre-flashed with Ubuntu Linux. When we turn on the Jetson device for the first time, a Terminal window appears with instructions on how to install the NVIDIA Linux driver:

$ cd ${HOME}/NVIDIA-INSTALLER
$ sudo ./installer.sh



This is a prerequisite for the UI to appear after a reboot, which can be triggered with:

$ sudo reboot

Running the Installer

Add execute permission to the .run installer:

/opt/JetPack$ chmod +x JetPack-L4T-2.3.1-linux-x64.run

Run the installer:

/opt/JetPack$ ./JetPack-L4T-2.3.1-linux-x64.run


The Installation Wizard appears:



The wizard shows the installation directory:



In the next step we have to select the Jetson platform we'll be installing JetPack on. In our case, that's the Jetson TX1:



At one point the installation prompts for privilege elevation:



The wizard now displays the Component Manager with a list of tools that can be installed on the host and the target:



This is the second part of the list, showing all the packages that can be installed on the target:


Here is a short description of some available packages:

NVIDIA PerfKit is a comprehensive suite of performance tools to help debug and profile OpenGL and Direct3D applications.

NVIDIA VisionWorks toolkit is a software development package for computer vision (CV) and image processing.

NVIDIA TensorRT is a high performance neural network inference engine for production deployment of deep learning applications.

If we don't have a Jetson Developer Kit (Jetson board) but want to install JetPack on the host only, we can simply change the Action type from install to no action for the item group "For Jetson TX1 64bit".

In the next step we have to accept Ts&Cs:



Before the installation starts, a pop-up appears asking us to be aware of what's going on, to monitor the process and to act if required:



The installer first downloads the packages selected for installation:



We are halfway through - the host packages have been installed!



Because we included flashing the OS in the process, in the next step we have to select our setup for the Jetson's Internet access. Two options are available:

  • Jetson connects to the same router as the host (making them both members of the same subnet)
  • The host acts as a gateway through which the Jetson gets access to the outside network and the Internet
    • In this case the installer automatically sets up the routing table and a DHCP server

I opted for the more straightforward option and connected both the host and the target to the same router: the PC via WiFi and the Jetson via a LAN cable:



If the host has multiple network interfaces (adapters), we have to select the one connected to the same network (router) as the Jetson. We can run the ifconfig command in the Terminal to find the name of the interface. The LAN IP addresses of the host's and the target's network interfaces have to show that they belong to the same network (both have to have IP addresses which start with e.g. 192.168.0).
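
As a small illustration, here is a Python snippet (standard library only; the addresses below are made-up examples, not taken from my setup) that checks whether two addresses sit on the same /24 network:

# subnet_check.py - illustrative only; replace the addresses with the ones ifconfig reports
import ipaddress

host_ip   = ipaddress.ip_address('192.168.0.10')    # example host address
jetson_ip = ipaddress.ip_address('192.168.0.20')    # example Jetson address
subnet    = ipaddress.ip_network('192.168.0.0/24')  # typical home-router LAN

print(host_ip in subnet and jetson_ip in subnet)    # True if both are on the same /24

If this prints True, the host and the Jetson should be able to reach each other on the local network.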



The wizard lists its next actions, so we can still decide to go back and change them if necessary:



Still on the host machine, a Terminal window appears listing instructions on how to connect the Jetson to the host and how to put the Jetson into Force USB Recovery Mode, which is required for flashing the OS:



If running the host on a virtual machine, we'll have to enable its access to the USB device. In VirtualBox, after connecting the PC and the Jetson with the USB cable, I ran the lsusb command, but a USB device named Nvidia Corp was not in the list; I could, however, see it in VirtualBox's list of USB Devices:



All I had to do was click on USB Settings and add the desired USB device to the filter:





The lsusb command now shows the NVIDIA Corp USB device:



The output which appears on the host while flashing the OS onto the target:



...more output...:



The output which appears on the host when flashing the OS onto the target is completed:



Upon restarting the Jetson, the host has to establish an SSH tunnel to it, through which it will push the software tools selected in the Component Manager. The host first has to get the target's IP address:



We have to manually enter the Jetson's IP address and the OS username and password (the default credentials are ubuntu/ubuntu):



We can determine the IP address of each device with the ifconfig command and verify that they both belong to the same subnet. If using VirtualBox, make sure that in the Network settings the virtual machine's adapter is attached to Bridged Adapter instead of NAT (the default option).



Once the transport channel is established, we can confirm the list of next actions:



...and trigger pushing the software from the host to the target by clicking on the Next button:



Once all the selected software packages are installed on the Jetson, the wizard gives us an option to remove all the package installers from the host:


This completes JetPack installation. Yay!




APPENDIX A: Connecting Jetson to the Internet via Host 


If we choose the following networking setup:



...the installer displays the actions it will perform in order to make this connection possible:



I haven't tried this setup myself though.


APPENDIX B: OpenCV for Tegra on Ubuntu 14.04 Installation fails (Ubuntu 16.04 --> Jetson)

During JetPack installation (Ubuntu 16.04 --> Jetson), installation of the OpenCV for Tegra package fails, as JetPack tries to install a package compiled for Ubuntu 14.04 on Ubuntu 16.04:



I tried to install it manually but without success:





I completed the installation by omitting OpenCV for Tegra installation:


This made my JetPack installation incomplete. This is one of the reasons why I decided to stick with the recommendation and use Ubuntu 14.04 on the host.


APPENDIX C: Installing CUDA fails (Ubuntu 16.04 --> Jetson)











APPENDIX D: VisionWorks Package Installation 

During JetPack installation (Ubuntu 14.04 VM --> Jetson), installation of the VisionWorks package on the target initially failed:



I installed the suggested packages manually by executing the following commands in the Jetson's Terminal:

sudo apt-get install libvisionworks
sudo apt-get install libvisionworks-dev
sudo apt-get install libvisionworks-docs
sudo apt-get install libvisionworks-samples
sudo apt-get install libvisionworks-nvxio
sudo apt-get install libvisionworks-nvxio-dev

Once this was done, I re-ran the JetPack installer (sudo ./JetPack….run) and quickly got to the list of modules and actions, where I selected only VisionWorks and its dependencies to be installed.

When I initially tried to install only VisionWorks, I got a message that additional packages had to be installed first:



...so I selected both VisionWorks and its dependencies:


This completed successfully, and at that point I had all the desired packages installed.



What was your experience with installing JetPack on Jetson TX1? Did you come across similar problems and how did you solve them?



References:

TK1 Specification
TX1 Specification
JetPack Installation Notes