Thursday, 15 August 2019

Introduction to TeamCity


Build Configuration

All paths in build configuration settings should be relative paths (relative to the build checkout directory). The checkout directory is the directory where all the sources (configured as VCS roots) are placed by TeamCity.


General Settings

Name (text field)

This is a custom name of this build configuration.

Build configuration ID (text field)

The default value is of the form: ProjectName_SubProjectName_BuildConfigurationName

Description (text field)

This is a custom description of this build configuration.

Build configuration type (combo box)

This can have one of these 3 values:
  • Regular
  • Composite (aggregating results)
  • Deployment

Build number format (text field)

Example:
%build.counter%

Build counter (text field)

Publish artifacts (combo box)

3 options are available:

  • Even if build fails
  • Only if build status is successful
  • Always, even if build stop command was issued

Artifact paths (text field)


The build artifacts are files that you want to publish as the results of your build. If the build generates its output in a folder named "output", you can simply set "output" as your artifacts path.

Let's assume that some build step creates the directory output/content and this is where artifacts are stored. If the value of this field is set to:

output/content => content

...then upon a successful build, an Artifacts icon is enabled for this build in the build list view. Clicking it shows a folder named content which we can expand to see its content.

Example:

Build step >> Working Directory: ./output/
Build step >> Custom script: mkdir -p ./dirA/ && echo "content1" > ./dirA/file1
Artifact paths: ./output/ => output_content

Once the build completes, artifacts are available at a URL like https://tc.example.com/viewLog.html?buildId=1123342&buildTypeId=MyApp_Test&tab=artifacts


Build Step: Command Line


Working directory


Allows starting a build in a subdirectory of the checkout directory (use a relative path).
When not specified, the build is started in the checkout directory.
All relative paths in TeamCity are relative to checkout directory.
If the specified directory doesn't exist, TeamCity will create it (there is no need to run mkdir).

Agent Requirements


How to allow only those agents whose name starts with some specific string?


Add a new requirement with the following settings:

Parameter Name: system.agent.name
Condition: starts with
Value: <string>


How to allow only those agents which are running Linux?


Parameter Name: docker.server.osType
Condition: equals
Value: linux


Dependencies


Snapshot Dependencies


Snapshot dependencies are used to create build chains. When part of a build chain, a build of this configuration starts only after all of its dependencies have been built. If necessary, the dependencies are triggered automatically. Build configurations linked by a snapshot dependency can optionally use revision synchronization to ensure they all use the same snapshot of the sources.

Artifact Dependencies


An artifact dependency allows a build to use artifacts produced by another build.

The Add new artifact dependency button opens a dialog where we can choose:

Depend on (string) - the name of the TeamCity build configuration that will be the artifacts source.

Get artifacts from:
  • Latest successful build
  • Latest pinned build
  • Latest finished build
  • Latest finished build with specified tag
  • Build from the same chain
  • Build with specified build number
Artifacts rules (string) - a pattern which defines which directory from the artifacts gets copied to which directory in the local build. Provide here a newline-delimited set of rules in the form of
[+:|-:]SourcePath[!ArchivePath][=>DestinationPath]. Example:

output => data/input/

output is a directory from the artifacts published by the previous build and data/input/ is a local path in the current build.

Dependencies can be temporarily disabled, which is useful when testing build configurations.

TBC...


Writing Build Steps

Writing bash scripts

---
To print some predefined property use:

echo teamcity.agent.home.dir = %teamcity.agent.home.dir%

In the output log we'll have e.g.:

teamcity.agent.home.dir = /home/agent-docker-01

---
If we have a build step which uses Command Line runner to run Custom script we can use variables in that script as:

MY_VAR=test
echo MY_VAR value is: ${MY_VAR}
---

If we want to echo text which contains parentheses, we need to escape them:

echo Parentheses test: \(This text is between parentheses\)
---

If we want to find the id and name of the current user and group, we can use the following in the bash script:

echo user:group \(id\) = $(id -u):$(id -g)
echo user:group \(name\) = $(id -un):$(id -gn)

Log output is like:

user:group (id) = 1008:1008
user:group (name) = docker-slave-73:docker-slave-73
---
How to ping some remote server:

echo Pinging example.com...
ping -v -c 4 example.com

---
How to get the current agent's public IP address?

dig TXT +short o-o.myaddr.l.google.com @ns1.google.com
---

Running the build


If there are no compatible agents and you try to run the build, the following message appears:

Warning: No enabled compatible agents for this build configuration. Please register a build agent or tweak build configuration requirements.

TC automatically detects whether any agents are compatible with the build steps (the Build Steps item in the left-hand menu). They are shown in:

Agent Requirements >> Agents Compatibility 
[In this section you can see which agents are compatible with the requirements and which are not.]


https://stackoverflow.com/questions/4737114/build-project-how-to-checkout-different-repositories-in-different-folders

https://confluence.jetbrains.com/display/TCD5/VCS+Checkout+Rules

https://www.jetbrains.com/help/teamcity/2019.1/integrating-teamcity-with-docker.html#IntegratingTeamCitywithDocker-DockerSupportBuildFeature

Integration with Artifactory


TeamCity Artifactory Plug-in

Once the plugin is installed and integrated, an Artifactory Integration section appears in the build step settings. It contains the following settings:

  • Artifactory server URL (string)
  • Override default deployer credentials (boolean)
  • Publish build info (boolean)
  • Run license checks (boolean)
  • Download and upload by:
    • Specs
    • Legacy patterns (deprecated)
  • Download spec source:
    • Job configuration
    • File
  • Spec (string)
  • Upload spec source:
    • Job configuration
    • File
  • Spec (string)


Publish build info


If this is checked, the Artifactory plugin creates and publishes to the Artifactory server a JSON file which contains the build info with a plain list of all artifacts. The path to this file in Artifactory is:

Artifact Repository Browser >> artifactory-build-info/<build_configuration_id>/xx-xxxxxxxxxxxxx.json

The build configuration id is usually of the form ProjectName_BuildConfigurationName.

The content of that JSON file looks like this:

{
  "version" : "1.0.1",
  "name" : "My_Proj_Test",
  "number" : "25",
  "type" : "GENERIC",
  "buildAgent" : {
    "name" : "simpleRunner"
  },
  "agent" : {
    "name" : "TeamCity",
    "version" : "2019.1.2 (build 66342)"
  },
  "started" : "2019-08-19T10:47:12.557+0200",
  "durationMillis" : 112,
  "principal" : "Bojan Komazec",
  "artifactoryPrincipal" : "deployer",
  "artifactoryPluginVersion" : "2.8.0",
  "url" : "https://teamcity.iexample.com/viewLog.html?buildId=16051507&buildTypeId=My_Proj_Test",
  "vcs" : [ ],
  "licenseControl" : {
    "runChecks" : false,
    "includePublishedArtifacts" : false,
    "autoDiscover" : true,
    "licenseViolationsRecipientsList" : "",
    "scopesList" : ""
  },
  "modules" : [ {
    "id" : "My_Proj_Test :: 25",
    "artifacts" : [ {
      "type" : "",
      "sha1" : "b29930daa02406077d96a7b7a08ce282b3de6961",
      "sha256" : "47d741b6059c6d7e99be23ce46fb9ba099cfd6515de1ef7681f93479d25996a4",
      "md5" : "9b2bb321f2dd1a87857eb875ce22f7e1",
      "name" : "file1"
    }, {
      "type" : "",
      "sha1" : "b29930dda02406077d96a7b7a08ce282b3de6961",
      "sha256" : "47d741b6059c6d7e99be25ce46fb9ba099cfd6515de1ef7681f93479d25996a4",
      "md5" : "9b2bb321f5dd1a87857eb875ce22f7e1",
      "name" : "file2"
    } ]
  } ],
  "buildDependencies" : [ ],
  "governance" : {
    "blackDuckProperties" : {
      "runChecks" : false,
      "includePublishedArtifacts" : false,
      "autoCreateMissingComponentRequests" : false,
      "autoDiscardStaleComponentRequests" : false
    }
  }
}



Upload spec source with Job configuration



If we want to upload some output directory to Artifactory, it is enough to set the Artifactory server URL, choose Job configuration as the Upload spec source, and set the Spec to e.g.:

{
   "files": [
      {
         "pattern":"./output",
         "target": "path/to/output/",
         "flat": false
      }   
   ] 
}

output is a directory on the TeamCity agent and path/to/output/ is the path to the target directory in Artifactory. In this example the content will end up in Artifactory at the path artifactory.example.com/path/to/output/output/*.

To avoid this, we can set the working directory to ./output/ and set the pattern to "./". In that case the content would be at the path artifactory.example.com/path/to/output/*.


It is possible to use TeamCity variables in Custom published artifacts value:

data-vol/artefacts/=>MyArtifactoryFolder/artefacts-%build.number%.zip


https://teamcity-support.jetbrains.com/hc/en-us/community/posts/206163909-Select-branch-combination-from-different-VCS-Roots

https://tc.example.com/viewLog.html?buildId=15995461&buildTypeId=Browser_AdminNew_RunWpTools&tab=artifacts

Wednesday, 14 August 2019

How To Install Postman on Ubuntu

Download the installer archive file from Postman's Download page.

Unpack the archive:

$ sudo tar -xzvf Postman-linux-x64-7.5.0.tar.gz -C /opt

Verify the content of the unpacked directory:

$ ls -la /opt/Postman/
total 12
drwxr-xr-x 3  999 docker 4096 Aug 12 13:14 .
drwxr-xr-x 8 root root   4096 Aug 14 11:08 ..
drwxr-xr-x 4  999 docker 4096 Aug 12 13:14 app
lrwxrwxrwx 1  999 docker   13 Aug 12 13:14 Postman -> ./app/Postman

Remove the archive as it's not needed anymore:

$ rm Postman-linux-x64-7.5.0.tar.gz

Create Postman.desktop file:

$ touch ~/.local/share/applications/Postman.desktop

Open it:

$ gedit ~/.local/share/applications/Postman.desktop

Edit it:

[Desktop Entry]
Encoding=UTF-8
Name=Postman
Exec=/opt/Postman/app/Postman %U
Icon=/opt/Postman/app/resources/app/assets/icon.png
Terminal=false
Type=Application
Categories=Development;

Save it and close the editor.

Postman now appears in the list of Ubuntu applications.

References:

Postman - Linux installation

Saturday, 3 August 2019

Testing Go with Ginkgo


>ginkgo help
Ginkgo Version 1.8.0

ginkgo --
--------------------------------------------
Run the tests in the passed in <PACKAGES> (or the package in the current directory if left blank).
Any arguments after -- will be passed to the test.
Accepts the following flags:
-a Force rebuilding of packages that are already up-to-date.
-afterSuiteHook string
Run a command when a suite test run completes
-asmflags string
Arguments to pass on each go tool asm invocation.
-blockprofilerate int
Control the detail provided in goroutine blocking profiles by calling runtime.SetBlockProfileRate with the given value. (default 1)
-buildmode string
Build mode to use. See 'go help buildmode' for more.
-compiler string
Name of compiler to use, as in runtime.Compiler (gccgo or gc).
-compilers int
The number of concurrent compilations to run (0 will autodetect)
-cover
Run tests with coverage analysis, will generate coverage profiles with the package name in the current directory.
-covermode string
Set the mode for coverage analysis.
-coverpkg string
Run tests with coverage on the given external modules.
-coverprofile string
Write a coverage profile to the specified file after all tests have passed.
-cpuprofile string
Write a CPU profile to the specified file before exiting.
-debug
If set, ginkgo will emit node output to files when running in parallel.
-dryRun
If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v.
-failFast
If set, ginkgo will stop running a test suite after a failure occurs.
-failOnPending
If set, ginkgo will mark the test suite as failed if any specs are pending.
-flakeAttempts int
Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. (default 1)
-focus string
If set, ginkgo will only run specs that match this regular expression.
-gccgoflags string
Arguments to pass on each gccgo compiler/linker invocation.
-gcflags string
Arguments to pass on each go tool compile invocation.
-installsuffix string
A suffix to use in the name of the package installation directory.
-keepGoing
When true, failures from earlier test suites do not prevent later test suites from running
-ldflags string
Arguments to pass on each go tool link invocation.
-linkshared
Link against shared libraries previously created with -buildmode=shared.
-memprofile string
Write a memory profile to the specified file after all tests have passed.
-memprofilerate int
Enable more precise (and expensive) memory profiles by setting runtime.MemProfileRate.
-mod string
Go module control. See 'go help modules' for more.
-msan
Enable interoperation with memory sanitizer.
-n go test
Have go test print the commands but do not run them.
-noColor
If set, suppress color output in default reporter.
-nodes int
The number of parallel test nodes to run (default 1)
-noisyPendings
If set, default reporter will shout about pending tests. (default true)
-noisySkippings
If set, default reporter will shout about skipping tests. (default true)
-outputdir string
Place output files from profiling in the specified directory.
-p Run in parallel with auto-detected number of nodes
-pkgdir string
install and load all packages from the given dir instead of the usual locations.
-progress
If set, ginkgo will emit progress information as each spec runs to the GinkgoWriter.
-r Find and run test suites under the current directory recursively.
-race
Run tests with race detection enabled.
-randomizeAllSpecs
If set, ginkgo will randomize all specs together. By default, ginkgo only randomizes the top level Describe, Context and When groups.
-randomizeSuites
When true, Ginkgo will randomize the order in which test suites run
-regexScansFilePath
If set, ginkgo regex matching also will look at the file path (code location).
-requireSuite
Fail if there are ginkgo tests in a directory but no test suite (missing RunSpecs)
-seed int
The seed used to randomize the spec suite. (default 1552989981)
-skip string
If set, ginkgo will only run specs that do not match this regular expression.
-skipMeasurements
If set, ginkgo will skip any measurement specs.
-skipPackage string
A comma-separated list of package names to be skipped. If any part of the package's path matches, that package is ignored.
-slowSpecThreshold float
(in seconds) Specs that take longer to run than this threshold are flagged as slow by the default reporter. (default 5)
-stream
stream parallel test output in real time: less coherent, but useful for debugging (default true)
-succinct
If set, default reporter prints out a very succinct report
-tags string
A list of build tags to consider satisfied during the build.
-timeout duration
Suite fails if it does not complete within the specified timeout (default 24h0m0s)
-toolexec string
a program to use to invoke toolchain programs like vet and asm.
-trace
If set, default reporter prints out the full stack trace when a failure occurs
-untilItFails
When true, Ginkgo will keep rerunning tests until a failure occurs
-v If set, default reporter print out all specs as they begin.
-vet string
Configure the invocation of 'go vet' to use the comma-separated list of vet checks. If list is 'off', 'go test' does not run 'go vet' at all.
-work
Print the name of the temporary work directory and do not delete it when exiting.
-x go test
Have go test print the commands.

ginkgo watch --
--------------------------------------------------
Watches the tests in the passed in <PACKAGES> and runs them when changes occur.
Any arguments after -- will be passed to the test.
Accepts all the flags that the ginkgo command accepts except for --keepGoing and --untilItFails

ginkgo build
-------------------------------
Build the passed in <PACKAGES> (or the package in the current directory if left blank).
Accepts the following flags:
-a Force rebuilding of packages that are already up-to-date.
-asmflags string
Arguments to pass on each go tool asm invocation.
-blockprofilerate int
Control the detail provided in goroutine blocking profiles by calling runtime.SetBlockProfileRate with the given value. (default 1)
-buildmode string
Build mode to use. See 'go help buildmode' for more.
-compiler string
Name of compiler to use, as in runtime.Compiler (gccgo or gc).
-cover
Run tests with coverage analysis, will generate coverage profiles with the package name in the current directory.
-covermode string
Set the mode for coverage analysis.
-coverpkg string
Run tests with coverage on the given external modules.
-coverprofile string
Write a coverage profile to the specified file after all tests have passed.
-cpuprofile string
Write a CPU profile to the specified file before exiting.
-gccgoflags string
Arguments to pass on each gccgo compiler/linker invocation.
-gcflags string
Arguments to pass on each go tool compile invocation.
-installsuffix string
A suffix to use in the name of the package installation directory.
-ldflags string
Arguments to pass on each go tool link invocation.
-linkshared
Link against shared libraries previously created with -buildmode=shared.
-memprofile string
Write a memory profile to the specified file after all tests have passed.
-memprofilerate int
Enable more precise (and expensive) memory profiles by setting runtime.MemProfileRate.
-mod string
Go module control. See 'go help modules' for more.
-msan
Enable interoperation with memory sanitizer.
-n go test
Have go test print the commands but do not run them.
-outputdir string
Place output files from profiling in the specified directory.
-pkgdir string
install and load all packages from the given dir instead of the usual locations.
-r Find and run test suites under the current directory recursively.
-race
Run tests with race detection enabled.
-requireSuite
Fail if there are ginkgo tests in a directory but no test suite (missing RunSpecs)
-skipPackage string
A comma-separated list of package names to be skipped. If any part of the package's path matches, that package is ignored.
-tags string
A list of build tags to consider satisfied during the build.
-toolexec string
a program to use to invoke toolchain programs like vet and asm.
-vet string
Configure the invocation of 'go vet' to use the comma-separated list of vet checks. If list is 'off', 'go test' does not run 'go vet' at all.
-work
Print the name of the temporary work directory and do not delete it when exiting.
-x go test
Have go test print the commands.

ginkgo bootstrap
------------------------
Bootstrap a test suite for the current package
Accepts the following flags:
-agouti
If set, bootstrap will generate a bootstrap file for writing Agouti tests
-internal
If set, generate will generate a test file that uses the regular package name
-nodot
If set, bootstrap will generate a bootstrap file that does not . import ginkgo and gomega
-template string
If specified, generate will use the contents of the file passed as the bootstrap template

ginkgo generate
-----------------------------
Generate a test file named <SUBJECT>_test.go
If the optional <SUBJECT> argument is omitted, a file named after the package in the current directory will be created.
Accepts the following flags:
-agouti
If set, generate will generate a test file for writing Agouti tests
-internal
If set, generate will generate a test file that uses the regular package name
-nodot
If set, generate will generate a test file that does not . import ginkgo and gomega

ginkgo nodot
------------
Update the nodot declarations in your test suite
Any missing declarations (from, say, a recently added matcher) will be added to your bootstrap file.
If you've renamed a declaration, that name will be honored and not overwritten.

ginkgo convert /path/to/package
-------------------------------
Convert the package at the passed in path from an XUnit-style test to a Ginkgo-style test

ginkgo unfocus (or ginkgo blur)
-------------------------------
Recursively unfocuses any focused tests under the current directory

ginkgo version
--------------
Print Ginkgo's version

ginkgo help
---------------------
Print usage information. If a command is passed in, print usage information just for that command.


---

To create a test file for some package (example):

../github.com/BojanKomazec/go-demo/internal/pkg/stringdemo$ ginkgo bootstrap
Generating ginkgo test suite bootstrap for stringdemo in:
        stringdemo_suite_test.go

Generated file stringdemo_suite_test.go looks like this:

package stringdemo_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestStringdemo(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Stringdemo Suite")
}

This file's name has to be of the form *_test.go.

The package name can be adjusted to match the name of the package under test (e.g. package stringdemo instead of stringdemo_test). This can be done during bootstrap by passing the -internal flag.

---

To run all tests across all packages in the project and also print the coverage % use:

ginkgo -r -v -cover

To achieve the same with native go test do the following:

go test ./... -v -cover


Panic in a goroutine crashes test suite

If we run dlv test on tests written with the Ginkgo framework, the following output appears:

API server listening at: 127.0.0.1:9379

Usage of C:\...\git.bk.com\example+project\internal\example_package\debug.test:
  -ginkgo.debug
    If set, ginkgo will emit node output to files when running in parallel.
  -ginkgo.dryRun
    If set, ginkgo will walk the test hierarchy without actually running anything.  Best paired with -v.
  -ginkgo.failFast
    If set, ginkgo will stop running a test suite after a failure occurs.
  -ginkgo.failOnPending
    If set, ginkgo will mark the test suite as failed if any specs are pending.
  -ginkgo.flakeAttempts int
    Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. (default 1)
  -ginkgo.focus string
    If set, ginkgo will only run specs that match this regular expression.
  -ginkgo.noColor
    If set, suppress color output in default reporter.
  -ginkgo.noisyPendings
    If set, default reporter will shout about pending tests. (default true)
  -ginkgo.noisySkippings
    If set, default reporter will shout about skipping tests. (default true)
  -ginkgo.parallel.node int
    This worker node's (one-indexed) node number.  For running specs in parallel. (default 1)
  -ginkgo.parallel.streamhost string
    The address for the server that the running nodes should stream data to.
  -ginkgo.parallel.synchost string
    The address for the server that will synchronize the running nodes.
  -ginkgo.parallel.total int
    The total number of worker nodes.  For running specs in parallel. (default 1)
  -ginkgo.progress
    If set, ginkgo will emit progress information as each spec runs to the GinkgoWriter.
  -ginkgo.randomizeAllSpecs
    If set, ginkgo will randomize all specs together.  By default, ginkgo only randomizes the top level Describe, Context and When groups.
  -ginkgo.regexScansFilePath
    If set, ginkgo regex matching also will look at the file path (code location).
  -ginkgo.seed int
    The seed used to randomize the spec suite. (default 1553106348)
  -ginkgo.skip string
    If set, ginkgo will only run specs that do not match this regular expression.
  -ginkgo.skipMeasurements
    If set, ginkgo will skip any measurement specs.
  -ginkgo.slowSpecThreshold float
    (in seconds) Specs that take longer to run than this threshold are flagged as slow by the default reporter. (default 5)
  -ginkgo.succinct
    If set, default reporter prints out a very succinct report
  -ginkgo.trace
    If set, default reporter prints out the full stack trace when a failure occurs
  -ginkgo.v
    If set, default reporter print out all specs as they begin.
  -test.bench regexp
    run only benchmarks matching regexp
  -test.benchmem
    print memory allocations for benchmarks
  -test.benchtime d
    run each benchmark for duration d (default 1s)
  -test.blockprofile file
    write a goroutine blocking profile to file
  -test.blockprofilerate rate
    set blocking profile rate (see runtime.SetBlockProfileRate) (default 1)
  -test.count n
    run tests and benchmarks n times (default 1)
  -test.coverprofile file
    write a coverage profile to file
  -test.cpu list
    comma-separated list of cpu counts to run each test with
  -test.cpuprofile file
    write a cpu profile to file
  -test.failfast
    do not start new tests after the first test failure
  -test.list regexp
    list tests, examples, and benchmarks matching regexp then exit
  -test.memprofile file
    write an allocation profile to file
  -test.memprofilerate rate
    set memory allocation profiling rate (see runtime.MemProfileRate)
  -test.mutexprofile string
    write a mutex contention profile to the named file after execution
  -test.mutexprofilefraction int
    if >= 0, calls runtime.SetMutexProfileFraction() (default 1)
  -test.outputdir dir
    write profiles to dir
  -test.parallel n
    run at most n tests in parallel (default 4)
  -test.run regexp
    run only tests and examples matching regexp
  -test.short
    run smaller test suite to save time
  -test.testlogfile file
    write test action log to file (for use only by cmd/go)
  -test.timeout d
    panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    write an execution trace to file
  -test.v
    verbose: print additional output


TBC...

Tuesday, 30 July 2019

Apache Ant Patterns


Apache Ant is a build tool for Java projects. It uses so-called "Ant-style" wildcards, which have since been adopted by many other tools.

Ant-style wildcards:


?


  • Matches one character (any character except path separators)
  • used to match file names
  • matches one level
  • any character except path separators

*

  • Matches zero or more characters (not including path separators)
  • used to match file names
  • matches one level
  • any character except path separators


**


  • Matches zero or more path segments (directory tree) 
  • used for folder-names matching
  • includes/matches path separators (slash, /) 
  • matches multiple levels
  • src/**/*.cs will find all cs files in any sub-directory of src



If we have the following tree:

/dir1/dir2/file1.txt
/dir1/dir2/dir3/file2.txt

The Ant pattern which matches all .txt files in any subdirectory of a dir2 directory would be:

**/dir2/**/*.txt

When ** is used as the name of a directory in the pattern, it matches zero or more directories.
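Such patterns can be emulated outside Ant as well. Below is a rough Go sketch (antToRegexp is my own helper, not part of Ant) that translates a simplified Ant-style pattern into a regular expression and checks it against the tree above:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// antToRegexp converts a simplified Ant-style pattern to a regular
// expression: "**/" matches zero or more whole directories, "*" matches
// within one path segment, "?" matches a single non-separator character.
func antToRegexp(pattern string) *regexp.Regexp {
	var b strings.Builder
	b.WriteString("^")
	for i := 0; i < len(pattern); i++ {
		switch {
		case strings.HasPrefix(pattern[i:], "**/"):
			b.WriteString("(.*/)?") // zero or more whole directories
			i += 2
		case pattern[i] == '*':
			b.WriteString("[^/]*")
		case pattern[i] == '?':
			b.WriteString("[^/]")
		default:
			b.WriteString(regexp.QuoteMeta(string(pattern[i])))
		}
	}
	b.WriteString("$")
	return regexp.MustCompile(b.String())
}

func main() {
	re := antToRegexp("**/dir2/**/*.txt")
	for _, p := range []string{"dir1/dir2/file1.txt", "dir1/dir2/dir3/file2.txt"} {
		fmt.Println(p, "->", re.MatchString(p))
	}
}
```

Both example paths match, since the second "**/" also matches zero directories.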

References:


Directory-based Tasks
How do I use Nant/Ant naming patterns?
Pattern matching guide
Learning Ant path style

Monday, 29 July 2019

How to play .mp4 videos on Ubuntu

Install the following packages (and accept EULAs):

$ sudo apt-get update
$ sudo apt install libdvdnav4 libdvdread4 gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly libdvd-pkg
$ sudo dpkg-reconfigure libdvd-pkg
$ sudo apt install ubuntu-restricted-extras

(Tested on Ubuntu 18.04)

Wednesday, 17 July 2019

Introduction to Makefile

make commands

ifeq
else
endif
...

They MUST NOT be indented with TAB characters, as (almost) all lines in a makefile that start with a TAB character are passed to the shell (/bin/sh). The shell doesn't know anything about make commands. make commands can be indented with SPACE characters, but this might be misleading, as recipes are what MUST be indented with TABs.


Recipes


They must be indented with a TAB character in order to be passed to the shell.

Targets


Targets are labels that allow make to execute a group of commands together.

Makefile:

target1:
	@echo target1 is executing
target2:
	@echo target2 is executing

We can now run make as:

$ make target1

or

$ make target2


Conditional Execution


Use ifeq-endif or ifeq-else-endif blocks.

Makefile:

VAR1=test
VAR2=nottest

demo-if-else-endif:
ifeq ($(VAR1),$(VAR2))
	...
else
	...
endif

Makefile ifeq: when are they evaluated?

Variable comparison


TEST=ON

# recipe lines must live inside a target
check:
ifeq ($(TEST),ON)
	@echo PASSED
else
	@echo FAILED
endif


To check if a variable is empty:

ifeq ($(TEST),)
TEST := $(something else)
endif

Makefile set if variable is empty


Wednesday, 26 June 2019

Software Design

Main Principles


Code should be correct, clear and efficient.
Prefer simple. Avoid clever.
(taken from https://yourbasic.org/)

Make the MVP work (correctly) first. Release it and analyze the feedback. Revenue, not the beauty of the code, should drive development... but the code will be well-designed if TDD is followed. Refactor in an evolutionary, not revolutionary, fashion.

Software design
SOLID Principles

12 Factor Applications 
Twelve-Factor App methodology

TDD - Test-Driven Development

I feel comfortable when replacing one implementation of the function with another only if that function is covered by unit tests.

POD - Performance-Oriented Development

Command-line Arguments



  • Square brackets are commonly used to indicate optional arguments, and can also be used to group parameters that must be specified together.
  • Angle brackets are commonly used to indicate required arguments, following the same grouping conventions as square brackets.
  • Mutually exclusive parameters can be indicated by separating them with vertical bars within groups.

Argument passing strategy - environment variables vs. command line

Logical Expressions

Use Boolean Algebra laws to simplify complex conditions (logical expressions).
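As a sketch of what such a simplification looks like, the following Go snippet exhaustively checks that !(a && b) || (a && b && c) reduces, via the absorption law and De Morgan's law, to !a || !b || c:

```go
package main

import "fmt"

// original is a convoluted condition; simplified is the result of applying
// the absorption law (!X || (X && c) == !X || c, with X = a && b) followed
// by De Morgan's law (!(a && b) == !a || !b).
func original(a, b, c bool) bool   { return !(a && b) || (a && b && c) }
func simplified(a, b, c bool) bool { return !a || !b || c }

// equivalent verifies the two forms agree on all 8 input combinations.
func equivalent() bool {
	for _, a := range []bool{false, true} {
		for _, b := range []bool{false, true} {
			for _, c := range []bool{false, true} {
				if original(a, b, c) != simplified(a, b, c) {
					return false
				}
			}
		}
	}
	return true
}

func main() {
	fmt.Println("equivalent:", equivalent())
}
```

Such an exhaustive truth-table check is a quick way to gain confidence before replacing a condition in real code.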

Global Variables

They should be avoided unless they are static/singletons that represent an object with cross-cutting concern functionality.

Global Variables Are Bad

Functions

Functions should be simple, short and follow the SRP (Single Responsibility Principle). E.g. if a function has to create a file at some path, don't make it also create that path (if the path does not exist). Create another function which is responsible ONLY for creating paths instead.

Don't make library/package functions asynchronous by default - allow users to choose how they want to consume them, synchronously or asynchronously. They can always create an async wrapper around them.

The same goes for functions in Go. We could make them accept a sync.WaitGroup argument so they can be awaited... but we should make a function do only its main job, as fiddling with the wait group pollutes the function's main functionality and thus breaks SRP.

func foo(arg1 T1, arg2 T2, wg *sync.WaitGroup) {
	defer wg.Done() // wg.Add(1) is expected to have been called by the caller
	...
}

In the same way, don't add logging to library/package functions. Return errors/throw exceptions with error messages/codes instead. User of the library should decide what they want to see in the log output.

If a function has multiple parameters and e.g. one parameter is used only in one part of the function, check whether that part of the function is doing a task (or... has responsibility for one "thing") that could be extracted into a separate function.

Indentation & Single Point of Return


There are two schools of thought here: one recommends that each function should have a single point of return, and one allows multiple points of return.

Single point of return:
  • if the function is long, this increases the chances of having multiple levels of nested conditions
  • the returned value (error) is assigned at multiple places and at multiple levels
  • it's difficult to track the positive execution path
Multiple points of return:
  • prevents deep levels of indentation (such functions usually have only two)
  • it is easy to track which expression makes the function return which error
  • we can use indentation to visually separate the positive and error paths: the positive path consists of expressions at the 1st indentation level, while error handling is at the 2nd (indented) level (see Code: Align the happy path to the left edge)
Here are some more Tips for a good line of sight from Mat Ryer:
  • Align the happy path to the left; you should quickly be able to scan down one column to see the expected execution flow
  • Don’t hide happy path logic inside a nest of indented braces
  • Exit early from your function
  • Avoid else returns; consider flipping the if statement
  • Put the happy return statement as the very last line
  • Extract functions and methods to keep bodies small and readable
  • If you need big indented bodies, consider giving them their own function
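A minimal Go sketch of these tips (greet and its error values are hypothetical): every error exits early at a single indentation level, and the happy return is the very last line:

```go
package main

import (
	"errors"
	"fmt"
)

var (
	errEmpty   = errors.New("name is empty")
	errTooLong = errors.New("name is too long")
)

// greet keeps the happy path aligned to the left edge: every error
// exits early, and the happy return statement is the very last line.
func greet(name string) (string, error) {
	if name == "" {
		return "", errEmpty
	}
	if len(name) > 10 {
		return "", errTooLong
	}
	return "Hello, " + name + "!", nil
}

func main() {
	msg, err := greet("Bojan")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(msg) // prints: Hello, Bojan!
}
```

Scanning the leftmost column of greet shows the expected execution flow at a glance.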

How small should a function be?

How small should functions be?
Small Functions considered Harmful
What should be the maximum length of a function?
How can wrapping an expression as a function be Clean Code?


Arguments Validation

Arguments should be validated if their values come from the wild outside world, which happens in the public API. Contract validation frameworks can be used.

In private/internal functions we can use assertions (in C++/C#) or no validation at all, allowing the application to crash.
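A minimal Go sketch of this split (ParsePort is a hypothetical public API function): the exported function validates its argument and returns an error, instead of letting the application fail somewhere deeper:

```go
package main

import "fmt"

// ParsePort sits on the public API boundary: its argument may come from
// the wild outside world, so it is validated and an error is returned
// instead of letting the application crash later.
func ParsePort(port int) (int, error) {
	if port < 1 || port > 65535 {
		return 0, fmt.Errorf("port out of range: %d", port)
	}
	return port, nil
}

func main() {
	if p, err := ParsePort(8080); err == nil {
		fmt.Println("valid port:", p)
	}
	if _, err := ParsePort(70000); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

Internal helpers called after this boundary can then trust the port value and skip re-validation.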

Classes 

Prefer composition over inheritance.
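In Go this is typically done with struct embedding; a minimal sketch (Logger and Server are hypothetical names):

```go
package main

import "fmt"

// Logger provides reusable behaviour.
type Logger struct {
	prefix string
}

func (l Logger) Log(msg string) string {
	return l.prefix + msg
}

// Server reuses Logger through composition (struct embedding), not
// inheritance: Log is promoted to Server, but Server can swap or wrap
// the embedded Logger freely.
type Server struct {
	Logger
	addr string
}

func main() {
	s := Server{Logger: Logger{prefix: "[srv] "}, addr: ":8080"}
	fmt.Println(s.Log("listening on " + s.addr)) // prints: [srv] listening on :8080
}
```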

TBD...

Logging

Don't use logging as a substitute for proper debugging. Logging is a poor man's debugging. Learn how to use the debugging and profiling tools relevant for your development stack and IDE.

Think carefully about what goes into the log. If there is no error, don't make the log message look like this:

Created symlink ./.../myapp--setup.exe --> myapp-setup.exe. Error: <nil>

When you later analyze the log file and search for the word "error", you'll get tons of false positives.
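A minimal Go sketch of the idea (createSymlink is a placeholder for any fallible operation): mention the error only in the failure branch, instead of appending "Error: <nil>" to every line:

```go
package main

import (
	"errors"
	"fmt"
)

// createSymlink is a stand-in for any operation that may fail.
func createSymlink(ok bool) error {
	if !ok {
		return errors.New("permission denied")
	}
	return nil
}

func main() {
	// The word "error" appears in the log line only when there really is
	// an error, so grepping the log for "error" yields no false positives.
	if err := createSymlink(true); err != nil {
		fmt.Printf("error: failed to create symlink: %v\n", err)
	} else {
		fmt.Println("created symlink")
	}
}
```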


Documentation

The older I get, the less I like having documentation about the software I write anywhere else but in the source code itself. This reduces information redundancy, duplication and situations where the documentation is out of sync with the implementation. Having brief but comprehensive comments, plus tools which can extract the desired information from them, should do the job. Having high unit test coverage (and BDD-style tests if you are kind to the non-technical members of the team) should also help, as reading test names should be as informative and as easy as reading a requirements specification.


Some Common Patterns and Anti-Patterns


Producer - Consumer


What is the benefit of writing to a temp location and then copying it to the intended destination?

TBD...