Thursday, 15 August 2019

Introduction to TeamCity


Build Configuration

All paths in build configuration settings should be relative paths (relative to the build checkout directory). The checkout directory is the directory where all the sources (configured as VCS roots) are placed by TeamCity.

General Settings


Artifact paths


The build artifacts are the files you want to publish as the results of your build. If the build generates its output in a folder named "output", you can simply set "output" as your artifact path.

Let's assume that some build step creates the directory output/content and this is the place where artifacts are stored. If the value of this field is set to:

output/content => content

...then upon a successful build, the build list view shows an enabled Artifacts icon for this build; clicking it reveals a folder named content which we can expand to see its content.


Example:

Build step >> Working Directory: ./output/
Build step >> Custom script: mkdir -p ./dirA/ && echo "content1" > ./dirA/file1
Artifact paths: ./output/ => output_content

Upon a successful build, the artifacts are available at a URL like https://tc.example.com/viewLog.html?buildId=1123342&buildTypeId=MyApp_Test&tab=artifacts


Build Step: Command Line


Working directory


Allows starting a build in a subdirectory of the checkout directory (use a relative path).
When not specified, the build is started in the checkout directory.
All relative paths in TeamCity are relative to checkout directory.
If the specified directory doesn't exist, TeamCity will create it (there is no need to use mkdir).

Agent Requirements


How to allow only those agents whose name starts with some specific string?


Add a new requirement with the following settings:

Parameter Name: system.agent.name
Condition: starts with
Value: <string>


How to allow only those agents which are running Linux?


Parameter Name: docker.server.osType
Condition: equals
Value: linux


Dependencies


Snapshot Dependencies


Snapshot dependencies are used to create build chains. When this configuration is part of a build chain, its build will start only when all of its dependencies have been built. If necessary, the dependencies will be triggered automatically. Build configurations linked by a snapshot dependency can optionally use revision synchronization to ensure they all build the same snapshot of the sources.

Artifact Dependencies


An artifact dependency allows a build to use artifacts produced by another build.

The Add new artifact dependency button opens a dialog where we can choose:

Depend on (string) - the name of the TeamCity build configuration that will be the source of the artifacts.
Get artifacts from:
  • Latest successful build
  • Latest pinned build
  • Latest finished build
  • Latest finished build with specified tag
  • Build from the same chain
  • Build with specified build number
Artifact rules (string) - the pattern which defines which directories/files from the published artifacts are copied to which directory in the current build. Provide here a newline-delimited set of rules in the form of
[+:|-:]SourcePath[!ArchivePath][=>DestinationPath]. Example:

output => data/input/

output is a directory from the published artifacts of the previous build and data/input/ is a local path in the current build.

Dependencies can be temporarily disabled, which is useful when testing build configurations.

TBC...


Writing Build Steps

Writing bash scripts

---
To print some predefined property use:

echo teamcity.agent.home.dir = %teamcity.agent.home.dir%

In the output log we'll have e.g.:

teamcity.agent.home.dir = /home/agent-docker-01

---
If we have a build step which uses the Command Line runner to run a custom script, we can use variables in that script like this:

MY_VAR=test
echo MY_VAR value is: ${MY_VAR}
---

If we want to echo text which contains parentheses, we need to escape them:

echo Parentheses test: \(This text is between parentheses\)
---

If we want to find the id and name of the current user and group, we can use the following in the bash script:

echo user:group \(id\) = $(id -u):$(id -g)
echo user:group \(name\) = $(id -un):$(id -gn)

Log output is like:

user:group (id) = 1008:1008
user:group (name) = docker-slave-73:docker-slave-73
---
How to ping some remote server:

echo Pinging example.com...
ping -v -c 4 example.com

---
How to get current agent's public IP address?

Get the agent's public IP address:
dig TXT +short o-o.myaddr.l.google.com @ns1.google.com
---

Running the build


If there are no compatible agents and you try to run the build, the following message appears:

Warning: No enabled compatible agents for this build configuration. Please register a build agent or tweak build configuration requirements.

TeamCity automatically detects whether there are any agents compatible with the build steps (the Build Steps item in the left-hand menu). They are shown in: 

Agent Requirements >> Agents Compatibility 
[In this section you can see which agents are compatible with the requirements and which are not.]


https://stackoverflow.com/questions/4737114/build-project-how-to-checkout-different-repositories-in-different-folders

https://confluence.jetbrains.com/display/TCD5/VCS+Checkout+Rules

https://www.jetbrains.com/help/teamcity/2019.1/integrating-teamcity-with-docker.html#IntegratingTeamCitywithDocker-DockerSupportBuildFeature

Integration with Artifactory


TeamCity Artifactory Plug-in

Once the plugin is installed and integrated, we can see the Artifactory Integration section in the build step settings. It contains the following settings:

  • Artifactory server URL (string)
  • Override default deployer credentials (boolean)
  • Publish build info (boolean)
  • Run license checks (boolean)
  • Download and upload by:
    • Specs
    • Legacy patterns (deprecated)
  • Download spec source:
    • Job configuration
    • File
  • Spec (string)
  • Upload spec source:
    • Job configuration
    • File
  • Spec (string)


If we want to upload some output directory to Artifactory, it is enough to set the Artifactory server URL, choose Job configuration as the Upload spec source and set Spec to e.g.:

{
   "files": [
      {
         "pattern":"./output",
         "target": "path/to/output/",
         "flat": false
      }   
   ] 
}

output is a directory on the TeamCity agent and path/to/output/ is the path to the target directory in Artifactory. In this example the content in Artifactory will end up at artifactory.example.com/path/to/output/output/*.

To avoid this, we can set the working directory to ./output/ and then set the pattern to "./". In that case the content would be at artifactory.example.com/path/to/output/*.


It is possible to use TeamCity variables in Custom published artifacts value:

data-vol/artefacts/=>MyArtifactoryFolder/artefacts-%build.number%.zip


https://teamcity-support.jetbrains.com/hc/en-us/community/posts/206163909-Select-branch-combination-from-different-VCS-Roots

https://tc.example.com/viewLog.html?buildId=15995461&buildTypeId=Browser_AdminNew_RunWpTools&tab=artifacts

Wednesday, 14 August 2019

How To Install Postman on Ubuntu

Download the installer archive file from Postman's Download page.

Unpack the archive:

$ sudo tar -xzvf Postman-linux-x64-7.5.0.tar.gz -C /opt

Verify the content of the unpacked directory:

$ ls -la /opt/Postman/
total 12
drwxr-xr-x 3  999 docker 4096 Aug 12 13:14 .
drwxr-xr-x 8 root root   4096 Aug 14 11:08 ..
drwxr-xr-x 4  999 docker 4096 Aug 12 13:14 app
lrwxrwxrwx 1  999 docker   13 Aug 12 13:14 Postman -> ./app/Postman

Remove the archive as it's not needed anymore:

$ rm Postman-linux-x64-7.5.0.tar.gz

Create Postman.desktop file:

$ touch ~/.local/share/applications/Postman.desktop

Open it:

$ gedit ~/.local/share/applications/Postman.desktop

Edit it:

[Desktop Entry]
Encoding=UTF-8
Name=Postman
Exec=/opt/Postman/app/Postman %U
Icon=/opt/Postman/app/resources/app/assets/icon.png
Terminal=false
Type=Application
Categories=Development;

Save it and close the editor.

Postman now appears in the list of Ubuntu applications.

References:

Postman - Linux installation

Saturday, 3 August 2019

Testing Go with Ginkgo


>ginkgo help
Ginkgo Version 1.8.0

ginkgo --
--------------------------------------------
Run the tests in the passed-in packages (or the package in the current directory if left blank).
Any arguments after -- will be passed to the test.
Accepts the following flags:
-a Force rebuilding of packages that are already up-to-date.
-afterSuiteHook string
Run a command when a suite test run completes
-asmflags string
Arguments to pass on each go tool asm invocation.
-blockprofilerate int
Control the detail provided in goroutine blocking profiles by calling runtime.SetBlockProfileRate with the given value. (default 1)
-buildmode string
Build mode to use. See 'go help buildmode' for more.
-compiler string
Name of compiler to use, as in runtime.Compiler (gccgo or gc).
-compilers int
The number of concurrent compilations to run (0 will autodetect)
-cover
Run tests with coverage analysis, will generate coverage profiles with the package name in the current directory.
-covermode string
Set the mode for coverage analysis.
-coverpkg string
Run tests with coverage on the given external modules.
-coverprofile string
Write a coverage profile to the specified file after all tests have passed.
-cpuprofile string
Write a CPU profile to the specified file before exiting.
-debug
If set, ginkgo will emit node output to files when running in parallel.
-dryRun
If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v.
-failFast
If set, ginkgo will stop running a test suite after a failure occurs.
-failOnPending
If set, ginkgo will mark the test suite as failed if any specs are pending.
-flakeAttempts int
Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. (default 1)
-focus string
If set, ginkgo will only run specs that match this regular expression.
-gccgoflags string
Arguments to pass on each gccgo compiler/linker invocation.
-gcflags string
Arguments to pass on each go tool compile invocation.
-installsuffix string
A suffix to use in the name of the package installation directory.
-keepGoing
When true, failures from earlier test suites do not prevent later test suites from running
-ldflags string
Arguments to pass on each go tool link invocation.
-linkshared
Link against shared libraries previously created with -buildmode=shared.
-memprofile string
Write a memory profile to the specified file after all tests have passed.
-memprofilerate int
Enable more precise (and expensive) memory profiles by setting runtime.MemProfileRate.
-mod string
Go module control. See 'go help modules' for more.
-msan
Enable interoperation with memory sanitizer.
-n go test
Have go test print the commands but do not run them.
-noColor
If set, suppress color output in default reporter.
-nodes int
The number of parallel test nodes to run (default 1)
-noisyPendings
If set, default reporter will shout about pending tests. (default true)
-noisySkippings
If set, default reporter will shout about skipping tests. (default true)
-outputdir string
Place output files from profiling in the specified directory.
-p Run in parallel with auto-detected number of nodes
-pkgdir string
install and load all packages from the given dir instead of the usual locations.
-progress
If set, ginkgo will emit progress information as each spec runs to the GinkgoWriter.
-r Find and run test suites under the current directory recursively.
-race
Run tests with race detection enabled.
-randomizeAllSpecs
If set, ginkgo will randomize all specs together. By default, ginkgo only randomizes the top level Describe, Context and When groups.
-randomizeSuites
When true, Ginkgo will randomize the order in which test suites run
-regexScansFilePath
If set, ginkgo regex matching also will look at the file path (code location).
-requireSuite
Fail if there are ginkgo tests in a directory but no test suite (missing RunSpecs)
-seed int
The seed used to randomize the spec suite. (default 1552989981)
-skip string
If set, ginkgo will only run specs that do not match this regular expression.
-skipMeasurements
If set, ginkgo will skip any measurement specs.
-skipPackage string
A comma-separated list of package names to be skipped. If any part of the package's path matches, that package is ignored.
-slowSpecThreshold float
(in seconds) Specs that take longer to run than this threshold are flagged as slow by the default reporter. (default 5)
-stream
stream parallel test output in real time: less coherent, but useful for debugging (default true)
-succinct
If set, default reporter prints out a very succinct report
-tags string
A list of build tags to consider satisfied during the build.
-timeout duration
Suite fails if it does not complete within the specified timeout (default 24h0m0s)
-toolexec string
a program to use to invoke toolchain programs like vet and asm.
-trace
If set, default reporter prints out the full stack trace when a failure occurs
-untilItFails
When true, Ginkgo will keep rerunning tests until a failure occurs
-v If set, default reporter print out all specs as they begin.
-vet string
Configure the invocation of 'go vet' to use the comma-separated list of vet checks. If list is 'off', 'go test' does not run 'go vet' at all.
-work
Print the name of the temporary work directory and do not delete it when exiting.
-x go test
Have go test print the commands.

ginkgo watch --
--------------------------------------------------
Watches the tests in the passed-in packages and runs them when changes occur.
Any arguments after -- will be passed to the test.
Accepts all the flags that the ginkgo command accepts except for --keepGoing and --untilItFails

ginkgo build
-------------------------------
Build the passed-in packages (or the package in the current directory if left blank).
Accepts the following flags:
-a Force rebuilding of packages that are already up-to-date.
-asmflags string
Arguments to pass on each go tool asm invocation.
-blockprofilerate int
Control the detail provided in goroutine blocking profiles by calling runtime.SetBlockProfileRate with the given value. (default 1)
-buildmode string
Build mode to use. See 'go help buildmode' for more.
-compiler string
Name of compiler to use, as in runtime.Compiler (gccgo or gc).
-cover
Run tests with coverage analysis, will generate coverage profiles with the package name in the current directory.
-covermode string
Set the mode for coverage analysis.
-coverpkg string
Run tests with coverage on the given external modules.
-coverprofile string
Write a coverage profile to the specified file after all tests have passed.
-cpuprofile string
Write a CPU profile to the specified file before exiting.
-gccgoflags string
Arguments to pass on each gccgo compiler/linker invocation.
-gcflags string
Arguments to pass on each go tool compile invocation.
-installsuffix string
A suffix to use in the name of the package installation directory.
-ldflags string
Arguments to pass on each go tool link invocation.
-linkshared
Link against shared libraries previously created with -buildmode=shared.
-memprofile string
Write a memory profile to the specified file after all tests have passed.
-memprofilerate int
Enable more precise (and expensive) memory profiles by setting runtime.MemProfileRate.
-mod string
Go module control. See 'go help modules' for more.
-msan
Enable interoperation with memory sanitizer.
-n go test
Have go test print the commands but do not run them.
-outputdir string
Place output files from profiling in the specified directory.
-pkgdir string
install and load all packages from the given dir instead of the usual locations.
-r Find and run test suites under the current directory recursively.
-race
Run tests with race detection enabled.
-requireSuite
Fail if there are ginkgo tests in a directory but no test suite (missing RunSpecs)
-skipPackage string
A comma-separated list of package names to be skipped. If any part of the package's path matches, that package is ignored.
-tags string
A list of build tags to consider satisfied during the build.
-toolexec string
a program to use to invoke toolchain programs like vet and asm.
-vet string
Configure the invocation of 'go vet' to use the comma-separated list of vet checks. If list is 'off', 'go test' does not run 'go vet' at all.
-work
Print the name of the temporary work directory and do not delete it when exiting.
-x go test
Have go test print the commands.

ginkgo bootstrap
------------------------
Bootstrap a test suite for the current package
Accepts the following flags:
-agouti
If set, bootstrap will generate a bootstrap file for writing Agouti tests
-internal
If set, generate will generate a test file that uses the regular package name
-nodot
If set, bootstrap will generate a bootstrap file that does not . import ginkgo and gomega
-template string
If specified, generate will use the contents of the file passed as the bootstrap template

ginkgo generate
-----------------------------
Generate a test file named filename_test.go
If the optional argument is omitted, a file named after the package in the current directory will be created.
Accepts the following flags:
-agouti
If set, generate will generate a test file for writing Agouti tests
-internal
If set, generate will generate a test file that uses the regular package name
-nodot
If set, generate will generate a test file that does not . import ginkgo and gomega

ginkgo nodot
------------
Update the nodot declarations in your test suite
Any missing declarations (from, say, a recently added matcher) will be added to your bootstrap file.
If you've renamed a declaration, that name will be honored and not overwritten.

ginkgo convert /path/to/package
-------------------------------
Convert the package at the passed in path from an XUnit-style test to a Ginkgo-style test

ginkgo unfocus (or ginkgo blur)
-------------------------------
Recursively unfocuses any focused tests under the current directory

ginkgo version
--------------
Print Ginkgo's version

ginkgo help
---------------------
Print usage information. If a command is passed in, print usage information just for that command.


---

To create test file for some package (example):

../github.com/BojanKomazec/go-demo/internal/pkg/stringdemo$ ginkgo bootstrap
Generating ginkgo test suite bootstrap for stringdemo in:
        stringdemo_suite_test.go

Generated file stringdemo_suite_test.go looks like this:

package stringdemo_test

import (
   "testing"

   . "github.com/onsi/ginkgo"
   . "github.com/onsi/gomega"
)

func TestStringdemo(t *testing.T) {
   RegisterFailHandler(Fail)
   RunSpecs(t, "Stringdemo Suite")
}

This file's name has to be of the form *_test.go.

The package name can be adjusted to match the name of the package under test (e.g. package stringdemo instead of stringdemo_test). This can be done during bootstrap by passing the -internal argument.

---

To run all tests across all packages in the project and also print the coverage % use:

ginkgo -r -v -cover

To achieve the same with native go test do the following:

go test ./... -v -cover


Panic in a goroutine crashes test suite

When running dlv test on tests written with the Ginkgo framework, the following usage output appears:

API server listening at: 127.0.0.1:9379

Usage of C:\...\git.bk.com\example+project\internal\example_package\debug.test:
  -ginkgo.debug
    If set, ginkgo will emit node output to files when running in parallel.
  -ginkgo.dryRun
    If set, ginkgo will walk the test hierarchy without actually running anything.  Best paired with -v.
  -ginkgo.failFast
    If set, ginkgo will stop running a test suite after a failure occurs.
  -ginkgo.failOnPending
    If set, ginkgo will mark the test suite as failed if any specs are pending.
  -ginkgo.flakeAttempts int
    Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. (default 1)
  -ginkgo.focus string
    If set, ginkgo will only run specs that match this regular expression.
  -ginkgo.noColor
    If set, suppress color output in default reporter.
  -ginkgo.noisyPendings
    If set, default reporter will shout about pending tests. (default true)
  -ginkgo.noisySkippings
    If set, default reporter will shout about skipping tests. (default true)
  -ginkgo.parallel.node int
    This worker node's (one-indexed) node number.  For running specs in parallel. (default 1)
  -ginkgo.parallel.streamhost string
    The address for the server that the running nodes should stream data to.
  -ginkgo.parallel.synchost string
    The address for the server that will synchronize the running nodes.
  -ginkgo.parallel.total int
    The total number of worker nodes.  For running specs in parallel. (default 1)
  -ginkgo.progress
    If set, ginkgo will emit progress information as each spec runs to the GinkgoWriter.
  -ginkgo.randomizeAllSpecs
    If set, ginkgo will randomize all specs together.  By default, ginkgo only randomizes the top level Describe, Context and When groups.
  -ginkgo.regexScansFilePath
    If set, ginkgo regex matching also will look at the file path (code location).
  -ginkgo.seed int
    The seed used to randomize the spec suite. (default 1553106348)
  -ginkgo.skip string
    If set, ginkgo will only run specs that do not match this regular expression.
  -ginkgo.skipMeasurements
    If set, ginkgo will skip any measurement specs.
  -ginkgo.slowSpecThreshold float
    (in seconds) Specs that take longer to run than this threshold are flagged as slow by the default reporter. (default 5)
  -ginkgo.succinct
    If set, default reporter prints out a very succinct report
  -ginkgo.trace
    If set, default reporter prints out the full stack trace when a failure occurs
  -ginkgo.v
    If set, default reporter print out all specs as they begin.
  -test.bench regexp
    run only benchmarks matching regexp
  -test.benchmem
    print memory allocations for benchmarks
  -test.benchtime d
    run each benchmark for duration d (default 1s)
  -test.blockprofile file
    write a goroutine blocking profile to file
  -test.blockprofilerate rate
    set blocking profile rate (see runtime.SetBlockProfileRate) (default 1)
  -test.count n
    run tests and benchmarks n times (default 1)
  -test.coverprofile file
    write a coverage profile to file
  -test.cpu list
    comma-separated list of cpu counts to run each test with
  -test.cpuprofile file
    write a cpu profile to file
  -test.failfast
    do not start new tests after the first test failure
  -test.list regexp
    list tests, examples, and benchmarks matching regexp then exit
  -test.memprofile file
    write an allocation profile to file
  -test.memprofilerate rate
    set memory allocation profiling rate (see runtime.MemProfileRate)
  -test.mutexprofile string
    write a mutex contention profile to the named file after execution
  -test.mutexprofilefraction int
    if >= 0, calls runtime.SetMutexProfileFraction() (default 1)
  -test.outputdir dir
    write profiles to dir
  -test.parallel n
    run at most n tests in parallel (default 4)
  -test.run regexp
    run only tests and examples matching regexp
  -test.short
    run smaller test suite to save time
  -test.testlogfile file
    write test action log to file (for use only by cmd/go)
  -test.timeout d
    panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    write an execution trace to file
  -test.v
    verbose: print additional output


TBC...

Tuesday, 30 July 2019

Apache Ant Patterns


Apache Ant is a build tool primarily used for Java projects. It uses so-called "Ant-style" wildcards, which have been adopted and are now used by many other tools.

Ant-style wildcards:


?


  • Matches one character (any character except path separators)
  • used to match file names
  • matches one level
  • any character except path separators



*


  • Matches zero or more characters (not including path separators)
  • used to match file names
  • matches one level
  • any character except path separators


**


  • Matches zero or more path segments (directory tree) 
  • used for folder-names matching
  • includes/matches path separators (slash, /) 
  • matches multiple levels
  • src/**/*.cs will find all cs files in any sub-directory of src



If we have the following tree:

/dir1/dir2/file1.txt
/dir1/dir2/dir3/file2.txt

An Ant pattern which matches all .txt files under any dir2 directory would be:

**/dir2/**/*.txt

When ** is used as the name of a directory in the pattern, it matches zero or more directories.

References:


Directory-based Tasks
How do I use Nant/Ant naming patterns?
Pattern matching guide
Learning Ant path style

Monday, 29 July 2019

How to play .mp4 videos on Ubuntu

Install the following packages (and accept EULAs):

$ sudo apt-get update
$ sudo apt install libdvdnav4 libdvdread4 gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly libdvd-pkg
$ sudo dpkg-reconfigure libdvd-pkg
$ sudo apt install ubuntu-restricted-extras

(Tested on Ubuntu 18.04)

Wednesday, 17 July 2019

Introduction to Makefile

make commands

ifeq
else
endif
...

They MUST NOT be indented with TAB characters because (almost) every line in a makefile that starts with a TAB character is passed to the shell (/bin/sh), and the shell doesn't know anything about make commands. make commands can be indented with SPACE characters, but this might be misleading, as recipes are what MUST be indented with TABs.


Recipes


They must be indented with a TAB character in order to be passed to the shell.

Targets


Targets are labels that allow make to execute a group of commands together.

Makefile:

target1:
   @echo target1 is executing
target2:
   @echo target2 is executing

We can now run make as:

$ make target1

or

$ make target2


Conditional Execution


Use ifeq-endif or ifeq-else-endif blocks.

Makefile:

VAR1= test
VAR2=nottest

demo-if-else-endif:

ifeq ($(VAR1), $(VAR2))
   ...
else
   ...
endif

Makefile ifeq: when are they evaluated?

Variable comparison


TEST = ON

# example target wrapping the conditional; in a real makefile the @echo recipe lines must be TAB-indented
check-test:
ifeq ($(TEST),ON)
    @echo PASSED
else
    @echo FAILED
endif


To check if variable is empty:

ifeq ($(TEST),)
TEST := $(something else)
endif

Makefile set if variable is empty


Wednesday, 26 June 2019

Software Design

Main Principles


Code should be correct, clear and efficient.
Prefer simple. Avoid clever.
(taken from https://yourbasic.org/)

Get the MVP working (correctly) first. Release it and analyze feedback. Revenue, not the beauty of the code, should drive development...but code should be well-designed if TDD is followed. Refactor in an evolutionary, not revolutionary, way.

Software design
SOLID Principles

12 Factor Applications 
Twelve-Factor App methodology

TDD - Test-Driven Development

I feel comfortable replacing one implementation of a function with another only if that function is covered by unit tests.

POD - Performance-Oriented Development

Command-line Arguments



  • To indicate optional arguments, square brackets are commonly used; they can also group parameters that must be specified together.
  • To indicate required arguments, angle brackets are commonly used, following the same grouping conventions as square brackets.
  • Mutually exclusive parameters can be indicated by separating them with vertical bars within groups.

Logical Expressions

Use Boolean Algebra laws to simplify complex conditions (logical expressions).

Global Variables

They should be avoided unless they are static/singletons that represent an object with cross-cutting concern functionality.

Global Variables Are Bad

Functions

Functions should be simple, short and follow the SRP (Single Responsibility Principle). E.g. if a function has to create a file at some path, don't make it also create that path (if the path does not exist). Create another function which is responsible ONLY for creating paths instead.

Don't make library/package functions asynchronous by default - allow users to choose how they want to consume them - synchronously or asynchronously. They can always create an async wrapper around them.

The same goes for functions in Go. We could make them accept a sync.WaitGroup argument so they can be awaited...but we should make the function do only its main job, as fiddling with the wait group pollutes the function's main functionality and thus breaks SRP.

func foo(arg1 T1, arg2 T2, wg *sync.WaitGroup) {
   wg.Add(1)
   defer wg.Done()
   ...
}

In the same way, don't add logging to library/package functions. Return errors/throw exceptions with error messages/codes instead. The user of the library should decide what they want to see in the log output.
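
As a minimal sketch of this idea, a hypothetical readConfig library function (the name and the config path are just illustrative) returns an error and lets the caller decide whether and how to log it:

package main

import (
   "fmt"
   "io/ioutil"
   "log"
)

// readConfig is a hypothetical library-style function: it never logs,
// it only returns an error which the caller can inspect or log.
func readConfig(path string) ([]byte, error) {
   data, err := ioutil.ReadFile(path)
   if err != nil {
      return nil, fmt.Errorf("reading config %q: %v", path, err)
   }
   return data, nil
}

func main() {
   // The application (not the library) decides how to report the failure.
   if _, err := readConfig("/etc/myapp/config.yaml"); err != nil {
      log.Printf("startup failed: %v", err)
   }
}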

If a function has multiple parameters and e.g. one parameter is used only in one part of the function, check whether that part of the function is doing a task (or...has responsibility for one "thing") that could be extracted into a separate function.

Indentation & Single Point of Return


There are two schools here: one which recommends that each function should have a single point of return, and one that allows multiple points of return.

Single point of return:
  • if the function is long, this increases the chances of having multiple levels of nested conditions
  • the returned value (error) is assigned at multiple places and at multiple levels
  • it's difficult to track the positive execution path
Multiple points of return:
  • prevents deep levels of indentation (such functions usually have only two)
  • it is easy to track which expression makes the function return which error
  • we can use indentation to visually separate the positive and error paths: the positive path of execution consists of expressions at the 1st indentation level, while error handling sits at the 2nd (indented) level (see Code: Align the happy path to the left edge)
Here are some more Tips for a good line of sight from Mat Ryer:
  • Align the happy path to the left; you should quickly be able to scan down one column to see the expected execution flow
  • Don’t hide happy path logic inside a nest of indented braces
  • Exit early from your function
  • Avoid else returns; consider flipping the if statement
  • Put the happy return statement as the very last line
  • Extract functions and methods to keep bodies small and readable
  • If you need big indented bodies, consider giving them their own function
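
A minimal sketch of these tips, using a hypothetical createFile helper: error handling is indented and exits early, while the happy path stays at the left edge and the happy return is the last line.

package example

import (
   "os"
   "path/filepath"
)

// createFile is a hypothetical example: each error exits early,
// so the expected execution flow reads straight down the left edge.
func createFile(dir, name string) (*os.File, error) {
   if err := os.MkdirAll(dir, 0755); err != nil {
      return nil, err
   }

   f, err := os.Create(filepath.Join(dir, name))
   if err != nil {
      return nil, err
   }

   return f, nil // the happy return statement is the very last line
}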

How small function should be?

How small should functions be?
Small Functions considered Harmful
What should be the maximum length of a function?
How can wrapping an expression as a function be Clean Code?


Arguments Validation

Arguments should be validated when their values come from the wild outside world, which happens at the public API boundary. Contract validation frameworks can be used.

In private/internal functions we can use assertions (in C++/C#) or no validation at all and allow the application to crash.

Classes 

Prefer composition over inheritance.

TBD...

Logging

Don't use logging as a substitute for proper debugging. Logging is a poor man's debugging. Learn how to use the debugging and profiling tools relevant to your development stack and IDE.

Think carefully about what goes into the log. If there is no error, don't write a log message like this:

Created symlink ./.../myapp--setup.exe --> myapp-setup.exe. Error: <nil>

When you later analyze the log file and search for the word "error", you'll get tons of false positives.


Documentation

The older I get, the less I like having documentation about the software I write anywhere else but in the source code itself. This reduces information redundancy, duplication, and situations where the documentation is not in sync with the implementation. Brief but comprehensive comments, plus tools (example1) which can extract the desired information from them, should do the job. High unit test coverage (and BDD-style tests, if you are kind to the non-technical members of the team) should also help, as reading test names should be as informative and as easy as reading a requirement specification.


Some Common Patterns and Anti-Patterns


Producer - Consumer


What is the benefit of writing to a temp location, And then copying it to the intended destination?

TBD...

Wednesday, 12 June 2019

Introduction to Go language

Here are some notes on Go language features, packages and tools.

---
First, read this: How to Write Go Code

Upon installing Go on Ubuntu, its source code is placed at the following path: /var/lib/go/src/
---
For the build tools to produce an executable, the function main must be declared, and it becomes the entry point for the program.

Function main is located in a package called main.
If your main function doesn’t exist in package main, the build tools won’t produce an executable.

Every code file in Go belongs to a package, and main.go is no exception.

Packages are an important feature of Go.

Packages define a unit of compiled code and their names help provide a level of indirection to the identifiers that are declared inside of them, just like a namespace. This makes it possible to distinguish identifiers that are declared with exactly the same name in the different packages you import.

Imports are just that: they import code and give you access to identifiers such as types,
functions, constants, and interfaces.

All code files in a folder must use the same package name, and it's common practice to name the package after the folder. As stated before, a package defines a unit of compiled code, and each unit of code represents a package.

Each code file will contain the keyword package at the top with a name for the package, e.g. each code file in the "search" folder will contain "search" for the package name.

Imported packages can be grouped as:

import (
   "log"
   "os"
    _ "github.com/goinaction/code/chapter2/sample/matchers"
   "github.com/goinaction/code/chapter2/sample/search"
)

Packages from the standard library don't need to have a path specified.
Packages from the local workspace have to have their relative path specified (relative to the workspace).

When you import code from the standard library, you only need to reference the name of the package, unlike when you import code from outside of the standard library. The compiler will always look for the packages you import at the locations referenced by the GOROOT and GOPATH environment variables.

what should be the values of GOPATH and GOROOT?

We import the matchers package and use the blank identifier (_) before listing out the import path. This is a technique in Go to allow initialization from a package to occur, even if you don't directly use any identifiers from the package. To make your programs more readable, the Go compiler won't let you declare a package to be imported if it's not used. The blank identifier allows the compiler to accept the import and call any init functions that can be found in the different code files within that package.

All init functions in any code file that are part of the program will get called before
the main function.

When is the init() function run?

func init() {
  // Change the device for logging to stdout.
   log.SetOutput(os.Stdout)
   ...
}

This init function sets the logger from the standard library to
write to the stdout device. By default, the logger is set to write to the stderr device.

call to the Run function that belongs to the search package:
search.Run("president")

The log package provides support for logging messages to stdout, stderr, or even custom devices. The sync package provides support for synchronizing goroutines.

var matchers = make(map[string]Matcher)
This variable is located outside the scope of any function and so is considered a package-level variable. The variable is declared using the keyword var and is declared as a map of Matcher type values with a key of type string.

the name of the variable matchers starts with a lowercase letter.

In Go, identifiers are either exported or unexported from a package.

  • An exported identifier can be directly accessed by code in other packages when the respective package is imported. These identifiers start with a capital letter.
  • Unexported identifiers start with a lowercase letter and can’t be directly accessed by code in other packages. But just because an identifier is unexported, it doesn’t mean other packages can’t indirectly access these identifiers. As an example, a function can return a value of an unexported type and this value is accessible by any calling function, even if the calling function has been declared in a different package.
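
A small sketch of these visibility rules, using a hypothetical greeting package:

package greeting

// Hello is exported (capital letter): other packages can call greeting.Hello.
func Hello() string {
   return "hello"
}

// secret is unexported (lowercase): it is visible only inside package greeting.
var secret = 42

// Secret is exported and returns the unexported value, so other packages
// can still access it indirectly.
func Secret() int {
   return secret
}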


This variable declaration also contains an initialization of the variable via the assignment operator and a special built-in function called make.

A map is a reference type that you're required to make in Go. If you don't make the map first and assign it to your variable, you'll receive errors when you try to use the map variable. This is because the zero value for a map variable is nil.

In Go, all variables are initialized to their zero value. For numeric types, that value is 0; for strings it's an empty string; for Booleans it's false; and for pointers, the zero value is nil. When it comes to reference types, there are underlying data structures that are initialized to their zero values. But variables declared as a reference type set to their zero value will return the value of nil.
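
A short sketch of zero values and of why a map has to be made before use:

package main

import "fmt"

func main() {
   // Zero values: 0 for numbers, "" for strings, false for Booleans, nil for pointers.
   var n int
   var s string
   var b bool
   var p *int
   fmt.Println(n, s, b, p) // 0 "" false <nil>

   // The zero value of a map is nil; writing to a nil map panics.
   var m map[string]int
   // m["one"] = 1 // panic: assignment to entry in nil map

   // make initializes the underlying data structure.
   m = make(map[string]int)
   m["one"] = 1
   fmt.Println(m) // map[one:1]
}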

Go programs can be structured to handle the launching and synchronization of goroutines that run concurrently

To declare a function in Go, use the keyword func followed by the function name, any
parameters, and then any return values.

feeds, err := RetrieveFeeds()
if err != nil {
   log.Fatal(err)
}

NOTE: You can omit the parentheses () from an if statement in Golang, but the curly braces {} are mandatory!

This function belongs to the search package and returns two values. The first return value is a slice
of Feed type values. A slice is a reference type that implements a dynamic array. You use
slices in Go to work with lists of data.

The second return value is an error.

Functions can have multiple return values. It's common to declare functions that return a value and an error value, just like the RetrieveFeeds function. If an error occurs, never trust the other values being returned from the function. They should always be ignored, or else you run the risk of the code generating more errors or panics.

The short variable declaration operator (:=) is used to both declare and initialize variables at the same time. The type of each value being returned is used by the compiler to determine the type for each variable, respectively. The short variable declaration operator is just a shortcut to streamline your code and make the code more readable. The variable it declares is no different than any other variable you may declare when using the keyword var.

results := make(chan *Result)


  • = assigns the value on the right to the variable on the left
  • := creates a new variable named on the left, and assigns it the value of the item on the right


We use the built-in function make to create an unbuffered channel. We use the short variable declaration operator to declare and initialize the channel variable with the call to make. A good rule of thumb when declaring variables is to use the keyword var when declaring variables that will be initialized to their zero value, and to use the short variable declaration operator when you're providing extra initialization or making a function call.

Channels are also a reference type in Go, like maps and slices, but channels implement a queue of typed values that are used to communicate data between goroutines. Channels provide inherent synchronization mechanisms to make communication safe.
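
A minimal sketch of an unbuffered channel used to pass data between two goroutines:

package main

import "fmt"

func main() {
   // An unbuffered channel: the send blocks until the receive happens,
   // so the communication itself synchronizes the two goroutines.
   results := make(chan string)

   go func() {
      results <- "done"
   }()

   fmt.Println(<-results) // prints "done"
}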


The next two lines of code are used later to prevent the program from terminating
before all the search processing is complete.

// Setup a wait group so we can process all the feeds.
var waitGroup sync.WaitGroup
// Set the number of goroutines we need to wait for while
// they process the individual feeds.
waitGroup.Add(len(feeds))


In Go, once the main function returns, the program terminates. Any goroutines that were launched and are still running at this time will also be terminated by the Go runtime. When you write concurrent programs, it's best to cleanly terminate any goroutines that were launched prior to letting the main function return. Writing programs that can cleanly start and shut down helps reduce bugs and prevents resources from being corrupted.

Our program is using a WaitGroup from the sync package to track all the goroutines we’re going to launch. A WaitGroup is a great way to track when a goroutine is
finished performing its work. A WaitGroup is a counting semaphore, and we’ll use it to
count off goroutines as they finish their work.

We declare a variable of type WaitGroup from the sync package, and then we set the value of the WaitGroup variable to match the number of goroutines we're going to launch. As you'll soon see, we'll process each feed concurrently with its own goroutine. As each goroutine completes its work, it will decrement the count of the WaitGroup variable, and once the variable gets to zero, we'll know all the work is done.


for _, item := range items {
   ...
}

range (keyword)

  • can be used with arrays, strings, slices, maps, and channels

for range

  • used to iterate over the slice of items
  • When used to iterate over a slice, we get two values back on each iteration:
    • index position of the element we’re iterating over
    • a copy of the value in that element

blank identifier (_)

  • used as a substitution for the variable that would be assigned to the index value for the range call. When you have a function that returns multiple values, and you don’t have a need for one, you can use the blank identifier to ignore those values. In our case with this range, we won’t be using the index value, so the blank identifier allows us to ignore it.


matcher, exists := matchers[feed.Type]
if !exists {
   matcher = matchers["default"]
}

We check the map for a key that matches the feed type. When looking up a key in a map, you have two options: you can assign a single variable or two variables for the lookup call. The first variable is always the value returned for the key lookup, and the second value, if specified, is a Boolean flag that reports whether the key exists or not. When a key doesn't exist, the map will return the zero value for the type of value being stored in the map. When the key does exist, the map will return a copy of the value for that key.

// Launch the goroutine
go func(matcher Matcher, feed *Feed) {
   Match(matcher, feed, searchTerm, results) 
   waitGroup.Done()
}(matcher, feed)


goroutine

  • light-weight process that is automatically time-sliced onto one or more operating system threads by the Go runtime.
  • a function that’s launched to run independently from other functions in the program.
  • Use the keyword go to launch and schedule goroutines to run concurrently. 
  • the order the goroutines get executed is unpredictable
  • can be an anonymous function
  • can be launched in a (for range) loop, for each element of some set: this allows each element to be processed independently in a concurrent fashion
  • There's no goroutine ID available from the runtime
  • not an OS thread (thread managed/scheduled natively by OS)
  • not exactly a green thread (a thread managed/scheduled by a language's runtime or virtual machine) [Green threads] [Why not Green Threads?]
  • is a special type of coroutine (concurrent subroutines - functions, closures or methods) - one that is non-preemptive. It cannot be interrupted, but instead has multiple points at which it can be suspended or reentered. The Go runtime defines these points internally and automatically suspends goroutines when they block and resumes them when they become unblocked.
  • goroutines operate within the same address space as each other, and host functions
  • Go implements M:N scheduler, which means it maps M green threads to N OS threads. Goroutines are then scheduled onto the green threads. When we have more goroutines than green threads available, the scheduler handles the distribution of the goroutines across the available threads and ensures that when these goroutines become blocked, other goroutines can be run.
  • Go follows a model of concurrency called the fork-join model.

To launch a function as a goroutine:
func foo() {
   fmt.Println("foo()")
}

go foo()

To launch anonymous function as goroutine:

go func() {
   fmt.Println("foo()")
}()

---


anonymous function

  • a function that’s declared without a name
  • can take parameters

pointer variables

  • are great for sharing variables between functions. They allow functions to access and change the state of a variable that was declared within the scope of a different function and possibly a different goroutine.
  • In Go, all variables are passed by value. Since the value of a pointer variable is the address to the memory being pointed to, passing pointer variables between functions is still considered a pass by value.


waitGroup.Done()

Once the main task within the goroutine completes, we execute the code which decrements the WaitGroup count. Once every goroutine has called the Done method, the program knows that every main task has been done (e.g. every element has been processed).

There's something else interesting about the method call to Done: the WaitGroup value was never passed into the anonymous function as a parameter, yet the anonymous function has access to it. Go supports closures and you're seeing this in action. In fact, the searchTerm and results variables are also being accessed by the anonymous function via closures. Thanks to closures, the function can access those variables directly without the need to pass them in as parameters. The anonymous function isn't given a copy of these variables; it has direct access to the same variables declared in the scope of the outer function. This is the reason why we don't use closures for the matcher and feed variables.

---

With all the processing goroutines working, sending results on the results channel
and decrementing the waitGroup counter, we need a way to display those results and
keep the main function alive until all the processing is done. We'll launch yet another anonymous function as a goroutine. This anonymous function takes no parameters and uses closures to access both the waitGroup and results variables. This goroutine calls the method Wait() on the WaitGroup value, which is causing the goroutine to block until the count for the WaitGroup hits zero. Once that happens, the goroutine calls the built-in function close on the channel, which as you’ll see causes the program to terminate.


// Launch a goroutine to monitor when all the work is done.
go func() {
   // Wait for everything to be processed.
   waitGroup.Wait()

   // Close the channel to signal to the Display

   // function that we can exit the program.
   close(results)
}()

---

func Match(matcher Matcher, feed *Feed, searchTerm string, results chan<- *Result) {
   // Perform the search against the specified matcher.
   searchResults, err := matcher.Search(feed, searchTerm)
   if err != nil {
      log.Println(err)
      return
   }
   // Write the results to the channel.
   for _, result := range searchResults {
      results <- result
   }
}

results are written to the channel

BK: note how nil is used instead of null
BK: note how convenient the model of returning a function result and an error at the same time is; there is no need to pass a variable by pointer and no expensive exceptions
BK: note how there are no parentheses around conditions in if, for... statements.

---

Variables


Variable names can't start with a number.
Variable names shouldn't clash with the names of imported packages (a local variable with the same name shadows the package).


---

Constants


  • cannot be declared using the := syntax

const Pi = 3.14

Tour of Go - Constants
The Go Blog - Constants

"Hello, 世界" is untyped string constant.
It remains an untyped string constant even when given a name:

const hello = "Hello, 世界"

hello is also an untyped string constant.

An untyped constant is just a value, one not yet given a defined type that would force it to obey the strict rules that prevent combining differently typed values.


A typed string constant is one that's been given a type:

const typedHello string = "Hello, 世界"


---

Logical Operators


Operands in logical expressions are evaluated lazily - only if needed:

A && B <==> If A then B else FALSE
A || B <==> If A then TRUE else B
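
A short sketch of how this short-circuit evaluation is commonly used to guard an expression that would otherwise panic:

package main

import "fmt"

func main() {
   var p *[]int

   // len(*p) is evaluated only if p != nil is true, so the nil check
   // safely guards the dereference.
   if p != nil && len(*p) > 0 {
      fmt.Println("non-empty")
   } else {
      fmt.Println("nil or empty")
   }
}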


---

Functions

Go by Example: Functions

func plus(a int, b int) int {...}


Variable can be of a function type and be assigned a function:

sayHello := func() {
   fmt.Println("hello")
}

go sayHello()


Go by Example: Variadic Functions

func sum(nums... int){...}

Specifying function's default value for an argument is NOT supported.
Default value in Go's method
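
A small sketch of calling a variadic function, including expanding an existing slice with the ... operator:

package main

import "fmt"

// sum accepts any number of ints; inside the function nums is a []int.
func sum(nums ...int) int {
   total := 0
   for _, n := range nums {
      total += n
   }
   return total
}

func main() {
   fmt.Println(sum(1, 2, 3)) // 6

   xs := []int{4, 5, 6}
   fmt.Println(sum(xs...)) // an existing slice can be expanded with ...
}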


Naming


Your custom function can't have the same name as the name of some imported package.

Function Overloading


Does the Go language have function/method overloading? (Answer: NO)
Optional parameters, default parameter values and method overloading

The idiomatic way to emulate optional parameters and method overloading in Go is to write several methods with different names.

Alternative Patterns for Method Overloading in Go
Functional options for friendly APIs <-- MUST READ (!)
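
A minimal sketch of the functional options pattern described in the article above; the Server type, WithTimeout option and NewServer constructor are hypothetical names used only for illustration:

package main

import "fmt"

type Server struct {
   addr    string
   timeout int // seconds
}

// Option mutates a Server during construction.
type Option func(*Server)

func WithTimeout(seconds int) Option {
   return func(s *Server) { s.timeout = seconds }
}

// NewServer applies sensible defaults, which callers can override
// by passing any number of options.
func NewServer(addr string, opts ...Option) *Server {
   s := &Server{addr: addr, timeout: 30}
   for _, opt := range opts {
      opt(s)
   }
   return s
}

func main() {
   fmt.Println(*NewServer("localhost:8080"))                 // default timeout
   fmt.Println(*NewServer("localhost:8080", WithTimeout(5))) // overridden timeout
}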


Closures


Go by Example: Closures

What exactly does “closing over” mean?

 "A closes over B" == "B is a free variable of A", where free variables are those that appear in a function's body, but not in its signature.


What will be the output of the following snippet?

var wg sync.WaitGroup

for _, letter := range []string{"a", "b", "c"} {
   wg.Add(1)
   go func() {
      defer wg.Done()
      fmt.Println(letter)
   }()
}

wg.Wait()


Go closure variable scope

Closures in Go capture variables by reference. That means the inner function holds a reference to the loop variable (letter in the snippet above) in the outer scope, and every goroutine accesses this same variable. By the time the goroutines actually run, the loop has typically already finished, so the snippet above will most likely print "c" three times (the exact output depends on scheduling).
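
A common fix is to pass the loop variable to the goroutine as an argument, so each goroutine works on its own copy:

package main

import (
   "fmt"
   "sync"
)

func main() {
   var wg sync.WaitGroup

   for _, letter := range []string{"a", "b", "c"} {
      wg.Add(1)
      go func(letter string) { // each goroutine receives its own copy
         defer wg.Done()
         fmt.Println(letter)
      }(letter)
   }

   wg.Wait() // prints "a", "b", "c" in an unpredictable order
}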


Closure (computer programming)

Compare to JS Closures.
Lexical scoping:
This is an example of lexical scoping, which describes how a parser resolves variable names when functions are nested. The word "lexical" refers to the fact that lexical scoping uses the location where a variable is declared within the source code to determine where that variable is available. Nested functions have access to variables declared in their outer scope.


How golang's “defer” capture closure's parameter?


Data Types

How to find a type of an object in Go?

import "reflect"
tst := "string"
fmt.Println(reflect.TypeOf(tst))


A Tour of Go - Type switches

switch v := i.(type) {
case T:
    // here v has type T
case S:
    // here v has type S
default:
    // no match; here v has the same type as i
}

Type assertions

x.(T)

Type casting (Type conversions)

string(x)


---
type is a keyword in Go and can't be used as e.g. a struct field name (the compiler issues an error: syntax error: unexpected type, expecting field name or embedded type)

What (exactly) does the type keyword do in go?
---

Where to declare custom types?

Declare a (custom) type just before the place where you need it. It does not have to be at the beginning of the file; e.g. your custom struct can be declared just before the set of functions that use it. Example: https://golang.org/src/net/http/client.go

Golang - Code organisation with structs, variables and interfaces

All the files are based on features, and it is best to use a proximity principle approach, where you can find in the same file the definition of what you are using.
Generally, those features are grouped in one file per package, except for large ones, where one package is composed of many files (net, net/http)

If you want to separate anything, separate the source (xxx.go) from the tests/benchmarks (xxx_test.go)


iota is a predeclared untyped integer identifier which resets to 0 at each constant declaration and whose value is incremented by 1 for each constant within that declaration.

type Day int

const (
 MONDAY Day = 1 + iota
 TUESDAY
 WEDNESDAY
 THURSDAY
 FRIDAY
 SATURDAY
 SUNDAY
)

var days = [...]string {
 "MONDAY",
 "TUESDAY",
 "WEDNESDAY",
 "THURSDAY",
 "FRIDAY",
 "SATURDAY",
 "SUNDAY",
}

func (day Day) String() string {
 return days[day - 1]
}

The type of all the constants above is Day. But be careful: the type of the first constant in a const block applies to the subsequent constants only if no values are explicitly assigned to them.

Take this example:

type architecture string
const (
mac   architecture = "mac"
win64              = "win64"
win86              = "win86"
)

Only mac is of type architecture. win64 and win86 are still of type string! Solution:

const (
mac    architecture = "mac"
win64  architecture = "win64"
win86  architecture = "win86"
)

Go does not provide an off-the-shelf way to find the count of enum values, but here is one trick:

const (
Sunday = iota
Monday
Tuesday
Wednesday
Thursday
Friday
Partyday
numberOfDays  // this constant is not exported
)

We could have used string as the underlying type:

type Day string
const (
   MONDAY Day = "MONDAY"
   TUESDAY = "TUESDAY"
   WEDNESDAY = "WEDNESDAY"
   THURSDAY = "THURSDAY"
   FRIDAY = "FRIDAY"
   SATURDAY = "SATURDAY"
   SUNDAY = "SUNDAY"
)


for _, day := range []Day{MONDAY, TUESDAY} {
   // convert the enumerator to a string
   s := string(day)
   fmt.Println(s)
}
---

4 iota enum examples
Ultimate Visual Guide to Go Enums and iota
Create an enum from a group of related constants in Go


Structures


type S struct {
   X int
}

// create an instance of S
var s1 S

// create an instance of S and get a pointer to it
s2 := &S{123} 
x1 := (*s2).X 
x2 := s2.X // a shorter form

// another way to get a pointer:
s3 := new(S)

// create an instance and initialize it
s4 := S {X: 456}
s5 := S {789}


If some function has to return a struct variable:

return S{1}


Empty structure


The empty struct 

type S struct{}
var s S
fmt.Println(unsafe.Sizeof(s)) // 0

The chan struct{} construct is used for signalling between goroutines.
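
A minimal sketch of such signalling: closing the channel wakes up the receiver without sending any data.

package main

import "fmt"

func main() {
   done := make(chan struct{})

   go func() {
      fmt.Println("working...")
      close(done) // signal completion; no value needs to be sent
   }()

   <-done // blocks until the channel is closed
   fmt.Println("done")
}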

---
Returning nil for a struct? - If a function returns (MyStruct, error), which value of MyStruct should be returned in case of an error?

Don't be too concerned about sending back an empty/blank struct, as it's generally good practice to first check the error before proceeding to do anything with the value of the struct. So its value is generally irrelevant until the error has been checked.
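
A short sketch of this, with a hypothetical MyStruct and load function: on failure the zero value of the struct is returned together with a non-nil error, and the caller checks the error first.

package main

import (
   "errors"
   "fmt"
)

type MyStruct struct {
   Value int
}

// load returns the zero value of MyStruct together with an error on failure.
func load(ok bool) (MyStruct, error) {
   if !ok {
      return MyStruct{}, errors.New("load failed")
   }
   return MyStruct{Value: 42}, nil
}

func main() {
   s, err := load(false)
   if err != nil {
      fmt.Println("error:", err) // s is irrelevant once err != nil
      return
   }
   fmt.Println(s.Value)
}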
---


---

Channels

Curious Channels

Go by Example: Closing Channels
How to Gracefully Close Channels
Range and Close
idiomatic goroutine termination and error handling

To pass a channel as a function argument:

func foo(c chan struct{}) {
   ...
}

...
c := make(chan struct{}, 10)
foo(c)
...

Should channel be passed by reference (as a pointer)?
No. See Are channels passed by reference implicitly

The reference types in Go are slice, map and channel. When passing these, you're making a copy of the reference. Strings are also implemented as a reference type, though they're immutable.




---

String 


s := "This is in first line"
s += "\n"
s += "...and this is in the second line"

How do you write multiline strings in Go?

A raw string literal (enclosed in backticks) does not parse escape sequences (\n would remain as two characters):

`line 1
line 2\n
line 3`

It is possible to use formatters though:

fmt.Sprintf(`a = %d`, 123)

Another option:

"line 1" +
"line 2" +
"line 3"



What is the difference between backticks (``) & double quotes (“”) in golang?

String Comparison - use == or !=

Slice string into letters

String Formatting 


s := fmt.Sprintf("a %s", "string")

Go by Example: String Formatting
Extracting substrings in Go

%x - formats value as HEX string with lowercase letters
%X - formats value as HEX string with uppercase letters

var n []byte = ...
s := fmt.Sprintf("%x", n)



The empty interface


interface{}

  • interface type that specifies zero methods
  • may hold values of any type
  • used by code that handles values of unknown type

var i interface{}
i = 42
i = "hello"

A Tour of Go - The empty interface



package main

import (
   "fmt"
)

func main() {
   var i interface{}
   var j interface{} = 2
   var k *interface{}
   i = &j
   k = &j
   fmt.Println(i)

   // invalid indirect of i (type interface {})
   // fmt.Println(*i)

   fmt.Println(*k)
}

Output:

0x40c130
2


Array


  • collection of elements of a single type
  • a fixed-length data type that contains a contiguous block of elements of the same type


var intArray [5]int
intArray[0] = 10
var intArray = [5]int {10, 20, 30}
var intArray = [5]int {0:10, 2:30, 4:50}
intArray := [5]int {10, 20, 30, 40, 50}
intArray := [...]int {10, 20, 30, 40, 50}

Create array of array literal in Golang

a := [][]int{{1,2,3},{4,5,6}}



Slice

  • a dynamically-sized segment (view) of an underlying array that can grow and shrink
  • indexable and have a length
  • used as a dynamic array - when we want to use an array but don't know its size in advance
  • can be created with the built-in function make. For slices, make takes the type and length as mandatory arguments and the capacity as an optional third argument

var numbers []int
or
numbers := make([]int, 0)

numbers = append(numbers, 1)
numbers = append(numbers, 2)
fmt.Println(len(numbers)) // == 2

Note this:

slc := make([]string, len(in)) // assuming len(in) == 3, creates a slice which already contains 3 elements (empty strings)

slc := make([]string, 0, len(in)) // creates a slice which contains 0 elements but has capacity to hold 3 (3 strings)


Creating a slice with make
Go Slices: usage and internals
The Go Blog - Arrays, slices (and strings): The mechanics of 'append'


Declare slice or make slice?

var s []int
  • simple declaration
  • creates so called "nil slice"
  • does not allocate memory
  • s points to nil
  • should not be a return value of an API which returns slice (an empty slice should be returned)
  • marshaling the nil slice (var s []int) will produce null

s := make([]int, 0)
  • creates so called "empty slice"
  • allocates memory
  • s is non-nil and holds 0 elements
  • should be returned if an API needs to return a slice with 0 elements
  • marshalling the empty slice will produce the expected [] (see the sketch below)
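
A sketch of the marshalling difference mentioned above (using encoding/json):

var nilSlice []int
emptySlice := make([]int, 0)

b1, _ := json.Marshal(nilSlice)   // null
b2, _ := json.Marshal(emptySlice) // []
fmt.Println(string(b1), string(b2))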


Convert slice of string to slice of pointer to string

Convert slice of type A to slice of pointers to type A

This:

for _, v := range a {
   b = append(b, &v)
}

and this:

for i := 0; i < len(a); i++ {
   b = append(b, &a[i])
}

are not the same. In the latter, you are taking the address of element a[i] in the array (which is what you want). In the former, you are taking the address of the iteration variable v, which, as you discovered, does not change during the loop execution (it gets allocated once, it's in a completely new location in memory, and gets overwritten at each iteration).

BK: We can still use range:

for i := range a {
   b = append(b, &a[i])
}

---
How to declare a slice of slices (2-D slice)?


A slice literal is written as []type{<value 1>, <value 2>, ... }.

A slice of ints would be []int{1,2,3}
A slice of int slices would be [][]int{{1,2,3}, {4,5,6}}

var s2d [][]string = [][]string{{"a", "b", "c"}, {"d", "e", "f"}}

Try it in The Go Playground


---

Interfaces

A struct implements an interface if it implements all of the interface's methods.


pinger/pinger.go:

package pinger

type (
   PingerConfig struct {
      timeout int
   }

   Pinger interface {
      Ping(ip string) error
   }

   // New is a Pinger factory method
   // New func(pingerConfig PingerConfig) (Pinger, error)
)

superPinger/pinger.go:

package superpinger

import (
   "example.com/myproduct/myproject/internal/pkg/pinger"
)

type superPinger struct {
   id string
}

// New creates a superPinger instance and returns it as a pinger.Pinger
func New(config pinger.PingerConfig) (pinger.Pinger, error) {
   sp := superPinger{} // fields would be initialized from config here
   err := sp.Init(...)
   return sp, err
}

// Ping verifies that the remote endpoint is accessible
func (sp superPinger) Ping(ip string) error {
   err := doPing(ip)
   return err
}

main.go:

import (
   "example.com/myproduct/myproject/internal/pkg/pinger"
   "example.com/myproduct/myproject/internal/pkg/superpinger"
)

...
pingerConfig := pinger.NewPingerConfig(...)
p, err := superpinger.New(pingerConfig)
if err != nil {
   // handle error
}
p.Ping("8.8.8.8")
...


How to pass an interface to function: by value or pointer?

Which is correct:

func CalculateSHA(h hash.Hash, file *os.File) ([]byte, error)

or

func CalculateSHA(h *hash.Hash, file *os.File) ([]byte, error)

(hash.Hash is an interface)
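
The first form (interface passed by value) is the idiomatic one, as the quotes below explain. A sketch of how such a function could look, assuming the caller passes e.g. sha256.New() as h:

func CalculateSHA(h hash.Hash, file *os.File) ([]byte, error) {
   if _, err := io.Copy(h, file); err != nil {
      return nil, err
   }
   return h.Sum(nil), nil
}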


Interfaces and Pointers to Interfaces
you almost never need a pointer to an interface, since interfaces are just pointers themselves
If I was doing a code review and saw that you were passing the address of an interface value to a function, it would raise a flag. An interface value is designed to be copied. An interface value is a reference type, just like slices, maps, channels and function variables. 
Have the pointer inside the interface if possible, rather than as a pointer to interface.
can't use pointers to interface types?

>     func Foo(r io.Reader)
>
> Why is this? Won't this copy the thing?
interface{} is just 2 words, copying it is very cheap.
interface copy does not copy the stored object, interface assignment can  
Function calls in Go (like most other languages) are 'pass by value'. The values passed to the function are copied onto the stack for the function to access. In languages without complex value types (i.e. structs) these values are either primitive values (ints, floats etc) or they are references (pointers) to objects.  
Interface values in Go are the size of two pointers, one pointer to type information and the other is a pointer to the value currently stored in the interface value. eg. an io.Reader that you've assigned an *os.File to will contain that *os.File pointer and a pointer to the *os.File type information.  
Passing interface values to a function copies the interface value.
It generally doesn't make sense to declare a function that takes a pointer to an interface, e.g. *io.Writer. It's not very useful - io.Writer itself might *contain* a pointer (as in your p2 example) and all is fine. 

Go Data Structures: Interfaces


Generics (Templates)


They do NOT exist in Go!

That's why you'll need to write many functions from scratch...like here:
What is the correct way to find the min between two integers in Go?
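
Since there is no generic min, a typed helper has to be written by hand. A sketch for int (math.Min only covers float64):

func min(a, b int) int {
   if a < b {
      return a
   }
   return b
}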


Polymorphism

Part 28: Polymorphism - OOP in Go:
A variable of type interface can hold any value which implements the interface. This property of interfaces is used to achieve polymorphism in Go.
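
A small sketch of that idea: a single []Shape slice holds different concrete types, and the call s.Area() dispatches to whichever implementation the value has (names are illustrative):

type Shape interface {
   Area() float64
}

type Rect struct{ W, H float64 }
type Circle struct{ R float64 }

func (r Rect) Area() float64   { return r.W * r.H }
func (c Circle) Area() float64 { return math.Pi * c.R * c.R }

func totalArea(shapes []Shape) float64 {
   sum := 0.0
   for _, s := range shapes {
      sum += s.Area() // polymorphic call
   }
   return sum
}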

---

For Loop


for i := 0; i < 10; i++ {
   fmt.Println("i =", i)
}



Error Creation and Handling


3 simple ways to create an error

If a function returns the type error, it can return nil when there is no error (see the sketch below).
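
A minimal sketch (the validate function is just an illustration):

func validate(name string) error {
   if name == "" {
      return errors.New("name must not be empty")
   }
   return nil // no error
}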


---

Working with File System

Writing to files


Go by Example: Writing Files

Without buffering:


f, err := os.Create("/tmp/test.txt")
if err != nil {
   panic(err)
}
defer f.Close()
f.WriteString("some text!\n")


WriteString() under the hood calls system function write():

r0, _, e1 := Syscall(SYS_WRITE, uintptr(fd), uintptr(_p0), uintptr(len(p)))

write() belongs to the group of Unix unbuffered file I/O functions (together with open, close, read, lseek...).

From FILE I/O:
They are part of the POSIX standard for UNIX programming, not part of ANSI C. Unlike the standard I/O routines provided by ANSI C (such as fscanf and fprintf which store the data they read in buffers) these functions are unbuffered I/O. They invoke a system call in the kernel, and will be called by the standard ANSI C function calls in the UNIX environment.

With buffering:


f, err := os.Create("/tmp/test.txt")
if err != nil {
   panic(err)
}
defer f.Close()
w := bufio.NewWriter(f)
w.WriteString("some text!\n")
w.Flush()


Encryption




Concurrency & Parallelism


How can my Go program keep all the CPU cores busy?
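
Two runtime calls worth knowing in this context: runtime.NumCPU reports the number of logical CPUs, and runtime.GOMAXPROCS controls how many OS threads may execute Go code simultaneously (since Go 1.5 it defaults to NumCPU, so usually nothing needs to be tuned). A quick sketch:

fmt.Println("logical CPUs:", runtime.NumCPU())
fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // passing 0 only queries the current value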


---


Packages

If we have:

import "mypackage"

then we need to use the package name when calling any of its exported members:

mypackage.Foo()

It is possible to assign an alias name to the imported package. Example:

import b64 "encoding/base64"
...
es := b64.StdEncoding.EncodeToString([]byte(data))


Internal Packages


https://notes.shichao.io/gopl/ch10/#internal-packages

How to import local packages in go?

In GOPATH mode, Go resolves imports relative to $GOPATH/src (by default $HOME/go/src).


encoding/base64


Go by Example: Base64 Encoding
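
A minimal round-trip sketch with the standard encoder:

data := "some data"
encoded := base64.StdEncoding.EncodeToString([]byte(data))
decoded, err := base64.StdEncoding.DecodeString(encoded)
if err != nil {
   // handle error
}
fmt.Println(encoded, string(decoded))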



encoding/json


Parsing JSON in Golang

The Species attribute in our Bird struct will map to the species, or Species or sPeCiEs JSON property.

Go by Example: JSON

When defining a structure into which we want to unmarshal some JSON, we have to declare the struct members as exported (their names have to start with a capital letter). This is required so that another package (encoding/json) can use reflection to access these fields.
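
A sketch using the Bird example from the article above; the json tags are optional here because matching is case-insensitive, but they make the mapping explicit:

type Bird struct {
   Species     string `json:"species"`
   Description string `json:"description"`
}

var b Bird
err := json.Unmarshal([]byte(`{"species":"pigeon","description":"likes crumbs"}`), &b)
if err != nil {
   // handle error
}
fmt.Printf("%+v\n", b)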

JSON and dealing with unexported fields

The json library does not have the power to view fields using reflect unless they are exported. A package can only view the unexported fields of types within its own package.


It's a fundamental limitation of the reflect package that a package can't set unexported fields in types
defined in other packages (without using unsafe).

---
How to unmarshal an escaped JSON string in Go?
strconv.Unquote()
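
A quick sketch: strconv.Unquote turns a quoted, escaped string back into its raw form, which can then be passed to json.Unmarshal:

escaped := `"{\"name\":\"gopher\"}"`   // a JSON document wrapped in an escaped, quoted string
raw, err := strconv.Unquote(escaped)   // raw == `{"name":"gopher"}`
if err != nil {
   // handle error
}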
---

Converting Go struct to JSON

---

errors


Go by Example: Errors

If a function returns the type error and we want to return an error object with a custom message:

return errors.New("Argument is nil: file")

To create an error with formatted message use fmt.Errorf:

err := fmt.Errorf("user %q (id %d) not found", name, id)

---
Example when type assertion is used to convert from error to some specific error type:
What is err.(*os.PathError) in Go?

Example:

fi, err := file.Stat()
if err != nil {
   return 0, fmt.Errorf("Failed to access file %s", (err.(*os.PathError)).Path)
}

---


fmt


fmt.Println("Table names:", tableNames)

A SPACE character is automatically inserted between these two arguments (and a newline is appended).

Golang - How to print the values of Arrays?

fmt.Printf("%v", projects)


fmt.Printf


%+v - prints struct’s field names (if value is a struct)
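
A quick comparison of the two verbs (point is an illustrative struct):

type point struct{ X, Y int }
p := point{1, 2}
fmt.Printf("%v\n", p)  // {1 2}
fmt.Printf("%+v\n", p) // {X:1 Y:2}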

---

io (input/output)

io


WARNING: Second io.Copy will not copy any data from file f to sha1 hasher! This is because reader (passed as second argument to io.Copy) advances to EOF during the first io.Copy call and stays there.

f, err := os.Open(path)
if err != nil {
   // handle error
}
defer f.Close()

sha256Hash := sha256.New()
io.Copy(sha256Hash, f)
...
sha1Hash := sha1.New()
io.Copy(sha1Hash, f)

Fix: before calling io.Copy for the second time, we need to move reader pointer to the beginning of the file:

f.Seek(0, os.SEEK_SET)

or, in the latest Go version:

f.Seek(0, io.SeekStart)

---

log 


Why should I use log.Println instead of fmt.Println?

log.Println writes to standard error by default. To change it to stdout, use:

log.SetOutput(os.Stdout)

To set it back to stderr:

log.SetOutput(os.Stderr)


---

os (Operating System)


os.Create() truncates file if file already exists.

Truncate a file - truncate a file to a specific length; shrink or extend the size of each FILE to the specified size.
How to empty ("truncate") a file on linux that already exists and is protected in some way?
Word for "truncate to size zero"
In general, "truncating a file" means truncating it to a specific length, which may or may not be zero (the size defaults to zero if not specified), so colloquially it often means eliminating all the content.

To get current working directory use:
os.Getwd()

How to create nested directories using Mkdir in Golang?

To create a single directory:

os.Mkdir

To create a folder path:

os.MkdirAll(folderPath, os.ModePerm)

Creating a relative symbolic link through the os package
os.Symlink(target, symlink)

Golang : Create and resolve(read) symbolic links

os.IsExist(err) vs os.IsNotExist(err)
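
A typical existence check combines os.Stat with os.IsNotExist (the path is illustrative):

if _, err := os.Stat("/tmp/test.txt"); os.IsNotExist(err) {
   // the file does not exist
}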


---

path


Rather than using + to create paths, use:

import "path/filepath"
path := filepath.Join(someRootPath, someSubPath)

The above uses the correct separators automatically on each platform for you.


sync


WaitGroup

  • a structure which monitors current number of goroutines being executed
  • has 3 functions defined on it:
    • Add(delta int) - add a number of goroutines that are about to be created
    • Done() - decrements counter by 1
    • Wait()
      • blocks current thread until counter drops to 0 (all goroutines have returned)
      • introduces synchronisation points between goroutines
The application will panic if the counter goes below 0 at any time:

panic: sync: negative WaitGroup counter

Add(1) has to be executed before Done(). The call to Add should be made before launching the goroutine that contains the call to Done: if Add is called only after the goroutine is launched, the goroutine may complete and call Done before Add executes, which drives the WaitGroup counter negative and generates a runtime panic.

Add(1) should not be executed from the goroutine itself but before scheduling it. (?) [Is it safe to call WaitGroup.Add concurrently from multiple go routines?]

defer wg.Done() should be the first line in the goroutine (see the sketch below). 

Very basic concurrency for beginners in Go
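
Putting the rules above together, a minimal sketch:

var wg sync.WaitGroup

for i := 0; i < 3; i++ {
   wg.Add(1) // before launching the goroutine
   go func(n int) {
      defer wg.Done() // first line in the goroutine
      fmt.Println("worker", n)
   }(i)
}

wg.Wait() // blocks until the counter drops to 0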



Unit Testing

Package testing
Proper package naming for testing with the Go language
How to write benchmarks in Go
Go Benchmarks

go test command executes any function which is in this form:

func TestXxx(*testing.T)

go test -bench executes benchmarks (and also other tests unless -run flag specifies what has to be run) which must be in form:

func BenchmarkXxx(*testing.B)

To suppress running other (regular) unit tests, set the regex value of the -run parameter to something that does not match the name of any unit test:

$ go test -bench=. -run=NONE 

To show how many allocation operations were performed per invocation and how much memory was allocated during each invocation, add the -benchmem flag:

$ go test -bench=. -benchmem -run=NONE

or add to the beginning of the benchmark function:

b.ReportAllocs()
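
For reference, a benchmark function sketch (copyList is a hypothetical function under test; the pprof section below happens to profile a function of the same name):

func BenchmarkCopyList(b *testing.B) {
   in := []string{"a", "b", "c"}
   b.ReportAllocs()
   for i := 0; i < b.N; i++ {
      copyList(in)
   }
}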

Test memory consumption
How can i limit/gate memory usage in go unit test
Practical Go Benchmarks

Example of -benchmem:
How much memory do golang maps reserve?

---

Profiling

dotGo 2019 - Daniel Martí - Optimizing Go code without a blindfold presentation shows how to use the following tools:

  • benchmark tests (-bench, -cpuprofile)
  • pprof
  • benchcmp
  • benchstat (from perf suite)
  • perflock
  • go compiler flags ( -gcflags)
  • GOSSAFUNC


You can try running benchmark tests already provided within the go source code available on your dev machine upon installing go:

/var/lib/go/src/encoding/json$ go test -bench=CodeDecoder
goos: linux
goarch: amd64
pkg: encoding/json
BenchmarkCodeDecoder-4         200   10837270 ns/op  179.06 MB/s
PASS
ok   encoding/json 3.332s

Results vary on each run. Variance quantifies how far the measurements are from the mean value on average.
Variance Definition

Better benchcmp: https://godoc.org/golang.org/x/perf/cmd/benchstat
It is part of perf - Go performance measurement, storage, and analysis tools.
To install perf:

$ go get -u golang.org/x/perf/cmd/...

benchstat should now be available in terminal:

$ benchstat --help
usage: benchstat [options] old.txt [new.txt] [more.txt ...]
options:
  -alpha α
    consider change significant if p < α (default 0.05)
  -csv
    print results in CSV form
  -delta-test test
    significance test to apply to delta: utest, ttest, or none (default "utest")
  -geomean
    print the geometric mean of each file
  -html
    print results as an HTML table
  -norange
    suppress range columns (CSV only)
  -sort order
    sort by order: [-]delta, [-]name, none (default "none")
  -split labels
    split benchmarks by labels (default "pkg,goos,goarch")

This is the output of benchmark test:

/var/lib/go/src/encoding/json$ go test -bench=CodeDecoder -count=8
goos: linux
goarch: amd64
pkg: encoding/json
BenchmarkCodeDecoder-4         100   12851072 ns/op 151.00 MB/s
BenchmarkCodeDecoder-4         100   12327673 ns/op 157.41 MB/s
BenchmarkCodeDecoder-4         100   12171500 ns/op 159.43 MB/s
BenchmarkCodeDecoder-4         100   12609445 ns/op 153.89 MB/s
BenchmarkCodeDecoder-4         100   12769039 ns/op 151.97 MB/s
BenchmarkCodeDecoder-4         100   12903776 ns/op 150.38 MB/s
BenchmarkCodeDecoder-4         100   12563360 ns/op 154.45 MB/s
BenchmarkCodeDecoder-4         100   12523548 ns/op 154.95 MB/s
PASS
ok  encoding/json 20.097s

To use benchstat we need to run the benchmark twice and save the results in files on disk:

$ cd ~
$ mkdir tmp
$ cd tmp
$ go test -bench=CodeDecoder /var/lib/go/src/encoding/json -count=8 > old.txt
$ go test -bench=CodeDecoder /var/lib/go/src/encoding/json -count=8 > new.txt
$ benchstat old.txt
name           time/op
CodeDecoder-4   11.1ms ± 4%

name           speed
CodeDecoder-4  175MB/s ± 4%

The variance is 4%, which is still quite high if we want to detect an optimization in e.g. the 5-10% range. 

Higher variance can be the consequence of current/random CPU load spikes caused by various apps running on the computer (Electron-based apps and GIFs are resource hungry and can throttle the CPU). The first thing we have to make sure of before running benchmarks is that the test computer is idle, because benchmarks demand 100% of the CPU. If we close browsers, Slack, etc., variance can go down to 1%.

Nevertheless, if we increase the number of benchmark runs:

$ go test -bench=CodeDecoder /var/lib/go/src/encoding/json -count=20

...CPU burns, laptop throttles, fans turn on and speed deteriorates. CPU speed (frequency) goes down as it can't keep up with overheating due to limited fan speed.

The solution for this is perflock. To install it and run the daemon:

$ go get github.com/aclements/perflock/cmd/perflock
$ cd $GOPATH/src/github.com/aclements/perflock
$ ./install.bash
Installing /home/user/dev/go/bin/perflock to /usr/bin
[sudo] password for user: 
Installing init script for Upstart
Installing service for systemd
Starting perflock daemon (using systemd)

Let's explore perflock CLI:

$ perflock --help
Usage of perflock:

  perflock [flags] command...
  perflock -list
  perflock -daemon

  -daemon
    start perflock daemon
  -governor percent
    set CPU frequency to percent between the min and max
    while running command, or "none" for no adjustment (default 90%)
  -list
    print current and pending commands
  -shared
    acquire lock in shared mode (default: exclusive mode)
  -socket path
    connect to socket path (default "/var/run/perflock.socket")

---
$ perflock -governor=70% go test -bench=CodeDecoder /var/lib/go/src/encoding/json -count=8
goos: linux
goarch: amd64
pkg: encoding/json
BenchmarkCodeDecoder-4         100   13234601 ns/op 146.62 MB/s
BenchmarkCodeDecoder-4         100   13389018 ns/op 144.93 MB/s
BenchmarkCodeDecoder-4         100   13170943 ns/op 147.33 MB/s
BenchmarkCodeDecoder-4         100   13384738 ns/op 144.98 MB/s
BenchmarkCodeDecoder-4         100   13105316 ns/op 148.07 MB/s
BenchmarkCodeDecoder-4         100   13210405 ns/op 146.89 MB/s
BenchmarkCodeDecoder-4         100   13094757 ns/op 148.19 MB/s
BenchmarkCodeDecoder-4         100   13965663 ns/op 138.95 MB/s
PASS
ok  encoding/json 21.430s

$ perflock -governor=70% go test -bench=CodeDecoder /var/lib/go/src/encoding/json -count=8 > old.txt

$ perflock -governor=70% go test -bench=CodeDecoder /var/lib/go/src/encoding/json -count=8 > new.txt

$ benchstat old.txt new.txt
name           old time/op   new time/op   delta
CodeDecoder-4   13.4ms ± 5%   13.2ms ± 5%   ~     (p=0.645 n=8+8)

name           old speed     new speed     delta
CodeDecoder-4  145MB/s ± 5%  147MB/s ± 5%   ~     (p=0.645 n=8+8)

Larger variance => need larger N

Tools for CPU load: pprof, perflock
benchstat works for all benchmarks e.g. for measuring network latency or disk I/O
benchstat to compare statistics
perflock to avoid noise.

Compiler options

To get a full list of compiler options use:

$ go tool compile -help
usage: compile [options] file.go...
  -% debug non-static initializers
  -+ compiling runtime
  -B disable bounds checking
  -C disable printing of columns in error messages
  -D path
    set relative path for local imports
  -E debug symbol export
  -I directory
    add directory to import search path
  -K debug missing line numbers
  -L show full file names in error messages
  -N disable optimizations
  -S print assembly listing
  -V print version and exit
  -W debug parse tree after type checking
  -allabis
    generate ABI wrappers for all symbols (for bootstrap)
  -asmhdr file
    write assembly header to file
  -bench file
    append benchmark times to file
  -blockprofile file
    write block profile to file
  -buildid id
    record id as the build id in the export metadata
  -c int
    concurrency during compilation, 1 means no concurrency (default 1)
  -complete
    compiling complete package (no C or assembly)
  -cpuprofile file
    write cpu profile to file
  -d list
    print debug information about items in list; try -d help
  -dwarf
    generate DWARF symbols (default true)
  -dwarflocationlists
    add location lists to DWARF in optimized mode (default true)
  -dynlink
    support references to Go symbols defined in other shared libraries
  -e no limit on number of errors reported
  -gendwarfinl int
    generate DWARF inline info records (default 2)
  -goversion string
    required version of the runtime
  -h halt on error
  -importcfg file
    read import configuration from file
  -importmap definition
    add definition of the form source=actual to import map
  -installsuffix suffix
    set pkg directory suffix
  -j debug runtime-initialized variables
  -l disable inlining
  -lang string
    release to compile for
  -linkobj file
    write linker-specific object to file
  -live
    debug liveness analysis

  -m print optimization decisions
(e.g. escape analysis decisions)

  -memprofile file
    write memory profile to file
  -memprofilerate rate
    set runtime.MemProfileRate to rate
  -msan
    build code compatible with C/C++ memory sanitizer
  -mutexprofile file
    write mutex profile to file
  -nolocalimports
    reject local (relative) imports
  -o file
    write output to file
  -p path
    set expected package import path
  -pack
    write to file.a instead of file.o
  -r debug generated wrappers
  -race
    enable race detector
  -s warn about composite literals that can be simplified
  -shared
    generate code that can be linked into a shared library
  -std
    compiling standard library
  -symabis file
    read symbol ABIs from file
  -traceprofile file
    write an execution trace to file
  -trimpath prefix
    remove prefix from recorded source file paths
  -v increase debug verbosity
  -w debug type checking
  -wb
    enable write barrier (default true)
---

Repeat the -m flag in order to increase the output verbosity. 
Use either of these two forms of passing flag values:

~/tmp$ go build -gcflags '-m -m' /var/lib/go/src/io

or

~/tmp$ go build -gcflags='-m -m' /var/lib/go/src/io
# io
/var/lib/go/src/io/pipe.go:73:27: (*pipe).CloseRead.func1 capturing by value: p (addr=false assign=false width=8)
/var/lib/go/src/io/pipe.go:112:27: (*pipe).CloseWrite.func1 capturing by value: p (addr=false assign=false width=8)
/var/lib/go/src/io/io.go:289:6: cannot inline WriteString: function too complex: cost 136 exceeds budget 80
/var/lib/go/src/io/io.go:304:6: cannot inline ReadAtLeast: unhandled op FOR
/var/lib/go/src/io/io.go:328:6: can inline ReadFull as: func(Reader, []byte) (int, error) { return ReadAtLeast(r, buf, len(buf)) }
/var/lib/go/src/io/io.go:430:6: can inline LimitReader as: func(Reader, int64) Reader { return &LimitedReader literal }
/var/lib/go/src/io/io.go:380:6: cannot inline copyBuffer: unhandled op FOR
/var/lib/go/src/io/io.go:363:6: can inline Copy as: func(Writer, Reader) (int64, error) { return copyBuffer(dst, src, nil) }
/var/lib/go/src/io/io.go:339:6: cannot inline CopyN: function too complex: cost 100 exceeds budget 80
/var/lib/go/src/io/io.go:340:38: inlining call to LimitReader func(Reader, int64) Reader { return &LimitedReader literal }
...

An example of when to use -m, which also shows that -gcflags can be passed to go run (not just go build): Why does a pointer to a local variable escape to the heap?

$ go run -gcflags='-m -m' esc.go


bce = bounds check elimination; a bounds check is a runtime check that you're accessing a slice within its bounds, and the compiler eliminates the checks it can prove are unnecessary

~/tmp$ go build -gcflags=-d=ssa/check_bce/debug=1 /var/lib/go/src/io
# io
/var/lib/go/src/io/io.go:310:23: Found IsSliceInBounds
/var/lib/go/src/io/io.go:404:27: Found IsSliceInBounds
/var/lib/go/src/io/io.go:537:27: Found IsSliceInBounds
/var/lib/go/src/io/multi.go:30:18: Found IsInBounds
/var/lib/go/src/io/multi.go:31:27: Found IsSliceInBounds
/var/lib/go/src/io/multi.go:106:15: Found IsSliceInBounds
/var/lib/go/src/io/pipe.go:90:9: Found IsSliceInBounds

~/tmp$ go build -gcflags=-d=ssa/prove/debug=1 /var/lib/go/src/io
# io
/var/lib/go/src/io/io.go:404:27: Proved Geq64
/var/lib/go/src/io/io.go:446:8: Proved IsSliceInBounds
/var/lib/go/src/io/io.go:446:8: Proved Geq64
/var/lib/go/src/io/io.go:473:8: Proved IsSliceInBounds
/var/lib/go/src/io/io.go:507:8: Proved IsSliceInBounds
/var/lib/go/src/io/io.go:537:27: Proved Geq64
/var/lib/go/src/io/multi.go:21:27: Proved IsInBounds
/var/lib/go/src/io/multi.go:26:23: Proved IsInBounds
/var/lib/go/src/io/multi.go:61:10: Induction variable: limits [0,?), increment 1
/var/lib/go/src/io/multi.go:77:3: Induction variable: limits [0,?), increment 1
/var/lib/go/src/io/multi.go:105:17: Induction variable: limits [0,?), increment 1

~/tmp$ go build -gcflags=-d=ssa/prove/debug=2 /var/lib/go/src/io
# io
/var/lib/go/src/io/io.go:404:27: Proved Geq64 (v147)
/var/lib/go/src/io/io.go:446:8: Proved IsSliceInBounds (v46)
/var/lib/go/src/io/io.go:446:8: Proved Geq64 (v42)
/var/lib/go/src/io/io.go:473:8: Proved IsSliceInBounds (v49)
/var/lib/go/src/io/io.go:507:8: Proved IsSliceInBounds (v60)
/var/lib/go/src/io/io.go:537:27: Proved Geq64 (v35)
/var/lib/go/src/io/multi.go:21:27: Proved IsInBounds (v34)
/var/lib/go/src/io/multi.go:26:23: Proved IsInBounds (v72)
/var/lib/go/src/io/multi.go:59:14: x+d >= w; x:v24 b6 delta:1 w:0 d:signed
/var/lib/go/src/io/multi.go:61:10: Induction variable: limits [0,v15), increment 1 (v24)
/var/lib/go/src/io/multi.go:76:14: x+d >= w; x:v25 b6 delta:1 w:0 d:signed
/var/lib/go/src/io/multi.go:77:3: Induction variable: limits [0,v16), increment 1 (v25)
/var/lib/go/src/io/multi.go:104:14: x+d >= w; x:v30 b6 delta:1 w:0 d:signed
/var/lib/go/src/io/multi.go:105:17: Induction variable: limits [0,v8), increment 1 (v30)

The same benchmark can give different results after a code change simply because of a different compiled code layout (e.g. code alignment...).

To see how the compiler compiles a specific function, use:

$ GOSSAFUNC=pattern go build

Example:

~/tmp$ echo $'package p\n\nfunc HelloWorld(){\n\tprintln(\"Hello, world!\")\n}\n' > p.go

$ GOSSAFUNC=HelloWorld go build
# _/home/bojan/tmp
dumped SSA to ./ssa.html

$ chromium ./ssa.html 
---




Profile your golang benchmark with pprof
High Performance Go Workshop
An Overview of Go's Tooling


CPU Profiling


With the current version of Go it is possible to perform CPU profiling only for one package at a time.

$ go test \
-run=NONE \
-bench=. \
-cpuprofile=cpu.out \
"./internal/pkg/package_name/"

Example:

$ go test \
-run=NONE \
-bench=. \
-cpuprofile=cpu.out \
"./internal/pkg/datatypesdemo/"

This will create file cpu.out in the current/root directory.

To open cpu.out and enter interactive mode in pprof tool:

$ go tool pprof cpu.out

To check how much time it takes to execute the individual lines of a function that was called in the benchmarks:

(pprof) list <function_name>

Example:

$ go tool pprof cpu.out
File: datatypesdemo.test
Type: cpu
Time: Jul 5, 2019 at 4:50pm (BST)
Duration: 1.70s, Total samples = 1.94s (113.98%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) list copyList
Total: 1.94s
ROUTINE ======================== github.com/BojanKomazec/go-demo/internal/pkg/datatypesdemo.copyList in /home/bojan/dev/go/src/github.com/BojanKomazec/go-demo/internal/pkg/datatypesdemo/datatypesdemo.go
     210ms      1.31s (flat, cum) 67.53% of Total
         .          .     61:func copyList(in []string) []string {
         .          .     62:   var out []string
      80ms       80ms     63:   for _, s := range in {
     130ms      1.23s     64:           out = append(out, s)
         .          .     65:   }
         .          .     66:   return out
         .          .     67:}
(pprof)

To exit, type exit:

(pprof) exit

The benchcmp tool automatically calculates the difference (in %) in performance before and after code changes:

$ cd ../go/src/github.com/BojanKomazec/go-demo
$ go test -run=NONE -bench=. ./... > old.txt
$ go test -run=NONE -bench=. ./... > new.txt
$ benchcmp old.txt new.txt
benchmark                       old ns/op     new ns/op     delta
BenchmarkCopyList1_100x16-4     1400          1404          +0.29%
BenchmarkCopyList2_100x16-4     467           468           +0.21%

benchmark                       old allocs     new allocs     delta
BenchmarkCopyList1_100x16-4     8              8              +0.00%
BenchmarkCopyList2_100x16-4     1              1              +0.00%

benchmark                       old bytes     new bytes     delta
BenchmarkCopyList1_100x16-4     4080          4080          +0.00%
BenchmarkCopyList2_100x16-4     1792          1792          +0.00%

Go CLI



$ go
Go is a tool for managing Go source code.

Usage:

        go <command> [arguments]

The commands are:

        bug         start a bug report
        build       compile packages and dependencies
        clean       remove object files and cached files
        doc         show documentation for package or symbol
        env         print Go environment information
        fix         update packages to use new APIs
        fmt         gofmt (reformat) package sources
        generate    generate Go files by processing source
        get         download and install packages and dependencies
        install     compile and install packages and dependencies
        list        list packages or modules
        mod         module maintenance
        run         compile and run Go program
        test        test packages
        tool        run specified go tool
        version     print Go version
        vet         report likely mistakes in packages

Use "go help <command>" for more information about a command.

Additional help topics:

        buildmode   build modes
        c           calling between Go and C
        cache       build and test caching
        environment environment variables
        filetype    file types
        go.mod      the go.mod file
        gopath      GOPATH environment variable
        gopath-get  legacy GOPATH go get
        goproxy     module proxy protocol
        importpath  import path syntax
        modules     modules, module versions, and more
        module-get  module-aware go get
        packages    package lists and patterns
        testflag    testing flags
        testfunc    testing functions

Use "go help <topic>" for more information about that topic.


go build


$ go help build
usage: go build [-o output] [-i] [build flags] [packages]

Build compiles the packages named by the import paths,
along with their dependencies, but it does not install the results.

If the arguments to build are a list of .go files, build treats
them as a list of source files specifying a single package.

When compiling a single main package, build writes
the resulting executable to an output file named after
the first source file ('go build ed.go rx.go' writes 'ed' or 'ed.exe') or the source code directory ('go build unix/sam' writes 'sam' or 'sam.exe'). The '.exe' suffix is added when writing a Windows executable.

When compiling multiple packages or a single non-main package,
build compiles the packages but discards the resulting object,
serving only as a check that the packages can be built.

When compiling packages, build ignores files that end in '_test.go'.

The -o flag, only allowed when compiling a single package,
forces build to write the resulting executable or object
to the named output file, instead of the default behavior described
in the last two paragraphs.

The -i flag installs the packages that are dependencies of the target.

The build flags are shared by the build, clean, get, install, list, run, and test commands:

        -a
                force rebuilding of packages that are already up-to-date.
        -n
                print the commands but do not run them.
        -p n
                the number of programs, such as build commands or
                test binaries, that can be run in parallel.
                The default is the number of CPUs available.
        -race
                enable data race detection.
                Supported only on linux/amd64, freebsd/amd64, darwin/amd64 and windows/amd64.
        -msan
                enable interoperation with memory sanitizer.
                Supported only on linux/amd64, linux/arm64
                and only with Clang/LLVM as the host C compiler.
        -v
                print the names of packages as they are compiled.
        -work
                print the name of the temporary work directory and
                do not delete it when exiting.
        -x
                print the commands.

        -asmflags '[pattern=]arg list'
                arguments to pass on each go tool asm invocation.
        -buildmode mode
                build mode to use. See 'go help buildmode' for more.
        -compiler name
                name of compiler to use, as in runtime.Compiler (gccgo or gc).
        -gccgoflags '[pattern=]arg list'
                arguments to pass on each gccgo compiler/linker invocation.
        -gcflags '[pattern=]arg list'
                arguments to pass on each go tool compile invocation.
        -installsuffix suffix
                a suffix to use in the name of the package installation directory,
                in order to keep output separate from default builds.
                If using the -race flag, the install suffix is automatically set to race
                or, if set explicitly, has _race appended to it. Likewise for the -msan
                flag. Using a -buildmode option that requires non-default compile flags
                has a similar effect.
        -ldflags '[pattern=]arg list'
                arguments to pass on each go tool link invocation.
        -linkshared
                link against shared libraries previously created with
                -buildmode=shared.
        -mod mode
                module download mode to use: readonly or vendor.
                See 'go help modules' for more.
        -pkgdir dir
                install and load all packages from dir instead of the usual locations.
                For example, when building with a non-standard configuration,
                use -pkgdir to keep generated packages in a separate location.
        -tags 'tag list'
                a space-separated list of build tags to consider satisfied during the
                build. For more information about build tags, see the description of
                build constraints in the documentation for the go/build package.
        -toolexec 'cmd args'
                a program to use to invoke toolchain programs like vet and asm.
                For example, instead of running asm, the go command will run
                'cmd args /path/to/asm <arguments for asm>'.

The -asmflags, -gccgoflags, -gcflags, and -ldflags flags accept a
space-separated list of arguments to pass to an underlying tool
during the build. To embed spaces in an element in the list, surround it with either single or double quotes. The argument list may be receded by a package pattern and an equal sign, which restricts the use of that argument list to the building of packages matching that pattern (see 'go help packages' for a description of package patterns). Without a pattern, the argument list applies only to the packages named on the command line. The flags may be repeated
with different patterns in order to specify different arguments for
different sets of packages. If a package matches patterns given in
multiple flags, the latest match on the command line wins.
For example, 'go build -gcflags=-S fmt' prints the disassembly
only for package fmt, while 'go build -gcflags=all=-S fmt'
prints the disassembly for fmt and all its dependencies.

For more about specifying packages, see 'go help packages'.
For more about where packages and binaries are installed,
run 'go help gopath'.
For more about calling between Go and C/C++, run 'go help c'.

Note: Build adheres to certain conventions such as those described
by 'go help gopath'. Not all projects can follow these conventions,
however. Installations that have their own conventions or that use
a separate software build system may choose to use lower-level
invocations such as 'go tool compile' and 'go tool link' to avoid
some of the overheads and design decisions of the build tool.

See also: go install, go get, go clean.

Note that the list of options goes BEFORE the list of source code files.

$ go build -o build/wiki wiki.go 

To keep the flag package simple, the first non-flag is treated as the first command line argument, regardless of what might come after, so all flags must come before all regular arguments.

The go tool uses the flag package and so is subject to the same limitations for its built-in subcommands. Therefore the "output" flag must be set before the arguments (which contain the input).

example:
go build foo -o bar -> flags: {} args: [foo, -o, bar]
go build -o bar foo -> flags: {o: bar} args: [foo] 

[(How to) Give a name to the binary using go build]

go get



$ go help get
usage: go get [-d] [-f] [-t] [-u] [-v] [-fix] [-insecure] [build flags] [packages]

Get downloads the packages named by the import paths, along with their dependencies. It then installs the named packages, like 'go install'.

The -d flag instructs get to stop after downloading the packages; that is, it instructs get not to install the packages.

The -f flag, valid only when -u is set, forces get -u not to verify that each package has been checked out from the source control repository implied by its import path. This can be useful if the source is a local fork of the original.

The -fix flag instructs get to run the fix tool on the downloaded packages before resolving dependencies or building the code.

The -insecure flag permits fetching from repositories and resolving
custom domains using insecure schemes such as HTTP. Use with caution.

The -t flag instructs get to also download the packages required to build the tests for the specified packages.

The -u flag instructs get to use the network to update the named packages and their dependencies. By default, get uses the network to check out missing packages but does not use it to look for updates to existing packages.

The -v flag enables verbose progress and debug output.

Get also accepts build flags to control the installation. See 'go help build'.

When checking out a new package, get creates the target directory
GOPATH/src/<import-path>. If the GOPATH contains multiple entries,
get uses the first one. For more details see: 'go help gopath'.

When checking out or updating a package, get looks for a branch or tag that matches the locally installed version of Go. The most important rule is that if the local installation is running version "go1", get searches for a branch or tag named "go1". If no such version exists it retrieves the default branch of the package.

When go get checks out or updates a Git repository,
it also updates any git submodules referenced by the repository.

Get never checks out or updates code stored in vendor directories.

For more about specifying packages, see 'go help packages'.

For more about how 'go get' finds source code to download, see 'go help importpath'.

This text describes the behavior of get when using GOPATH
to manage source code and dependencies. If instead the go command is running in module-aware mode, the details of get's flags and effects change, as does 'go help get'. See 'go help modules' and 'go help module-get'.

See also: go build, go install, go clean.

Example:

$ go get -d -v ./...

What do three dots “./…” mean in Go command line invocations?

./ tells to start from the current folder, ... tells to go down recursively.

Example:

$ go get -u golang.org/x/perf/cmd/...

$ go get github.com/aclements/perflock/cmd/perflock


go env


This is the best way to check which $GOPATH and $GOROOT values are used by any local Go tool.
Mind that these values can be different for the current user and for root:

$ go env
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/my_user_name/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/my_user_name/dev/go"
GOPROXY=""
GORACE=""
GOROOT="/var/lib/go"
GOTMPDIR=""
GOTOOLDIR="/var/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build480098893=/tmp/go-build -gno-record-gcc-switches"

For root:

$ sudo go env
...
GOPATH="/home/my_user_name/go"
...



go install



$ go help install
usage: go install [-i] [build flags] [packages]

Install compiles and installs the packages named by the import paths.

The -i flag installs the dependencies of the named packages as well.

For more about the build flags, see 'go help build'.
For more about specifying packages, see 'go help packages'.

See also: go build, go get, go clean.

It:

  • places the executable file in $GOPATH/bin
  • caches all non-main packages (which app imports) in $GOPATH/pkg
    • Cache is used in the next compilation unless it changes in the meantime

Go naming convention

What are conventions for filenames in Go?

File names that begin with "." or "_" are ignored by the go tool
Files with the suffix _test.go are only compiled and run by the go test tool.
Files with os and architecture specific suffixes automatically follow those same constraints, e.g. name_linux.go will only build on linux, name_amd64.go will only build on amd64. This is the same as having a //+build amd64 line at the top of the file

filenames are generally all lowercase, both for consistency and for systems with case-insensitive filesystems

regular file names are lower case, short, and without any sort of underscore or space. Generally, file names follow the same convention as package names.

there is no convention but _ suffixes may have special semantics in the future so I recommend to avoid them

Naming a file dns_windows.go will cause it to be included only when building the package for Windows; similarly, math_386.s will be included only when building the package for 32-bit x86.

File name convention for compound words?

Source file names are usually kept short, so in your case I would tend to call it find.go in package weightedunion <--  the idea is to make better use of packages when you are tempted to name it long


What is the best naming convention of keeping source files in a programming language?

Go follows a convention where source files are all lower case, with underscores separating multiple words. Example:

https://github.com/kubernetes/client-go/tree/master/discovery/cached/disk


folder structure and filename conventions under clean architecture

Interfaces in Go have an -er suffix (io.Reader, fmt.Stringer, etc).

The filenames are not per se named after a single type as often the package contains several types (see your first question).

Mixed case filenames are cute until your filesystem does not distinguish cases. Keep it simple, lowercase only.

Style guideline for Go packages


Go code Documentation & Comments


Godoc: documenting Go code

To document a type, variable, constant, function, or even a package, write a regular comment directly preceding its declaration, with no intervening blank line. Godoc will then present that comment as text alongside the item it documents.

A comment should be a complete sentence that begins with the name of the element it describes.
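
A short sketch of such comments (package and function names are illustrative):

// Package pinger provides connectivity checks for remote endpoints.
package pinger

// Ping reports whether the endpoint at ip answers an echo request.
func Ping(ip string) error {
   return nil // implementation omitted
}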

The "note marker" needs to follow this syntax [source]:

MARKER(uid): note body

Then godoc -notes="MARKER" includes this note into the doc.

If we have in the code:

// TODO(id_manager): Add id range checking... 

And then run godoc comments parser as:


godoc -notes="TODO" .

We'll see in the output:

PACKAGE DOCUMENTATION

package my_package
   ...
TODOS

   Add id range checking... 


Installing godoc 


On Ubuntu:

$ sudo apt install golang-golang-x-tools

Difference between godoc and go doc

godoc:

user@host:~/.../go/src/github.com/user/app$ godoc ./internal/pkg/pgclient/ 
PACKAGE DOCUMENTATION

package pgclient
    import "./internal/pkg/pgclient/"


FUNCTIONS

func New(params dbclient.ConnParams) (dbclient.DbClient, error)
    New function creates an instance of PostgreSQL client

go doc:

user@host:~/.../go/src/github.com/user/app$ go doc ./internal/pkg/pgclient/ 
package pgclient // import "github.com/user/app/internal/pkg/pgclient"

func New(params dbclient.ConnParams) (dbclient.DbClient, error)

---

Go & Docker


Deploying a Go Application in a Docker Container

Go & PostgreSQL


Go comes with the built-in database/sql package.
The standard PostgreSQL DB driver for Go is lib/pq.

package pq

How to use PostgreSQL

Querying for a single record using Go's database/sql package

QueryRow() is a method provided by the DB type and is used to execute an SQL query that is expected to return a single row.
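
A sketch, assuming db is an open *sql.DB (opened with the lib/pq driver) and a hypothetical users table:

var name string
err := db.QueryRow("SELECT name FROM users WHERE id = $1", 1).Scan(&name)
switch {
case err == sql.ErrNoRows:
   // no matching row
case err != nil:
   // handle other errors
default:
   fmt.Println("name:", name)
}

QueryRow always returns a non-nil *sql.Row; errors are deferred until Scan is called.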


Useful packages


https://github.com/go-ozzo/ozzo-validation

Resources:


"Go in Action" by William Kennedy
Effective Go
#Go @ siongui.github.io
GoLang Tutorials
Common Mistakes
I Spy Code - Blog - golang (Programming puzzles)
Dot Net Pearls - Go