Thursday 15 August 2019

Introduction to TeamCity

Project 

Each project is identified by its name and unique ID:




General Settings


The project dashboard contains the General Settings menu in the upper left corner. It looks like this:




VCS Roots


TeamCity builds need to get the code that is to be built into a binary. That code is usually kept in a Version Control System (VCS). We can add a VCS to TeamCity via the VCS Roots view:


Clicking the Create VCS root button opens a new page where we can select the type of VCS. For example, if we choose Git, we'll get:



VCS root name can be the name of the repository, e.g. my-repo.

VCS root ID is automatically generated as you type the VCS root name.

Fetch URL can be in SSH form, e.g. git@git.example.com:project/my-repo.git.

When Uploaded key is selected (from a list which is automatically populated with the SSH keys added in the SSH Keys view), a field for entering the private key password appears dynamically:




When TeamCity has to check out a repository from the VCS, it needs to authenticate. Using SSH keys is the preferred way. We can create an SSH key pair on a dev machine, upload the private key to the TeamCity server and the public key to the VCS (e.g. GitHub). I wrote earlier about how to generate SSH key pairs on Ubuntu.
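For reference, generating such a pair with ssh-keygen might look like this (the file name, comment and empty passphrase below are placeholder choices for the sketch, not requirements):

```shell
# Remove any leftover keys so ssh-keygen does not prompt to overwrite.
rm -f ./teamcity_vcs_key ./teamcity_vcs_key.pub

# Generate a 4096-bit RSA key pair; -N "" means no passphrase (use one in practice).
# The file name and comment are placeholders.
ssh-keygen -t rsa -b 4096 -C "teamcity@example.com" -N "" -f ./teamcity_vcs_key

# ./teamcity_vcs_key     - private key: upload it in TeamCity's SSH Keys view
# ./teamcity_vcs_key.pub - public key: add it as a Deploy key in the VCS (e.g. GitHub)
```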

After adding the public SSH key to the list of Deploy keys for the given repo in the VCS, we can click the Test connection button and, if everything is OK, we'll see:


If you've forgotten to add the public SSH key to the repo in the VCS, you might get this error:




Now we need to click the Create button in order for this VCS root configuration to be saved.

Report Tabs


Parameters


Connections


Shared Resources


Meta-Runners

Meta-Runner is a generalized build step that can be used across different build configurations.

Meta-runners are created by selecting a build step that we want to reuse/generalize and then choosing Actions >> Extract meta-runner... in the upper right corner, which opens a new window where we can define the following meta-runner attributes:
  • Project (so all build configurations within that project can use it)
  • Name
  • ID
  • Description
Meta-runners are stored in XML format, which contains a list of the relevant parameters and the script which performs the meta-runner's action. Here is an example:

<?xml version="1.0" encoding="UTF-8"?>
<meta-runner name="7z_Extract_Archive">
  <description>Use 7z to extract an archive</description>
  <settings>
    <parameters>
      <param name="7z.input.archive.name" value="%7z.input.archive.name%" spec="text display='normal' validationMode='not_empty'" />
      <param name="7z.output.directory.name" value="%7z.output.directory.name%" spec="text display='normal' validationMode='not_empty'" />
      ...
    </parameters>
    <build-runners>
      <runner name="Use 7z to extract the archive" type="simpleRunner">
        <parameters>
          ...
          <param name="script.content" value="7z.exe e %7z.input.archive.name% -o%7z.output.directory.name%" />
          <param name="teamcity.step.mode" value="default" />
          <param name="use.custom.script" value="true" />
        </parameters>
      </runner>
    </build-runners>
    <requirements />
  </settings>
</meta-runner>

Meta-runners are used in a build configuration as custom runner types for build steps. When we want to add a new build step, we first need to choose its Runner type from a drop-down list. This list shows meta-runners from this project first, then those from its parent, and so on up to the Root project. If we choose 7z_Extract_Archive as the runner for some step, its parameters 7z.input.archive.name and 7z.output.directory.name will be automatically added to this build configuration and set to %7z.input.archive.name% and %7z.output.directory.name%. So where do we get these 7z.input.archive.name and 7z.output.directory.name values which are referenced via %param%?

Templates


Hand in hand with meta-runners go templates. Templates are build configurations which don't have any build steps but only define a set of parameters and their default values. In our case, we'll create a template named e.g. 7z_Extract_Archive_Policy which defines two parameters: 7z.input.archive.name and 7z.output.directory.name.

A build config which contains a step based on the 7z_Extract_Archive meta-runner should be attached to the 7z_Extract_Archive_Policy template, as this is the way for it to get all parameters required/used by 7z_Extract_Archive.

This explains why, in the 7z_Extract_Archive meta-runner, we set the parameter 7z.input.archive.name to the value %7z.input.archive.name% - this value is taken from the 7z.input.archive.name parameter inherited from 7z_Extract_Archive_Policy.

Meta-runners define WHAT has to be done and templates define HOW.

Q: How do we create a template which references a property belonging to some meta-runner or another template (policy), given that templates cannot inherit from other templates? The issue is that if we reference such a property, the build would use this one.

A: If a build config is based on two templates which share the same property, it will be given the property of the template with the higher rank order (the one that is listed first in the list of templates).
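This precedence can be sketched with a tiny shell function (a hypothetical illustration, not TeamCity code): two "templates" define the same parameter and the first-listed one wins.

```shell
# Hypothetical sketch of template rank order - not actual TeamCity code.
# Both templates define a value for the shared parameter 7z.input.archive.name:
template_A_value="from-template-A.7z"   # listed first -> higher rank
template_B_value="from-template-B.7z"   # listed second

resolve_param() {
  # Walk the templates in attachment order; the first non-empty value wins.
  for v in "$template_A_value" "$template_B_value"; do
    if [ -n "$v" ]; then
      echo "$v"
      return
    fi
  done
}

resolve_param   # prints: from-template-A.7z
```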

Maven Settings


Issue Trackers


Cloud Profiles


Clean-up Rules


Versioned Settings


Artifacts Storage


SonarQube Servers


NuGet Feed


SSH Keys


This is where we add private SSH keys for authenticating TeamCity with the VCS.


Clicking the Upload SSH Key button opens the following dialog:




Suggestions





Build Configuration

All paths in build configuration settings should be relative paths (relative to the build checkout directory). The checkout directory is the directory where TeamCity places all the sources (configured as VCS roots).


General Settings

Name (text field)


This is a custom name of this build configuration.

Build configuration ID (text field)


The default value is in the form: ProjectName_SubProjectName_BuildConfigurationName

Description (text field)


This is a custom description of this build configuration.

Build configuration type (combo box)


This can have one of these 3 values:
  • Regular
  • Composite (aggregating results)
  • Deployment

Build number format (text field)


Example:
%build.counter%

Build counter (text field)

Publish artifacts (combo box)


3 options are available:

  • Even if build fails
  • Only if build status is successful
  • Always, even if build stop command was issued

Artifact paths (text field)


The build artifacts are files that you want to publish as the results of your build. If the build generates its output in a folder named "output", you can just set "output" as your artifacts path.

Let's assume that some build step creates the directory output/content and this is the place where artifacts are stored. If the value of this field is set to:

output/content => content

...then upon a successful build we can see, in the build list view, an enabled Artifacts icon for this build; clicking it shows a folder named content which we can expand to see its contents.

Example:

Build step >> Working Directory: ./output/
Build step >> Custom script: mkdir -p ./dirA/ && echo "content1" > ./dirA/file1
Artifact paths: ./output/ => output_content

Upon build completion, artifacts are available at a URL like https://tc.example.com/viewLog.html?buildId=1123342&buildTypeId=MyApp_Test&tab=artifacts


We can omit => target and specify just the files/directories we want to be picked up as artifacts.

Example: to pick up all exe and ps1 files from the root directory we can set:

.\*.exe
.\*MyTool.ps1


Build options


Enable hanging builds detection

Allow triggering personal builds

Enable status widget

Limit the number of simultaneously running builds (0 — unlimited)

  • set it to 1 to prevent parallel executions completely


Build Step: Command Line


Working directory


Allows starting the build in a subdirectory of the checkout directory (use a relative path).
When not specified, the build is started in the checkout directory.
All relative paths in TeamCity are relative to the checkout directory.
If the specified directory doesn't exist, TeamCity will create it (there is no need to use mkdir).

Agent Requirements


How to allow only those agents whose name starts with some specific string?


Add a new requirement with the following settings:

Parameter Name: system.agent.name
Condition: starts with
Value: <string>


How to allow only those agents which are running Linux?


Parameter Name: docker.server.osType
Condition: equals
Value: linux


Dependencies


Snapshot Dependencies


Snapshot dependencies are used to create build chains. When it is part of a build chain, a build of this configuration will start only when all its dependencies have been built. If necessary, the dependencies will be triggered automatically. Build configurations linked by a snapshot dependency can optionally use revision synchronization to ensure they use the same snapshot of the sources.

Artifact Dependencies


An artifact dependency allows using artifacts produced by another build.

The Add new artifact dependency button opens a dialog where we can choose:

Depend on (string) - the name of the TeamCity build config which will be the artifacts source.

Get artifacts from:
  • Latest successful build
  • Latest pinned build
  • Latest finished build
  • Latest finished build with specified tag
  • Build from the same chain
  • Build with specified build number
Artifacts rules (string) - a pattern which defines which directory from the artifacts is copied to which directory in the local build. Provide here a newline-delimited set of rules in the form
[+:|-:]SourcePath[!ArchivePath][=>DestinationPath]. Example:

output => data/input/

output is a directory from the published artifacts of the previous build and data/input/ is a local path in the current build.

Dependencies can be temporarily disabled, which is useful when testing build configs.

TBC...


Version Control Settings


Updating sources: auto checkout (on agent)
[18:21:53]Will use agent side checkout
[18:21:53]VCS Root: My App 
[18:21:53]checkout rules: =>my-app; revision: 8259e79c8c472a31ddf041ffd6e99308905913c6
[18:21:53]Git version: 2.7.4.0
[18:21:53]Update checkout directory (/home/docker-agent/work/5094643590a7b75e/my-app)
[18:21:53]/usr/bin/git config core.sparseCheckout true
[18:21:53]/usr/bin/git config http.sslCAInfo
[18:21:53]/usr/bin/git show-ref
[18:21:53]/usr/bin/git ls-remote origin
[18:21:54]/usr/bin/git show-ref refs/remotes/origin/feature/dummy-feature
[18:21:54]/usr/bin/git log -n1 --pretty=format:%H%x20%s 8259e79c8c472a31ddf041ffd6e99308905913c6 --
[18:21:54]/usr/bin/git branch
[18:21:54]/usr/bin/git reset --hard 8259e79c8c472a31ddf041ffd6e99308905913c6
[18:21:54]/usr/bin/git branch --set-upstream-to=refs/remotes/origin/feature/dummy-feature
[18:21:54]Cleaning My App in /home/docker-agent/work/5094603590a7b75e/my-app the file set ALL_UNTRACKED
[18:21:54]/usr/bin/git clean -f -d -x
[18:21:54]Failed to perform checkout on agent: '/usr/bin/git clean -f -d -x' command failed.
exit code: 1
stderr: warning: failed to remove .jfrog/projects
[18:21:55]Error message is logged

Fix: Enable the following option:

Version Control Settings >>  Clean build >>  Delete all files in the checkout directory before the build

Writing Build Steps

Writing bash scripts

---
To print some predefined property use:

echo teamcity.agent.home.dir = %teamcity.agent.home.dir%
echo teamcity.agent.work.dir = %teamcity.agent.work.dir%
echo system.teamcity.build.checkoutDir = %system.teamcity.build.checkoutDir%

In the output log we'll have e.g.:

teamcity.agent.home.dir = /home/agent-docker-01

---
If we have a build step which uses the Command Line runner to run a Custom script, we can use variables in that script like this:

MY_VAR=test
echo MY_VAR value is: ${MY_VAR}

It is possible to concatenate a string variable with the string value of a build property:

SCHEMAS_DIR=output_schemas_%myapp.env%
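TeamCity replaces %...% references before the shell ever runs, so the script only sees a plain string. A local sketch of what the shell ends up executing ("staging" is an assumed stand-in value, not taken from the build config):

```shell
# Stand-in for the value TeamCity would substitute for %myapp.env%:
MYAPP_ENV=staging

# What the shell sees after the substitution:
SCHEMAS_DIR=output_schemas_${MYAPP_ENV}
echo "$SCHEMAS_DIR"   # prints: output_schemas_staging
```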

---

If we want to echo text which contains parentheses, we need to escape them:

echo Parentheses test: \(This text is between parentheses\)
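Quoting the string is an equivalent fix; both lines below print the same text:

```shell
# Escaping each parenthesis:
echo Parentheses test: \(This text is between parentheses\)

# Quoting the whole string:
echo "Parentheses test: (This text is between parentheses)"
```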
---

If we want to find the id and name of the current user and group, we can use the following in the bash script:

echo user:group \(id\) = $(id -u):$(id -g)
echo user:group \(name\) = $(id -un):$(id -gn)

Log output is like:

user:group (id) = 1008:1008
user:group (name) = docker-slave-73:docker-slave-73
---
How to ping some remote server:

echo Pinging example.com...
ping -v -c 4 example.com

---
How to get the current agent's public IP address?

dig TXT +short o-o.myaddr.l.google.com @ns1.google.com
---
TeamCity uses /bin/sh by default. On Ubuntu, /bin/sh is typically a symbolic link to /bin/dash, a Bourne-like shell that doesn't support arrays. That will give you the error:

Syntax error: "(" unexpected ...

e.g.
Syntax error: "(" unexpected (expecting "done")

To instruct TeamCity's Linux agent to use /bin/bash (and therefore support arrays in bash scripts), add a bash shebang #!/bin/bash at the beginning of the script.

To test whether this works, add a test Command Line step as the 1st step in your job and use this snippet for Custom script:

#!/bin/bash
array=(1 2 3 4 5)
echo ${array[*]}
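The difference can also be reproduced outside TeamCity (the second check assumes dash is installed, as on stock Ubuntu):

```shell
# bash supports arrays:
bash -c 'array=(1 2 3 4 5); echo ${array[*]}'   # prints: 1 2 3 4 5

# dash - what /bin/sh typically points to on Ubuntu - rejects the same snippet:
if command -v dash >/dev/null; then
  dash -c 'array=(1 2 3 4 5)' 2>&1 || true      # Syntax error: "(" unexpected
fi
```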

---


Running the build


If there are no compatible agents and you try to run the build, the following message appears:

Warning: No enabled compatible agents for this build configuration. Please register a build agent or tweak build configuration requirements.

TeamCity automatically detects whether there are any agents compatible with the build steps (Build Steps item in the left-hand menu). They are shown in:

Agent Requirements >> Agents Compatibility 
[In this section you can see which agents are compatible with the requirements and which are not.]


https://stackoverflow.com/questions/4737114/build-project-how-to-checkout-different-repositories-in-different-folders

https://confluence.jetbrains.com/display/TCD5/VCS+Checkout+Rules

https://www.jetbrains.com/help/teamcity/2019.1/integrating-teamcity-with-docker.html#IntegratingTeamCitywithDocker-DockerSupportBuildFeature

Integration with Artifactory


TeamCity Artifactory Plug-in

Once the plugin is installed and integrated, we can see the Artifactory Integration section in the build step settings. It contains the following settings:

  • Artifactory server URL (string)
  • Override default deployer credentials (boolean)
  • Publish build info (boolean)
  • Run license checks (boolean)
  • Download and upload by:
    • Specs
    • Legacy patterns (deprecated)
  • Download spec source:
    • Job configuration
    • File
  • Spec (string)
  • Upload spec source:
    • Job configuration
    • File
  • Spec (string)

Artifactory server URL

Example:
https://artifactory.example.com/artifactory


Publish build info


If this is checked, the Artifactory plugin creates and publishes on the Artifactory server a JSON file which contains the build info with a plain list of all artifacts. The path to this file on Artifactory is:

Artifact Repository Browser >> artifactory-build-info/<build_configuration_id>/xx-xxxxxxxxxxxxx.json

The build configuration ID is usually in the form ProjectName_BuildConfigurationName.

The content of that JSON file looks like:

{
  "version" : "1.0.1",
  "name" : "My_Proj_Test",
  "number" : "25",
  "type" : "GENERIC",
  "buildAgent" : {
    "name" : "simpleRunner"
  },
  "agent" : {
    "name" : "TeamCity",
    "version" : "2019.1.2 (build 66342)"
  },
  "started" : "2019-08-19T10:47:12.557+0200",
  "durationMillis" : 112,
  "principal" : "Bojan Komazec",
  "artifactoryPrincipal" : "deployer",
  "artifactoryPluginVersion" : "2.8.0",
  "url" : "https://teamcity.iexample.com/viewLog.html?buildId=16051507&buildTypeId=My_Proj_Test",
  "vcs" : [ ],
  "licenseControl" : {
    "runChecks" : false,
    "includePublishedArtifacts" : false,
    "autoDiscover" : true,
    "licenseViolationsRecipientsList" : "",
    "scopesList" : ""
  },
  "modules" : [ {
    "id" : "My_Proj_Test :: 25",
    "artifacts" : [ {
      "type" : "",
      "sha1" : "b29930daa02406077d96a7b7a08ce282b3de6961",
      "sha256" : "47d741b6059c6d7e99be23ce46fb9ba099cfd6515de1ef7681f93479d25996a4",
      "md5" : "9b2bb321f2dd1a87857eb875ce22f7e1",
      "name" : "file1"
    }, {
      "type" : "",
      "sha1" : "b29930dda02406077d96a7b7a08ce282b3de6961",
      "sha256" : "47d741b6059c6d7e99be25ce46fb9ba099cfd6515de1ef7681f93479d25996a4",
      "md5" : "9b2bb321f5dd1a87857eb875ce22f7e1",
      "name" : "file2"
    } ]
  } ],
  "buildDependencies" : [ ],
  "governance" : {
    "blackDuckProperties" : {
      "runChecks" : false,
      "includePublishedArtifacts" : false,
      "autoCreateMissingComponentRequests" : false,
      "autoDiscardStaleComponentRequests" : false
    }
  }
}



Upload spec source with Job configuration



If we want to upload some output directory to Artifactory, it is enough to set the Artifactory server URL, choose Job configuration as the Upload spec source and set Spec to e.g.:

{
   "files": [
      {
         "pattern":"./output",
         "target": "path/to/output/",
         "flat": false
      }   
   ] 
}

output is a directory on the TeamCity agent and path/to/output/ is the path to the target directory on Artifactory. In this example the content in Artifactory will be at the path artifactory.example.com/path/to/output/output/*.

To avoid this, we can set the working directory to ./output/ and then set the pattern to "./". In that case the content would be at the path artifactory.example.com/path/to/output/*.
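The two layouts can be illustrated locally with cp -r, whose behavior is analogous (a sketch only, not what the plugin actually runs; "repo" stands in for the Artifactory target path):

```shell
# "output" plays the local artefact directory; "repo/..." stands in for the
# Artifactory target path path/to/output/.
mkdir -p output repo/nested repo/flat
echo data > output/file1

# Pattern "./output" with flat:false nests the directory under the target:
cp -r output repo/nested/
# -> repo/nested/output/file1 (the extra "output" level)

# Working directory ./output/ with pattern "./" uploads only the contents:
cp -r output/. repo/flat/
# -> repo/flat/file1 (no extra level)
```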


It is possible to use TeamCity variables in the Custom published artifacts value:

data-vol/artefacts/=>MyArtifactoryFolder/artefacts-%build.number%.zip


https://teamcity-support.jetbrains.com/hc/en-us/community/posts/206163909-Select-branch-combination-from-different-VCS-Roots

https://tc.example.com/viewLog.html?buildId=15995461&buildTypeId=Browser_AdminNew_RunWpTools&tab=artifacts


Download by Specs


Download spec source = Job configuration

Artifactory page URL example:

https://artifactory.example.com/artifactory/webapp/#/artifacts/browse/tree/General/ProductXYZ/ModuleA

Repository Path: ProductXYZ/ModuleA


Spec: (example when we want to download all files with extension .ext from ModuleA)

{
   "files": [
      {
         "pattern":"ProductXYZ/ModuleA/*.ext",
         "target": "./data-ext/",
         "flat": true
      }   
   ] 
}


JFrog CLI


JFrog CLI is a client app for JFrog products, e.g. Artifactory. To use it from TeamCity, the easiest way is to run it from a ready-made Docker container. In a Command Line-based build step, set the following:

Docker Settings

Run step within Docker container: docker.bintray.io/jfrog/jfrog-cli-go:latest
(TeamCity will start a container from the specified image and will try to run this build step within this container.)

Docker image platform: Linux

Additional docker run arguments: --env JFROG_CLI_OFFER_CONFIG=false

Custom script contains jfrog CLI commands which should be in the following form:

$ jfrog target command-name global-options command-options arguments

target - the product on which we wish to execute the command; rt is used for JFrog Artifactory.


Custom script can be like this:

echo env.artifactory.deployer.username = %env.artifactory.deployer.username% 
echo env.artifactory.deployer.password = %env.artifactory.deployer.password%

echo Checking if Artifactory is accessible...
env

jfrog -v

jfrog rt c my-artifactory --url=https://artifactory.example.com/artifactory --apikey=%system.artifactory.apikey.my-test-user%

jfrog rt use my-artifactory
jfrog rt ping
jfrog rt c show

jfrog rt u  ./ rootdir/MyApp/test/

---

c = config
my-artifactory - a custom name that we want to use as the unique ID for the new Artifactory server configuration
--url - default Artifactory URL to be used for the other commands
--apikey - default API key to be used for the other commands

c show - shows the stored configuration. If this argument is followed by a configured server ID, only that server's configuration is shown.

use - specifies which of the configured Artifactory instances should be used for the following CLI commands.

ping - verifies that Artifactory is accessible by sending an applicative ping to it.

u = upload



If we didn't set JFROG_CLI_OFFER_CONFIG=false, then for the

jfrog rt ping

command we'd get the following error:

[15:24:27][Step 2/3] Checking if Artifactory is accessible...
[15:24:27][Step 2/3] To avoid this message in the future, set the JFROG_CLI_OFFER_CONFIG environment variable to false.
[15:24:27][Step 2/3] The CLI commands require the Artifactory URL and authentication details
[15:24:27][Step 2/3] Configuring JFrog CLI with these parameters now will save you having to include them as command options.
[15:24:27][Step 2/3] You can also configure these parameters later using the 'config' command.
[15:24:27][Step 2/3] [Error] The --url option is mandatory
[15:24:27][Step 2/3] Configure now? (y/n): 
[15:24:27][Step 2/3] Process exited with code 1
[15:24:27][Step 2/3] Process exited with code 1 (Step: Pushing to Artifactory via JFrog CLI (Command Line))


Wednesday 14 August 2019

How To Install Postman on Ubuntu

Download the installer archive file from Postman's Download page.

Unpack the archive:

$ sudo tar -xzvf Postman-linux-x64-7.5.0.tar.gz -C /opt

Verify the content of the unpacked directory:

$ ls -la /opt/Postman/
total 12
drwxr-xr-x 3  999 docker 4096 Aug 12 13:14 .
drwxr-xr-x 8 root root   4096 Aug 14 11:08 ..
drwxr-xr-x 4  999 docker 4096 Aug 12 13:14 app
lrwxrwxrwx 1  999 docker   13 Aug 12 13:14 Postman -> ./app/Postman

Remove the archive as it's not needed anymore:

$ rm Postman-linux-x64-7.5.0.tar.gz

Create a Postman.desktop file:

$ touch ~/.local/share/applications/Postman.desktop

Open it:

$ gedit ~/.local/share/applications/Postman.desktop

Edit it:

[Desktop Entry]
Encoding=UTF-8
Name=Postman
Exec=/opt/Postman/app/Postman %U
Icon=/opt/Postman/app/resources/app/assets/icon.png
Terminal=false
Type=Application
Categories=Development;

Save it and close the editor.

Postman now appears in the list of Ubuntu applications.

References:

Postman - Linux installation

Saturday 3 August 2019

Testing Go with Ginkgo


>ginkgo help
Ginkgo Version 1.8.0

ginkgo --
--------------------------------------------
Run the tests in the passed in (or the package in the current directory if left blank).
Any arguments after -- will be passed to the test.
Accepts the following flags:
-a Force rebuilding of packages that are already up-to-date.
-afterSuiteHook string
Run a command when a suite test run completes
-asmflags string
Arguments to pass on each go tool asm invocation.
-blockprofilerate int
Control the detail provided in goroutine blocking profiles by calling runtime.SetBlockProfileRate with the given value. (default 1)
-buildmode string
Build mode to use. See 'go help buildmode' for more.
-compiler string
Name of compiler to use, as in runtime.Compiler (gccgo or gc).
-compilers int
The number of concurrent compilations to run (0 will autodetect)
-cover
Run tests with coverage analysis, will generate coverage profiles with the package name in the current directory.
-covermode string
Set the mode for coverage analysis.
-coverpkg string
Run tests with coverage on the given external modules.
-coverprofile string
Write a coverage profile to the specified file after all tests have passed.
-cpuprofile string
Write a CPU profile to the specified file before exiting.
-debug
If set, ginkgo will emit node output to files when running in parallel.
-dryRun
If set, ginkgo will walk the test hierarchy without actually running anything. Best paired with -v.
-failFast
If set, ginkgo will stop running a test suite after a failure occurs.
-failOnPending
If set, ginkgo will mark the test suite as failed if any specs are pending.
-flakeAttempts int
Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. (default 1)
-focus string
If set, ginkgo will only run specs that match this regular expression.
-gccgoflags string
Arguments to pass on each gccgo compiler/linker invocation.
-gcflags string
Arguments to pass on each go tool compile invocation.
-installsuffix string
A suffix to use in the name of the package installation directory.
-keepGoing
When true, failures from earlier test suites do not prevent later test suites from running
-ldflags string
Arguments to pass on each go tool link invocation.
-linkshared
Link against shared libraries previously created with -buildmode=shared.
-memprofile string
Write a memory profile to the specified file after all tests have passed.
-memprofilerate int
Enable more precise (and expensive) memory profiles by setting runtime.MemProfileRate.
-mod string
Go module control. See 'go help modules' for more.
-msan
Enable interoperation with memory sanitizer.
-n go test
Have go test print the commands but do not run them.
-noColor
If set, suppress color output in default reporter.
-nodes int
The number of parallel test nodes to run (default 1)
-noisyPendings
If set, default reporter will shout about pending tests. (default true)
-noisySkippings
If set, default reporter will shout about skipping tests. (default true)
-outputdir string
Place output files from profiling in the specified directory.
-p Run in parallel with auto-detected number of nodes
-pkgdir string
install and load all packages from the given dir instead of the usual locations.
-progress
If set, ginkgo will emit progress information as each spec runs to the GinkgoWriter.
-r Find and run test suites under the current directory recursively.
-race
Run tests with race detection enabled.
-randomizeAllSpecs
If set, ginkgo will randomize all specs together. By default, ginkgo only randomizes the top level Describe, Context and When groups.
-randomizeSuites
When true, Ginkgo will randomize the order in which test suites run
-regexScansFilePath
If set, ginkgo regex matching also will look at the file path (code location).
-requireSuite
Fail if there are ginkgo tests in a directory but no test suite (missing RunSpecs)
-seed int
The seed used to randomize the spec suite. (default 1552989981)
-skip string
If set, ginkgo will only run specs that do not match this regular expression.
-skipMeasurements
If set, ginkgo will skip any measurement specs.
-skipPackage string
A comma-separated list of package names to be skipped. If any part of the package's path matches, that package is ignored.
-slowSpecThreshold float
(in seconds) Specs that take longer to run than this threshold are flagged as slow by the default reporter. (default 5)
-stream
stream parallel test output in real time: less coherent, but useful for debugging (default true)
-succinct
If set, default reporter prints out a very succinct report
-tags string
A list of build tags to consider satisfied during the build.
-timeout duration
Suite fails if it does not complete within the specified timeout (default 24h0m0s)
-toolexec string
a program to use to invoke toolchain programs like vet and asm.
-trace
If set, default reporter prints out the full stack trace when a failure occurs
-untilItFails
When true, Ginkgo will keep rerunning tests until a failure occurs
-v If set, default reporter print out all specs as they begin.
-vet string
Configure the invocation of 'go vet' to use the comma-separated list of vet checks. If list is 'off', 'go test' does not run 'go vet' at all.
-work
Print the name of the temporary work directory and do not delete it when exiting.
-x go test
Have go test print the commands.

ginkgo watch --
--------------------------------------------------
Watches the tests in the passed in and runs them when changes occur.
Any arguments after -- will be passed to the test.
Accepts all the flags that the ginkgo command accepts except for --keepGoing and --untilItFails

ginkgo build
-------------------------------
Build the passed in (or the package in the current directory if left blank).
Accepts the following flags:
-a Force rebuilding of packages that are already up-to-date.
-asmflags string
Arguments to pass on each go tool asm invocation.
-blockprofilerate int
Control the detail provided in goroutine blocking profiles by calling runtime.SetBlockProfileRate with the given value. (default 1)
-buildmode string
Build mode to use. See 'go help buildmode' for more.
-compiler string
Name of compiler to use, as in runtime.Compiler (gccgo or gc).
-cover
Run tests with coverage analysis, will generate coverage profiles with the package name in the current directory.
-covermode string
Set the mode for coverage analysis.
-coverpkg string
Run tests with coverage on the given external modules.
-coverprofile string
Write a coverage profile to the specified file after all tests have passed.
-cpuprofile string
Write a CPU profile to the specified file before exiting.
-gccgoflags string
Arguments to pass on each gccgo compiler/linker invocation.
-gcflags string
Arguments to pass on each go tool compile invocation.
-installsuffix string
A suffix to use in the name of the package installation directory.
-ldflags string
Arguments to pass on each go tool link invocation.
-linkshared
Link against shared libraries previously created with -buildmode=shared.
-memprofile string
Write a memory profile to the specified file after all tests have passed.
-memprofilerate int
Enable more precise (and expensive) memory profiles by setting runtime.MemProfileRate.
-mod string
Go module control. See 'go help modules' for more.
-msan
Enable interoperation with memory sanitizer.
-n go test
Have go test print the commands but do not run them.
-outputdir string
Place output files from profiling in the specified directory.
-pkgdir string
install and load all packages from the given dir instead of the usual locations.
-r Find and run test suites under the current directory recursively.
-race
Run tests with race detection enabled.
-requireSuite
Fail if there are ginkgo tests in a directory but no test suite (missing RunSpecs)
-skipPackage string
A comma-separated list of package names to be skipped. If any part of the package's path matches, that package is ignored.
-tags string
A list of build tags to consider satisfied during the build.
-toolexec string
a program to use to invoke toolchain programs like vet and asm.
-vet string
Configure the invocation of 'go vet' to use the comma-separated list of vet checks. If list is 'off', 'go test' does not run 'go vet' at all.
-work
Print the name of the temporary work directory and do not delete it when exiting.
-x go test
Have go test print the commands.

ginkgo bootstrap
------------------------
Bootstrap a test suite for the current package
Accepts the following flags:
-agouti
If set, bootstrap will generate a bootstrap file for writing Agouti tests
-internal
If set, generate will generate a test file that uses the regular package name
-nodot
If set, bootstrap will generate a bootstrap file that does not . import ginkgo and gomega
-template string
If specified, generate will use the contents of the file passed as the bootstrap template

ginkgo generate
-----------------------------
Generate a test file named filename_test.go
If the optional argument is omitted, a file named after the package in the current directory will be created.
Accepts the following flags:
-agouti
If set, generate will generate a test file for writing Agouti tests
-internal
If set, generate will generate a test file that uses the regular package name
-nodot
If set, generate will generate a test file that does not . import ginkgo and gomega

ginkgo nodot
------------
Update the nodot declarations in your test suite
Any missing declarations (from, say, a recently added matcher) will be added to your bootstrap file.
If you've renamed a declaration, that name will be honored and not overwritten.

ginkgo convert /path/to/package
-------------------------------
Convert the package at the passed in path from an XUnit-style test to a Ginkgo-style test

ginkgo unfocus (or ginkgo blur)
-------------------------------
Recursively unfocuses any focused tests under the current directory

ginkgo version
--------------
Print Ginkgo's version

ginkgo help
---------------------
Print usage information. If a command is passed in, print usage information just for that command.


---

To create a test suite file for a package (example):

../github.com/BojanKomazec/go-demo/internal/pkg/stringdemo$ ginkgo bootstrap
Generating ginkgo test suite bootstrap for stringdemo in:
        stringdemo_suite_test.go

The generated file stringdemo_suite_test.go looks like this:

package stringdemo_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestStringdemo(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Stringdemo Suite")
}

The file name has to match the *_test.go pattern.

The package name can be adjusted to match the package under test (e.g. package stringdemo instead of package stringdemo_test). This can be done during bootstrap by passing the -internal flag.

---

To run all tests across all packages in the project and print the coverage percentage, use:

ginkgo -r -v -cover

To achieve the same with native go test do the following:

go test ./... -v -cover


Panic in a goroutine crashes test suite
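
This happens because recover() only catches a panic raised in the same goroutine; a panic inside a goroutine spawned by a spec bypasses Ginkgo's fail handler and takes down the whole test process, which is why Ginkgo asks you to defer GinkgoRecover() inside such goroutines. A stdlib-only sketch of the same-goroutine rule (the names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// runInGoroutineWithRecover mimics what a deferred GinkgoRecover() does:
// recover() only catches a panic raised in the SAME goroutine, so the
// deferred recover must live inside the goroutine that may panic. Without
// it, the panic would crash the entire test process, i.e. the whole suite.
func runInGoroutineWithRecover() (recovered string) {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		defer func() {
			if r := recover(); r != nil {
				recovered = fmt.Sprint(r) // panic caught inside the goroutine
			}
		}()
		panic("boom") // e.g. a helper goroutine started by a spec
	}()
	wg.Wait()
	return recovered
}

func main() {
	fmt.Println("recovered:", runInGoroutineWithRecover())
	fmt.Println("suite continues")
}
```

In a real spec, replacing the deferred recover function with defer GinkgoRecover() routes such a panic into an ordinary test failure instead of a process crash.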

Running dlv test on tests written with the Ginkgo framework can print the test binary's usage, which lists Ginkgo's flags alongside the standard go test flags:

API server listening at: 127.0.0.1:9379

Usage of C:\...\git.bk.com\example+project\internal\example_package\debug.test:
  -ginkgo.debug
    If set, ginkgo will emit node output to files when running in parallel.
  -ginkgo.dryRun
    If set, ginkgo will walk the test hierarchy without actually running anything.  Best paired with -v.
  -ginkgo.failFast
    If set, ginkgo will stop running a test suite after a failure occurs.
  -ginkgo.failOnPending
    If set, ginkgo will mark the test suite as failed if any specs are pending.
  -ginkgo.flakeAttempts int
    Make up to this many attempts to run each spec. Please note that if any of the attempts succeed, the suite will not be failed. But any failures will still be recorded. (default 1)
  -ginkgo.focus string
    If set, ginkgo will only run specs that match this regular expression.
  -ginkgo.noColor
    If set, suppress color output in default reporter.
  -ginkgo.noisyPendings
    If set, default reporter will shout about pending tests. (default true)
  -ginkgo.noisySkippings
    If set, default reporter will shout about skipping tests. (default true)
  -ginkgo.parallel.node int
    This worker node's (one-indexed) node number.  For running specs in parallel. (default 1)
  -ginkgo.parallel.streamhost string
    The address for the server that the running nodes should stream data to.
  -ginkgo.parallel.synchost string
    The address for the server that will synchronize the running nodes.
  -ginkgo.parallel.total int
    The total number of worker nodes.  For running specs in parallel. (default 1)
  -ginkgo.progress
    If set, ginkgo will emit progress information as each spec runs to the GinkgoWriter.
  -ginkgo.randomizeAllSpecs
    If set, ginkgo will randomize all specs together.  By default, ginkgo only randomizes the top level Describe, Context and When groups.
  -ginkgo.regexScansFilePath
    If set, ginkgo regex matching also will look at the file path (code location).
  -ginkgo.seed int
    The seed used to randomize the spec suite. (default 1553106348)
  -ginkgo.skip string
    If set, ginkgo will only run specs that do not match this regular expression.
  -ginkgo.skipMeasurements
    If set, ginkgo will skip any measurement specs.
  -ginkgo.slowSpecThreshold float
    (in seconds) Specs that take longer to run than this threshold are flagged as slow by the default reporter. (default 5)
  -ginkgo.succinct
    If set, default reporter prints out a very succinct report
  -ginkgo.trace
    If set, default reporter prints out the full stack trace when a failure occurs
  -ginkgo.v
    If set, default reporter print out all specs as they begin.
  -test.bench regexp
    run only benchmarks matching regexp
  -test.benchmem
    print memory allocations for benchmarks
  -test.benchtime d
    run each benchmark for duration d (default 1s)
  -test.blockprofile file
    write a goroutine blocking profile to file
  -test.blockprofilerate rate
    set blocking profile rate (see runtime.SetBlockProfileRate) (default 1)
  -test.count n
    run tests and benchmarks n times (default 1)
  -test.coverprofile file
    write a coverage profile to file
  -test.cpu list
    comma-separated list of cpu counts to run each test with
  -test.cpuprofile file
    write a cpu profile to file
  -test.failfast
    do not start new tests after the first test failure
  -test.list regexp
    list tests, examples, and benchmarks matching regexp then exit
  -test.memprofile file
    write an allocation profile to file
  -test.memprofilerate rate
    set memory allocation profiling rate (see runtime.MemProfileRate)
  -test.mutexprofile string
    write a mutex contention profile to the named file after execution
  -test.mutexprofilefraction int
    if >= 0, calls runtime.SetMutexProfileFraction() (default 1)
  -test.outputdir dir
    write profiles to dir
  -test.parallel n
    run at most n tests in parallel (default 4)
  -test.run regexp
    run only tests and examples matching regexp
  -test.short
    run smaller test suite to save time
  -test.testlogfile file
    write test action log to file (for use only by cmd/go)
  -test.timeout d
    panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    write an execution trace to file
  -test.v
    verbose: print additional output

Current Directory in Unit Tests


If the test file is module/module_test.go, the current directory (denoted by "./") in the unit tests in module_test.go is set to the module directory.


DescribeTable

ginkgo/table.go at master · onsi/ginkgo:
Under the hood, DescribeTable simply generates a new Ginkgo Describe. Each Entry is turned into an It within the Describe. It's important to understand that the Describes and Its are generated at evaluation time (i.e. when Ginkgo constructs the tree of tests and before the tests run).

The last paragraph explains why it is not a good idea to pass values to Entry that are results of calls on the SUT. The SUT might not be constructed yet, or might not be in the proper state for those calls, so if we access the SUT at evaluation time we might get errors like:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x80e29e]

In general, good TDD practice is to use hardcoded expected values rather than values calculated from the SUT, as those might themselves be wrong.
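
Because the spec tree is built before any BeforeEach runs, an Entry argument that calls the SUT executes against a nil (or unconfigured) object. A stdlib-only sketch of this failure mode (the sut type and all names here are hypothetical):

```go
package main

import "fmt"

// sut mimics a system-under-test that a Ginkgo suite would construct in
// BeforeEach, i.e. only AFTER DescribeTable has built the spec tree.
type sut struct{ answers map[string]int }

func (s *sut) Lookup(key string) int { return s.answers[key] }

// buildEntryArg mimics evaluating an Entry argument such as
// Entry("...", s.Lookup("answer"), ...) at tree-construction time,
// while s is still nil.
func buildEntryArg(s *sut) (val int, panicked string) {
	defer func() {
		if r := recover(); r != nil {
			panicked = fmt.Sprint(r) // nil pointer dereference caught here
		}
	}()
	return s.Lookup("answer"), ""
}

func main() {
	var s *sut // not constructed yet, exactly as at evaluation time
	_, panicked := buildEntryArg(s)
	fmt.Println("panic at evaluation time:", panicked)
}
```

Passing hardcoded expected values to Entry avoids this entirely, since nothing touches the SUT until the generated It actually runs.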


Use --failFast to stop executing tests as soon as one test fails:

$ ginkgo --failFast ./...

The native go test equivalent is -failfast (it appears as -test.failfast in the usage dump above):

$ go test -failfast ./...

 

Resources: