Thursday 15 August 2019

Introduction to TeamCity


Each project is identified by its name and unique ID:

General Settings

The project dashboard contains the General Settings menu in the upper left corner. It looks like this:

VCS Roots

TeamCity builds need to fetch the code that is to be built into a binary. That code is usually kept in a Version Control System (VCS). We can add a VCS to TeamCity via the VCS Roots view:

Clicking the Create VCS root button opens a new page where we can select the type of VCS. For example, if we choose Git, we'll get:

VCS root name can be the name of the repository, e.g. my-repo.

VCS root ID gets automatically generated as you type the VCS root name.

Fetch URL can be in SSH form, e.g. git@github.com:&lt;user&gt;/&lt;repo&gt;.git

When Uploaded key is selected (from a list which is automatically populated with SSH keys added in the SSH Keys view), a field for entering the private key password appears dynamically:

When TeamCity has to check out a repository from the VCS, it needs to authenticate. Using SSH keys is the preferred way. We can create an SSH key pair on a dev machine, then upload the private key to the TeamCity server and the public key to the VCS (e.g. GitHub). I wrote earlier about how to generate SSH key pairs on Ubuntu.
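
The key-pair generation mentioned above can be sketched as follows (the key type, comment and file names here are my choices, not from the original post):

```shell
# Generate an Ed25519 key pair with no passphrase.
# -f sets the output file, -N the (empty) passphrase, -C a comment.
ssh-keygen -t ed25519 -N "" -C "teamcity-deploy-key" -f ./teamcity_deploy_key

# teamcity_deploy_key      -> upload in TeamCity's SSH Keys view
# teamcity_deploy_key.pub  -> add as a Deploy key on GitHub
cat ./teamcity_deploy_key.pub
```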

After adding the public SSH key to the list of Deploy keys for the given repo in the VCS, we can click the Test connection button and if everything is ok, we'll see:

If you've forgotten to add the public SSH key to the repo in the VCS, you might get this error:

Now we need to click the Create button for this VCS root configuration to be saved.

Report Tabs



Shared Resources


Meta-Runner is a generalized build step that can be used across different build configurations.

Meta-runners are created by selecting a build step that we want to reuse/generalise and selecting in the upper right corner Actions >> Extract meta-runner... which opens a new window where we can define the following meta-runner's attributes:
  • Project (so all build configurations within that project can use it)
  • Name
  • ID
  • Description
Meta-runners are stored in XML format which contains a list of relevant parameters and the script which performs meta-runner's action. Here is an example:

<?xml version="1.0" encoding="UTF-8"?>
<meta-runner name="7z_Extract_Archive">
  <description>Use 7z to extract an archive</description>
  <settings>
    <parameters>
      <param name="" value="" spec="text display='normal' validationMode='not_empty'" />
      <param name="" value="" spec="text display='normal' validationMode='not_empty'" />
    </parameters>
    <build-runners>
      <runner name="Use 7z to extract the archive" type="simpleRunner">
        <parameters>
          <param name="script.content" value="7z.exe e" />
          <param name="teamcity.step.mode" value="default" />
          <param name="use.custom.script" value="true" />
        </parameters>
      </runner>
    </build-runners>
    <requirements />
  </settings>
</meta-runner>

Meta-runners are used in a build configuration as custom types of runners for build steps. When we want to add a new build step, we first need to choose its Runner type from a drop-down list. This list shows meta-runners from the current project first, then those from the Root project. If we choose 7z_Extract_Archive as the runner for some step, its parameters will automatically be added to this build configuration. So where do we get the values these parameters are set to, referenced via %param%?


Hand in hand with meta-runners go templates. Templates are build configurations which don't have any build steps but only define a set of parameters and their default values. In our case, we'll create a template named e.g. 7z_Extract_Archive_Policy which defines two parameters.

A build config which contains a step based on the 7z_Extract_Archive meta-runner should be attached to the 7z_Extract_Archive_Policy template, as this is the way for it to get all parameters required/used by 7z_Extract_Archive.

This explains why in the 7z_Extract_Archive meta-runner we set the parameter to a %-referenced value: the value is taken from a parameter inherited from 7z_Extract_Archive_Policy.
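
As a purely hypothetical illustration (the parameter names archive.path and output.dir are my placeholders, not taken from the original), the meta-runner's script.content could reference such inherited parameters like this:

```
7z.exe e %archive.path% -o%output.dir%
```

TeamCity substitutes each %...% reference with the parameter's value (here inherited from the attached template) before the step runs.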

Meta-runners define WHAT has to be done and templates define HOW.

Q: How to create a template which references a property that belongs to some meta-runner or template (policy), given that templates are not inheritable from other templates? The issue is that if we reference it, the build would use this one.

A: If a build config is based on two templates which share the same property, it will get the property from the template with the higher rank order (the one that is listed first in the list of templates).

Maven Settings

Issue Trackers

Cloud Profiles

Clean-up Rules

Versioned Settings

Artifacts Storage

SonarQube Servers

NuGet Feed

SSH Keys

This is where we add private SSH keys for authenticating TeamCity with VCS.

Clicking the Upload SSH Key button opens the following dialog:


Build Configuration

All paths in build configuration settings should be relative paths (relative to the build checkout directory). The checkout directory is the directory where all the sources (configured as VCS roots) are placed by TeamCity.

General Settings

Name (text field)

This is a custom name of this build configuration.

Build configuration ID (text field)

Default value is in form: ProjectName_SubProjectName_BuildConfigurationName

Description (text field)

This is a custom description of this build configuration.

Build configuration type (combo box)

This can have one of these 3 values:
  • Regular
  • Composite (aggregating results)
  • Deployment

Build number format (text field)


Build counter (text field)

Publish artifacts (combo box)

3 options are available:

  • Even if build fails
  • Only if build status is successful
  • Always, even if build stop command was issued

Artifact paths (text field)

The build artifacts are files that you want to publish as the results of your build. If the build generates its output in a folder named "output", you can just set "output" as your artifacts path.

Let's assume that some build step creates the directory output/content and this is the place where artifacts are stored. If the value of this field is set to:

output/content => content

...then upon a successful build, we can see an enabled Artifacts icon for this build in the build list view; clicking it shows a folder named content which we can expand to see its contents.


Build step >> Working Directory: ./output/
Build step >> Custom script: mkdir -p ./dirA/ && echo "content1" > ./dirA/file1
Artifact paths: ./output/ => output_content

Upon the build, artifacts are available at the build's Artifacts URL.

We can omit => target and specify just the files/directories we want to be picked up as artifacts.

Example: to pick all exe and ps1 files from the root directory we can set:
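
A minimal sketch of such an artifact paths value, assuming TeamCity's standard newline-delimited wildcard syntax:

```
*.exe
*.ps1
```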


Build options

Enable hanging builds detection

Allow triggering personal builds

Enable status widget

Limit the number of simultaneously running builds (0 — unlimited)

  • set it to 1 to prevent parallel executions completely

Build Step: Command Line

Working directory

Allows starting a build in a subdirectory of the checkout directory (use a relative path).
When not specified, the build is started in the checkout directory.
All relative paths in TeamCity are relative to checkout directory.
If specified directory doesn't exist, TeamCity will create it (there is no need for mkdir to be used).

Agent Requirements

How to allow only those agents whose name starts with some specific string?

Add a new requirement with the following settings:

Parameter Name:
Condition: starts with
Value: <string>

How to allow only those agents which are running Linux?

Parameter Name: docker.server.osType
Condition: equals
Value: linux


Snapshot Dependencies

Snapshot dependencies are used to create build chains. When being a part of build chain the build of this configuration will start only when all dependencies are built. If necessary, the dependencies will be triggered automatically. Build configurations linked by a snapshot dependency can optionally use revisions synchronization to ensure the same snapshot of the sources.

Artifact Dependencies

Artifact dependency allows using artifacts produced by another build. 

Clicking the Add new artifact dependency button opens a dialog where we can choose:

Depend on (string) - the name of the TeamCity build config which will be the artifacts source.

Get artifacts from:
  • Latest successful build
  • Latest pinned build
  • Latest finished build
  • Latest finished build with specified tag
  • Build from the same chain
  • Build with specified build number
Artifacts rules (string) - the pattern which defines which directory from the artifacts is copied to which directory in the local build. Provide here a newline-delimited set of rules in the form of
[+:|-:]SourcePath[!ArchivePath][=>DestinationPath]. Example:

output => data/input/

output is a directory from the published artifacts of the previous build and data/input/ is a local path in the current build.
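
For illustration, a newline-delimited rule set using the optional +:/-: prefixes could look like this (the paths are hypothetical):

```
+:output/bin => tools/
-:output/bin/*.log
```

The +: rule copies the artifacts under output/bin into the local tools/ directory, while the -: rule excludes log files from that copy.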

Dependencies can be temporarily disabled, which is useful when testing build configs.


Version Control Settings

Updating sources: auto checkout (on agent)
[18:21:53]Will use agent side checkout
[18:21:53]VCS Root: My App 
[18:21:53]checkout rules: =>my-app; revision: 8259e79c8c472a31ddf041ffd6e99308905913c6
[18:21:53]Git version:
[18:21:53]Update checkout directory (/home/docker-agent/work/5094643590a7b75e/my-app)
[18:21:53]/usr/bin/git config core.sparseCheckout true
[18:21:53]/usr/bin/git config http.sslCAInfo
[18:21:53]/usr/bin/git show-ref
[18:21:53]/usr/bin/git ls-remote origin
[18:21:54]/usr/bin/git show-ref refs/remotes/origin/feature/dummy-feature
[18:21:54]/usr/bin/git log -n1 --pretty=format:%H%x20%s 8259e79c8c472a31ddf041ffd6e99308905913c6 --
[18:21:54]/usr/bin/git branch
[18:21:54]/usr/bin/git reset --hard 8259e79c8c472a31ddf041ffd6e99308905913c6
[18:21:54]/usr/bin/git branch --set-upstream-to=refs/remotes/origin/feature/dummy-feature
[18:21:54]Cleaning My App in /home/docker-agent/work/5094603590a7b75e/my-app the file set ALL_UNTRACKED
[18:21:54]/usr/bin/git clean -f -d -x
[18:21:54]Failed to perform checkout on agent: '/usr/bin/git clean -f -d -x' command failed.
exit code: 1
stderr: warning: failed to remove .jfrog/projects
[18:21:55]Error message is logged

Fix: Enable the following option:

Version Control Settings >>  Clean build >>  Delete all files in the checkout directory before the build

Writing Build Steps

Writing bash scripts

To print some predefined property use:

echo teamcity.agent.home.dir = %teamcity.agent.home.dir%
echo =
echo =

In the output log we'll have e.g.:

teamcity.agent.home.dir = /home/agent-docker-01

If we have a build step which uses Command Line runner to run Custom script we can use variables in that script as:

echo MY_VAR value is: ${MY_VAR}

It is possible to concatenate a string variable with the string value of a build property:
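
A hypothetical example (ARTIFACT_PREFIX and the %build.number% property reference are my assumptions): TeamCity replaces %...% references before the script runs, so the shell sees a plain string and ordinary concatenation applies:

```shell
# TeamCity substitutes %build.number% (e.g. with "42") before execution;
# outside TeamCity the placeholder is left as-is.
ARTIFACT_PREFIX="myapp-"
ARTIFACT_NAME="${ARTIFACT_PREFIX}%build.number%.zip"
echo "Artifact name: ${ARTIFACT_NAME}"
```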



If we want to echo text which contains parentheses, we need to escape them:

echo Parentheses test: \(This text is between parentheses\)

If we want to find the id and name of the current user and group, we can use the following in the bash script:

echo user:group \(id\) = $(id -u):$(id -g)
echo user:group \(name\) = $(id -un):$(id -gn)

Log output is like:

user:group (id) = 1008:1008
user:group (name) = docker-slave-73:docker-slave-73

How to ping some remote server:

echo Pinging
ping -v -c 4

How to get current agent's public IP address?

Get agent's public IP address:
dig TXT +short

TeamCity uses /bin/sh by default. On Ubuntu, /bin/sh is typically a symbolic link to /bin/dash, a Bourne-like shell that doesn't support arrays. That will give you the error:

Syntax error: "(" unexpected ...

Syntax error: "(" unexpected (expecting "done")

To instruct TeamCity's Linux agent to use /bin/bash (and therefore support arrays in bash scripts), add a bash shebang #!/bin/bash at the beginning of the script:

To test whether this works, add a test Command Line step as the 1st step in your job and use this snippet for Custom script:

#!/bin/bash
array=(1 2 3 4 5)
echo ${array[*]}
# prints: 1 2 3 4 5


Running the build

If there are no compatible agents and you try to run the build, the following message appears:

Warning: No enabled compatible agents for this build configuration. Please register a build agent or tweak build configuration requirements.

TC automatically detects if there are any agents compatible with the build steps (Build Steps item in the left-hand menu). They are shown in:

Agent Requirements >> Agents Compatibility 
[In this section you can see which agents are compatible with the requirements and which are not.]

Integration with Artifactory

TeamCity Artifactory Plug-in

Once the plugin is installed and integrated, we can see the Artifactory Integration section in the build step settings. It contains the following settings:

  • Artifactory server URL (string)
  • Override default deployer credentials (boolean)
  • Publish build info (boolean)
  • Run license checks (boolean)
  • Download and upload by:
    • Specs
    • Legacy patterns (deprecated)
  • Download spec source:
    • Job configuration
    • File
  • Spec (string)
  • Upload spec source:
    • Job configuration
    • File
  • Spec (string)

Artifactory server URL


Publish build info

If this is checked, the Artifactory plugin creates and publishes on the Artifactory server a JSON file which contains the build info with a plain list of all artifacts. The path to this file on Artifactory is:

Artifact Repository Browser >> artifactory-build-info/<build_configuration_id>/xx-xxxxxxxxxxxxx.json

Build configuration id is usually in form ProjectName_BuildConfigurationName.

The content of that json file is like:

  "version" : "1.0.1",
  "name" : "My_Proj_Test",
  "number" : "25",
  "type" : "GENERIC",
  "buildAgent" : {
    "name" : "simpleRunner"
  "agent" : {
    "name" : "TeamCity",
    "version" : "2019.1.2 (build 66342)"
  "started" : "2019-08-19T10:47:12.557+0200",
  "durationMillis" : 112,
  "principal" : "Bojan Komazec",
  "artifactoryPrincipal" : "deployer",
  "artifactoryPluginVersion" : "2.8.0",
  "url" : "",
  "vcs" : [ ],
  "licenseControl" : {
    "runChecks" : false,
    "includePublishedArtifacts" : false,
    "autoDiscover" : true,
    "licenseViolationsRecipientsList" : "",
    "scopesList" : ""
  "modules" : [ {
    "id" : "My_Proj_Test :: 25",
    "artifacts" : [ {
      "type" : "",
      "sha1" : "b29930daa02406077d96a7b7a08ce282b3de6961",
      "sha256" : "47d741b6059c6d7e99be23ce46fb9ba099cfd6515de1ef7681f93479d25996a4",
      "md5" : "9b2bb321f2dd1a87857eb875ce22f7e1",
      "name" : "file1"
    }, {
      "type" : "",
      "sha1" : "b29930dda02406077d96a7b7a08ce282b3de6961",
      "sha256" : "47d741b6059c6d7e99be25ce46fb9ba099cfd6515de1ef7681f93479d25996a4",
      "md5" : "9b2bb321f5dd1a87857eb875ce22f7e1",
      "name" : "file2"
    } ]
  } ],
  "buildDependencies" : [ ],
  "governance" : {
    "blackDuckProperties" : {
      "runChecks" : false,
      "includePublishedArtifacts" : false,
      "autoCreateMissingComponentRequests" : false,
      "autoDiscardStaleComponentRequests" : false

Upload spec source with Job configuration

If we want to upload some output directory to Artifactory, it is enough to set the Artifactory server URL, choose Job configuration as the Upload spec source and set Spec as e.g.:

   "files": [
         "target": "path/to/output/",
         "flat": false

output is a directory on TeamCity and path/to/output/ is the path to the target directory on Artifactory. In this example the content in Artifactory will be at path*.

To avoid this, we can set the working directory to ./output/ and then set the pattern to "./". In that case the content would be at the path*.

It is possible to use TeamCity variables in Custom published artifacts value:
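
A hypothetical example (the directory name and parameter references are my assumptions, not from the original):

```
output/%env.APP_NAME%-%build.number%.zip => release/
```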


Download by Specs

Download spec source = Job configuration

Artifactory page URL example:

Repository Path: ProductXYZ/ModuleA

Spec: (example when we want to download all files with extension .ext from ModuleA)

   "files": [
         "target": "./data-ext/",
         "flat": true


JFrog CLI is a client app for JFrog products, e.g. Artifactory. To use it from TeamCity, the easiest way is to run it from a ready-made Docker container. In the Command Line-based build step, set the following:

Docker Settings

Run step within Docker container:
E.g. ruby:2.4. TeamCity will start a container from the specified image and will try to run this build step within this container.  

Docker image platform: Linux

Additional docker run arguments: --env JFROG_CLI_OFFER_CONFIG=false

Custom script contains jfrog CLI commands which should be in the following form:

$ jfrog target command-name global-options command-options arguments

target - The product on which you wish to execute the command; rt is used for JFrog Artifactory.

Custom script can be like this:

echo env.artifactory.deployer.username = %env.artifactory.deployer.username% 
echo env.artifactory.deployer.password = %env.artifactory.deployer.password%

echo Checking if Artifactory is accessible...

jfrog -v

jfrog rt c my-artifactory --url=

jfrog rt use my-artifactory
jfrog rt ping
jfrog rt c show

jfrog rt u  ./ rootdir/MyApp/test/


c = config
my-artifactory - a custom name that we want to use as the unique ID for the new Artifactory server configuration
--url - default Artifactory URL to be used for the other commands
--apikey - default API key to be used for the other commands

c show - shows the stored configuration. If this argument is followed by a configured server ID, only that server's configuration is shown.

use - specifies which of the configured Artifactory instances should be used for the following CLI commands.

ping - verifies that Artifactory is accessible by sending an applicative ping to Artifactory.

u = upload

If we hadn't set JFROG_CLI_OFFER_CONFIG=false for the

jfrog rt ping

command, we'd get the following error:

[15:24:27][Step 2/3] Checking if Artifactory is accessible...
[15:24:27][Step 2/3] To avoid this message in the future, set the JFROG_CLI_OFFER_CONFIG environment variable to false.
[15:24:27][Step 2/3] The CLI commands require the Artifactory URL and authentication details
[15:24:27][Step 2/3] Configuring JFrog CLI with these parameters now will save you having to include them as command options.
[15:24:27][Step 2/3] You can also configure these parameters later using the 'config' command.
[15:24:27][Step 2/3] [Error] The --url option is mandatory
[15:24:27][Step 2/3] Configure now? (y/n): 
[15:24:27][Step 2/3] Process exited with code 1
[15:24:27][Step 2/3] Process exited with code 1 (Step: Pushing to Artifactory via JFrog CLI (Command Line))
