Thursday 15 August 2019

Introduction to TeamCity

Project 

Each project is identified by its name and unique ID:




General Settings


The project dashboard contains the General Settings menu in the upper-left corner. It looks like this:




VCS Roots


TeamCity builds need to fetch the code that is to be built into a binary. That code is usually kept in a Version Control System (VCS). We can register a VCS with TeamCity via the VCS Roots view:


Clicking the Create VCS root button opens a new page where we can select the type of VCS. For example, if we choose Git, we'll get:



VCS root name can be the name of the repository, e.g. my-repo.

VCS root ID is generated automatically as you type the VCS root name.

Fetch URL can be in SSH form, e.g. git@git.example.com:project/my-repo.git.

When an uploaded key is selected (from a list automatically populated with the SSH keys added in the SSH Keys view), a field for entering the private key password appears dynamically:




When TeamCity has to check out a repository from the VCS, it needs to authenticate. Using SSH keys is the preferred way. We can create an SSH key pair on a dev machine, upload the private key to the TeamCity server, and add the public key to the VCS (e.g. GitHub). I wrote earlier about how to generate SSH key pairs on Ubuntu.
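A minimal sketch of generating such a key pair on Ubuntu (the key file name tc_deploy_key is just an illustrative choice):

# generate a 4096-bit RSA key pair; -C adds a comment, -f sets the output file
ssh-keygen -t rsa -b 4096 -C "teamcity-deploy" -f ~/.ssh/tc_deploy_key

# ~/.ssh/tc_deploy_key     - private key: upload it to TeamCity (SSH Keys view)
# ~/.ssh/tc_deploy_key.pub - public key: add it as a Deploy key in the VCS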

After adding the public SSH key to the list of Deploy keys for the given repo in the VCS, we can click the Test connection button and, if everything is OK, we'll see:


If you've forgotten to add the public SSH key to the repo in the VCS, you might get this error:




Now we need to click the Create button for this VCS root configuration to be saved.

Report Tabs


Parameters


Connections


Shared Resources


Meta-Runners

A Meta-Runner is a generalized build step that can be used across different build configurations.

Meta-runners are created by selecting a build step that we want to reuse/generalise and then choosing Actions >> Extract meta-runner... in the upper-right corner, which opens a new window where we can define the following meta-runner attributes:
  • Project (so all build configurations within that project can use it)
  • Name
  • ID
  • Description
Meta-runners are stored in XML format, which contains a list of relevant parameters and the script that performs the meta-runner's action. Here is an example:

<?xml version="1.0" encoding="UTF-8"?>
<meta-runner name="7z_Extract_Archive">
  <description>Use 7z to extract an archive</description>
  <settings>
    <parameters>
      <param name="7z.input.archive.name" value="%7z.input.archive.name%" spec="text display='normal' validationMode='not_empty'" />
      <param name="7z.output.directory.name" value="%7z.output.directory.name%" spec="text display='normal' validationMode='not_empty'" />
      ...
    </parameters>
    <build-runners>
      <runner name="Use 7z to extract the archive" type="simpleRunner">
        <parameters>
          ...
          <param name="script.content" value="7z.exe e %7z.input.archive.name% -o%7z.output.directory.name%" />
          <param name="teamcity.step.mode" value="default" />
          <param name="use.custom.script" value="true" />
        </parameters>
      </runner>
    </build-runners>
    <requirements />
  </settings>
</meta-runner>

Meta-runners are used in a build configuration as custom runner types for build steps. When we want to add a new build step, we first need to choose its Runner type from a drop-down list. This list shows meta-runners from this project first, then from its parent, and so on up to the Root project. If we choose 7z_Extract_Archive as the runner for some step, its parameters 7z.input.archive.name and 7z.output.directory.name will be automatically added to this build configuration, set to %7z.input.archive.name% and %7z.output.directory.name%. So where do we get these 7z.input.archive.name and 7z.output.directory.name parameters whose values are referenced via %param%?

Templates


Hand in hand with meta-runners go templates. Templates are build configurations which don't have any build steps but only define a set of parameters and their default values. In our case, we'll create a template named e.g. 7z_Extract_Archive_Policy which defines two parameters, 7z.input.archive.name and 7z.output.directory.name.

A build config which contains a step based on the 7z_Extract_Archive meta-runner should be attached to the 7z_Extract_Archive_Policy template, as this is how it gets all the parameters required/used by 7z_Extract_Archive.

This explains why in the 7z_Extract_Archive meta-runner we set the parameter 7z.input.archive.name to the value %7z.input.archive.name% - this value is taken from the 7z.input.archive.name parameter inherited from 7z_Extract_Archive_Policy.

Meta-runners define WHAT has to be done and templates define HOW.

Q: How can we create a template which references a property belonging to some meta-runner or another template (policy), given that templates cannot inherit from other templates? The issue is that if we reference such a property, the build would use this one.

A: If a build config is based on two templates which share the same property, it will be given the property from the template with the higher rank order (the one listed first in the list of templates).

Maven Settings


Issue Trackers


Cloud Profiles


Clean-up Rules


Versioned Settings


Artifacts Storage


SonarQube Servers


NuGet Feed


SSH Keys


This is where we add private SSH keys used to authenticate TeamCity with the VCS.


Clicking the Upload SSH Key button opens the following dialog:




Suggestions





Build Configuration

All paths in build configuration settings should be relative paths (relative to the build checkout directory). The checkout directory is the directory where TeamCity places all the sources (configured as VCS roots).


General Settings

Name (text field)


This is the custom name of this build configuration.

Build configuration ID (text field)


The default value is in the form: ProjectName_SubProjectName_BuildConfigurationName

Description (text field)


This is the custom description of this build configuration.

Build configuration type (combo box)


This can have one of these 3 values:
  • Regular
  • Composite (aggregating results)
  • Deployment

Build number format (text field)


Example:
%build.counter%
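The format can also mix literal text with parameter references; a typical pattern (the 1.0. version prefix here is illustrative):

1.0.%build.counter%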

Build counter (text field)

Publish artifacts (combo box)


3 options are available:

  • Even if build fails
  • Only if build status is successful
  • Always, even if build stop command was issued

Artifact paths (text field)


The build artifacts are files that you want to publish as the results of your build. If the build generates its output in a folder named "output", you can just set "output" as your artifacts path.

Let's assume that some build step creates the directory output/content and that this is where artifacts are stored. If the value of this field is set to:

output/content => content

...then upon a successful build, in the build list view we can see for this build an enabled Artifacts icon; clicking it shows a folder named content which we can expand to see its content.

Example:

Build step >> Working Directory: ./output/
Build step >> Custom script: mkdir -p ./dirA/ && echo "content1" > ./dirA/file1
Artifact paths: ./output/ => output_content

Upon the build's completion, artifacts are available at a URL like https://tc.example.com/viewLog.html?buildId=1123342&buildTypeId=MyApp_Test&tab=artifacts


We can omit the => target part and specify just the files/directories we want picked up as artifacts.

Example: to pick up all exe and ps1 files from the root directory we can set:

.\*.exe
.\*MyTool.ps1
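Artifact paths also accept ** wildcards, and a target ending in .zip packs the matched files into an archive; a sketch with illustrative paths:

output/**/*.dll => binaries.zip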


Build options


Enable hanging builds detection

Allow triggering personal builds

Enable status widget

Limit the number of simultaneously running builds (0 — unlimited)

  • set it to 1 to prevent parallel executions completely


Build Step: Command Line


Working directory


Allows starting a build in a subdirectory of the checkout directory (use a relative path).
When not specified, the build is started in the checkout directory.
All relative paths in TeamCity are relative to the checkout directory.
If the specified directory doesn't exist, TeamCity will create it (there is no need to use mkdir).
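For example (values are illustrative), the following step runs in a subdirectory which doesn't have to exist beforehand:

Build step >> Working Directory: temp/scratch
Build step >> Custom script: echo test > file1

TeamCity creates temp/scratch under the checkout directory before running the script, so the file ends up at temp/scratch/file1.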

Agent Requirements


How to allow only those agents whose name starts with some specific string?


Add a new requirement with the following settings:

Parameter Name: system.agent.name
Condition: starts with
Value: <string>


How to allow only those agents which are running Linux?


Parameter Name: docker.server.osType
Condition: equals
Value: linux


Dependencies


Snapshot Dependencies


Snapshot dependencies are used to create build chains. When part of a build chain, a build of this configuration will start only after all its dependencies have been built. If necessary, the dependencies will be triggered automatically. Build configurations linked by a snapshot dependency can optionally use revision synchronization to ensure the same snapshot of the sources.

Artifact Dependencies


An artifact dependency allows using artifacts produced by another build.

The Add new artifact dependency button opens a dialog where we can choose:

Depend on (string) - the name of the TeamCity build config which will be the artifacts source.

Get artifacts from:
  • Latest successful build
  • Latest pinned build
  • Latest finished build
  • Latest finished build with specified tag
  • Build from the same chain
  • Build with specified build number
Artifacts rules (string) - a pattern which defines which directory from the published artifacts is copied to which directory in the local build. Provide here a newline-delimited set of rules in the form
[+:|-:]SourcePath[!ArchivePath][=>DestinationPath]. Example:

output => data/input/

output is a directory from the published artifacts of the previous build and data/input/ is a local path in the current build.
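The optional !ArchivePath part lets us pick files from inside a published archive; a sketch with illustrative names:

release.zip!/dist/** => data/input/

This would extract everything under dist/ inside the published release.zip into data/input/ in the current build.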

Dependencies can be temporarily disabled, which is useful when testing build configs.

TBC...


Version Control Settings


A failed agent-side checkout can look like this in the build log:

Updating sources: auto checkout (on agent)
[18:21:53]Will use agent side checkout
[18:21:53]VCS Root: My App 
[18:21:53]checkout rules: =>my-app; revision: 8259e79c8c472a31ddf041ffd6e99308905913c6
[18:21:53]Git version: 2.7.4.0
[18:21:53]Update checkout directory (/home/docker-agent/work/5094643590a7b75e/my-app)
[18:21:53]/usr/bin/git config core.sparseCheckout true
[18:21:53]/usr/bin/git config http.sslCAInfo
[18:21:53]/usr/bin/git show-ref
[18:21:53]/usr/bin/git ls-remote origin
[18:21:54]/usr/bin/git show-ref refs/remotes/origin/feature/dummy-feature
[18:21:54]/usr/bin/git log -n1 --pretty=format:%H%x20%s 8259e79c8c472a31ddf041ffd6e99308905913c6 --
[18:21:54]/usr/bin/git branch
[18:21:54]/usr/bin/git reset --hard 8259e79c8c472a31ddf041ffd6e99308905913c6
[18:21:54]/usr/bin/git branch --set-upstream-to=refs/remotes/origin/feature/dummy-feature
[18:21:54]Cleaning My App in /home/docker-agent/work/5094603590a7b75e/my-app the file set ALL_UNTRACKED
[18:21:54]/usr/bin/git clean -f -d -x
[18:21:54]Failed to perform checkout on agent: '/usr/bin/git clean -f -d -x' command failed.
exit code: 1
stderr: warning: failed to remove .jfrog/projects
[18:21:55]Error message is logged

Fix: Enable the following option:

Version Control Settings >> Clean build >> Delete all files in the checkout directory before the build

Writing Build Steps

Writing bash scripts

---
To print some predefined properties, use:

echo teamcity.agent.home.dir = %teamcity.agent.home.dir%
echo teamcity.agent.work.dir = %teamcity.agent.work.dir%
echo system.teamcity.build.checkoutDir = %system.teamcity.build.checkoutDir%

In the output log we'll have e.g.:

teamcity.agent.home.dir = /home/agent-docker-01

---
If we have a build step which uses the Command Line runner to run a Custom script, we can use variables in that script like this:

MY_VAR=test
echo MY_VAR value is: ${MY_VAR}

It is possible to concatenate a string variable with the string value of a build property:

SCHEMAS_DIR=output_schemas_%myapp.env%
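A sketch of using such a concatenated variable later in the same script (this assumes a build parameter myapp.env set to e.g. staging):

SCHEMAS_DIR=output_schemas_%myapp.env%
mkdir -p ${SCHEMAS_DIR}
echo Schemas directory: ${SCHEMAS_DIR}

TeamCity substitutes %myapp.env% textually before the script runs, so the last line would print: Schemas directory: output_schemas_staging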

---

If we want to echo text which contains parentheses, we need to escape them:

echo Parentheses test: \(This text is between parentheses\)
---

If we want to find the id and name of the current user and group, we can use the following in the bash script:

echo user:group \(id\) = $(id -u):$(id -g)
echo user:group \(name\) = $(id -un):$(id -gn)

Log output is like:

user:group (id) = 1008:1008
user:group (name) = docker-slave-73:docker-slave-73
---
How to ping some remote server:

echo Pinging example.com...
ping -v -c 4 example.com

---
How to get the current agent's public IP address:

dig TXT +short o-o.myaddr.l.google.com @ns1.google.com
---
TeamCity uses /bin/sh by default. On Ubuntu, /bin/sh is typically a symbolic link to /bin/dash, a Bourne-like shell that doesn't support arrays. That will give you the error:

Syntax error: "(" unexpected ...

e.g.
Syntax error: "(" unexpected (expecting "done")

To instruct TeamCity's Linux agent to use /bin/bash (and therefore support arrays in bash scripts), add a bash shebang #!/bin/bash at the beginning of the script.

To test whether this works, add a test Command Line step as the 1st step in your job and use this snippet for Custom script:

#!/bin/bash
array=(1 2 3 4 5)
echo ${array[*]}

---


Running the build


If there are no compatible agents and you try to run the build, the following message appears:

Warning: No enabled compatible agents for this build configuration. Please register a build agent or tweak build configuration requirements.

TeamCity automatically detects whether there are any agents compatible with the build steps (Build Steps item in the left-hand menu). They are shown in:

Agent Requirements >> Agents Compatibility 
[In this section you can see which agents are compatible with the requirements and which are not.]


https://stackoverflow.com/questions/4737114/build-project-how-to-checkout-different-repositories-in-different-folders

https://confluence.jetbrains.com/display/TCD5/VCS+Checkout+Rules

https://www.jetbrains.com/help/teamcity/2019.1/integrating-teamcity-with-docker.html#IntegratingTeamCitywithDocker-DockerSupportBuildFeature

Integration with Artifactory


TeamCity Artifactory Plug-in

Once the plugin is installed and integrated, we can see the Artifactory Integration section in the build step settings. It contains the following settings:

  • Artifactory server URL (string)
  • Override default deployer credentials (boolean)
  • Publish build info (boolean)
  • Run license checks (boolean)
  • Download and upload by:
    • Specs
    • Legacy patterns (deprecated)
  • Download spec source:
    • Job configuration
    • File
  • Spec (string)
  • Upload spec source:
    • Job configuration
    • File
  • Spec (string)

Artifactory server URL

Example:
https://artifactory.example.com/artifactory


Publish build info


If this is checked, the Artifactory plugin creates and publishes on the Artifactory server a JSON file which contains the build info with a plain list of all artifacts. The path to this file on Artifactory is:

Artifact Repository Browser >> artifactory-build-info/<build_configuration_id>/xx-xxxxxxxxxxxxx.json

The build configuration ID is usually in the form ProjectName_BuildConfigurationName.

The content of that JSON file looks like this:

{
  "version" : "1.0.1",
  "name" : "My_Proj_Test",
  "number" : "25",
  "type" : "GENERIC",
  "buildAgent" : {
    "name" : "simpleRunner"
  },
  "agent" : {
    "name" : "TeamCity",
    "version" : "2019.1.2 (build 66342)"
  },
  "started" : "2019-08-19T10:47:12.557+0200",
  "durationMillis" : 112,
  "principal" : "Bojan Komazec",
  "artifactoryPrincipal" : "deployer",
  "artifactoryPluginVersion" : "2.8.0",
  "url" : "https://teamcity.iexample.com/viewLog.html?buildId=16051507&buildTypeId=My_Proj_Test",
  "vcs" : [ ],
  "licenseControl" : {
    "runChecks" : false,
    "includePublishedArtifacts" : false,
    "autoDiscover" : true,
    "licenseViolationsRecipientsList" : "",
    "scopesList" : ""
  },
  "modules" : [ {
    "id" : "My_Proj_Test :: 25",
    "artifacts" : [ {
      "type" : "",
      "sha1" : "b29930daa02406077d96a7b7a08ce282b3de6961",
      "sha256" : "47d741b6059c6d7e99be23ce46fb9ba099cfd6515de1ef7681f93479d25996a4",
      "md5" : "9b2bb321f2dd1a87857eb875ce22f7e1",
      "name" : "file1"
    }, {
      "type" : "",
      "sha1" : "b29930dda02406077d96a7b7a08ce282b3de6961",
      "sha256" : "47d741b6059c6d7e99be25ce46fb9ba099cfd6515de1ef7681f93479d25996a4",
      "md5" : "9b2bb321f5dd1a87857eb875ce22f7e1",
      "name" : "file2"
    } ]
  } ],
  "buildDependencies" : [ ],
  "governance" : {
    "blackDuckProperties" : {
      "runChecks" : false,
      "includePublishedArtifacts" : false,
      "autoCreateMissingComponentRequests" : false,
      "autoDiscardStaleComponentRequests" : false
    }
  }
}



Upload spec source with Job configuration



If we want to upload some output directory to Artifactory, it is enough to set the Artifactory server URL, choose Job configuration as the Upload spec source and set Spec to e.g.:

{
   "files": [
      {
         "pattern":"./output",
         "target": "path/to/output/",
         "flat": false
      }   
   ] 
}

output is a directory on the TeamCity agent and path/to/output/ is the path to the target directory in Artifactory. In this example the content in Artifactory will be at the path artifactory.example.com/path/to/output/output/*.

To avoid this nesting, we can set the working directory to ./output/ and then set the pattern to "./". In that case the content would be at the path artifactory.example.com/path/to/output/*.
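With that change, a sketch of the adjusted upload spec (keeping the same illustrative target) would be:

{
   "files": [
      {
         "pattern": "./",
         "target": "path/to/output/",
         "flat": false
      }
   ]
}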


It is possible to use TeamCity variables in the Custom published artifacts value:

data-vol/artefacts/=>MyArtifactoryFolder/artefacts-%build.number%.zip


https://teamcity-support.jetbrains.com/hc/en-us/community/posts/206163909-Select-branch-combination-from-different-VCS-Roots

https://tc.example.com/viewLog.html?buildId=15995461&buildTypeId=Browser_AdminNew_RunWpTools&tab=artifacts


Download by Specs


Download spec source = Job configuration

Artifactory page URL example:

https://artifactory.example.com/artifactory/webapp/#/artifacts/browse/tree/General/ProductXYZ/ModuleA

Repository Path: ProductXYZ/ModuleA


Spec (example where we want to download all files with the extension .ext from ModuleA):

{
   "files": [
      {
         "pattern":"ProductXYZ/ModuleA/*.ext",
         "target": "./data-ext/",
         "flat": true
      }   
   ] 
}


JFrog CLI


JFrog CLI is a client app for JFrog products, e.g. Artifactory. The easiest way to use it from TeamCity is to run it from a ready-made Docker container. In a Command Line-based build step, set the following:

Docker Settings

Run step within Docker container: docker.bintray.io/jfrog/jfrog-cli-go:latest
(The field's hint gives e.g. ruby:2.4 as an example.) TeamCity will start a container from the specified image and will try to run this build step within this container.

Docker image platform: Linux

Additional docker run arguments: --env JFROG_CLI_OFFER_CONFIG=false

The Custom script contains JFrog CLI commands, which should be in the following form:

$ jfrog target command-name global-options command-options arguments

target - the product on which you wish to execute the command; rt is used for JFrog Artifactory.


The Custom script can look like this:

echo env.artifactory.deployer.username = %env.artifactory.deployer.username% 
echo env.artifactory.deployer.password = %env.artifactory.deployer.password%

echo Checking if Artifactory is accessible...
env

jfrog -v

jfrog rt c my-artifactory --url=https://artifactory.example.com/artifactory --apikey=%system.artifactory.apikey.my-test-user%

jfrog rt use my-artifactory
jfrog rt ping
jfrog rt c show

jfrog rt u  ./ rootdir/MyApp/test/

---

c = config
my-artifactory - a custom name used as the unique ID for the new Artifactory server configuration
--url - the default Artifactory URL to be used by the other commands
--apikey - the default API key to be used by the other commands

c show - shows the stored configuration; if this argument is followed by a configured server ID, only that server's configuration is shown

use - specifies which of the configured Artifactory instances should be used for the following CLI commands

ping - verifies that Artifactory is accessible by sending an applicative ping to it

u = upload
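Downloading works analogously via the dl command; a sketch with illustrative paths:

jfrog rt dl rootdir/MyApp/test/ data/

This would download everything under rootdir/MyApp/test/ in Artifactory into the local data/ directory.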



If we hadn't set JFROG_CLI_OFFER_CONFIG=false, then for the

jfrog rt ping

command we'd get the following error:

[15:24:27][Step 2/3] Checking if Artifactory is accessible...
[15:24:27][Step 2/3] To avoid this message in the future, set the JFROG_CLI_OFFER_CONFIG environment variable to false.
[15:24:27][Step 2/3] The CLI commands require the Artifactory URL and authentication details
[15:24:27][Step 2/3] Configuring JFrog CLI with these parameters now will save you having to include them as command options.
[15:24:27][Step 2/3] You can also configure these parameters later using the 'config' command.
[15:24:27][Step 2/3] [Error] The --url option is mandatory
[15:24:27][Step 2/3] Configure now? (y/n): 
[15:24:27][Step 2/3] Process exited with code 1
[15:24:27][Step 2/3] Process exited with code 1 (Step: Pushing to Artifactory via JFrog CLI (Command Line))

