
Using parameters with Jenkins pipeline builds

After my first experiment in building and publishing our NuGet packages using Jenkins, I wasn't actually anticipating writing a follow-up post. As it transpires, however, I was unhappy with the level of duplication - at the moment I have 19 packages for our internal libraries, and there are around 70 other non-product libraries that could be turned into packages. I don't really want 90+ copies of that script!

As I mentioned originally, Jenkins recommends that the build script be placed into source control, so I started looking at doing that. I wanted to have a single version capable of handling the different configurations that some projects have, and that would receive any required parameters directly from the Jenkins job.

Fortunately this is both possible and easy to do, as you can add custom properties to a Jenkins job which the Groovy scripts can then access. This article details how I took my original script and adapted it to handle 19 (and counting!) package compile and publish jobs.

Defining parameters

An example of a parameterised build

Parameters are switched off and hidden by default, but it's easy enough to enable them. In the General properties for your job, find and tick the option marked This project is parameterised.

This will then show a button marked Add Parameter which, when clicked, will show a drop-down of the different parameter types available. For my script, I'm going to use single-line string, multi-line string and boolean parameters.

Parameter names are exposed as environment variables in batch jobs, so you should avoid common names such as PATH and also ensure that the name doesn't include special characters such as spaces.
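For example, a job with a LIBNAME parameter can reference it directly from a Windows batch step (a minimal illustration, not part of the final script):

node
{
  // string parameters surface as environment variables in batch steps
  bat 'echo Building %LIBNAME%'
}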

By the time I'd added 19 pipeline projects (including converting the four I'd created earlier) as parameterised builds running from the same source script, I'd ended up with the following parameters:

Type        Name                 Example Value
String      LIBNAME              Cyotek.Core
String      TESTLIBNAME          Cyotek.Core.Tests
String      LIBFOLDERNAME        src
String      TESTLIBFOLDERNAME    tests
Multi-line  EXTRACHECKOUTREMOTE  /source/Libraries/Cyotek.Win32
Multi-line  EXTRACHECKOUTLOCAL   .\source\Libraries\Cyotek.Win32
Boolean     SIGNONLY             false

More parameters than I really wanted, but they cover the different scenarios I need. Note that with the exception of LIBNAME, all other parameters are optional and the build should still run even if they aren't actually defined.
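As an aside, the same set of parameters can also be declared from within the pipeline script itself via the properties step, which keeps the parameter definitions in source control alongside the script. Here's a sketch using the names above - the defaults and descriptions are my own illustration, not values taken from the original jobs:

properties([
  parameters([
    string(name: 'LIBNAME', defaultValue: '', description: 'Name of the library to build'),
    string(name: 'TESTLIBNAME', defaultValue: '', description: 'Name of the test library, if any'),
    string(name: 'LIBFOLDERNAME', defaultValue: '', description: 'Folder containing the library project'),
    string(name: 'TESTLIBFOLDERNAME', defaultValue: '', description: 'Folder containing the test project'),
    text(name: 'EXTRACHECKOUTREMOTE', defaultValue: '', description: 'Extra SVN locations to check out, one per line'),
    text(name: 'EXTRACHECKOUTLOCAL', defaultValue: '', description: 'Matching local destinations, one per line'),
    booleanParam(name: 'SIGNONLY', defaultValue: false, description: 'Sign the output without publishing')
  ])
])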

Accessing parameters

There are at least three ways that I know of to access the parameters from your script:

  • env.<ParameterName> - returns the string parameter from environment variables. (You can also use env. to get other environment variables, for example env.ProgramFiles)
  • params.<ParameterName> - returns the strongly typed parameter
  • "${<ParameterName>}" - returns the value via interpolation

Of the three approaches above, the first two return null if you request a parameter which doesn't exist - very helpful for when you decide to add a new parameter later and don't want to update all the existing projects!

The third, however, will crash the build. It'll be easy to diagnose if this happens, as the output log for the build will contain lines similar to the following:

groovy.lang.MissingPropertyException: No such property: LIBFOLDERNAME for class: groovy.lang.Binding
  at groovy.lang.Binding.getVariable(Binding.java:63)
  at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:224)
  at org.kohsuke.groovy.sandbox.impl.Checker$4.call(Checker.java:241)
  at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:238)
  at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:221)
  at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:221)
  at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:28)
  at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
  at WorkflowScript.run(WorkflowScript:84)
  ... and much more!

So my advice is to only use the interpolation version when you can guarantee the parameters will exist.
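A null-safe pattern that builds on this - a small sketch of my own rather than something from the original script - is to combine params with Groovy's Elvis operator, so a missing parameter falls back to a sensible default:

// params.<ParameterName> returns null for undefined parameters, so ?: supplies a default
def libFolderName = params.LIBFOLDERNAME ?: 'src'
def signOnly      = params.SIGNONLY ?: false

// by contrast, this form throws MissingPropertyException if the parameter is undefined:
// def libFolderName = "${LIBFOLDERNAME}"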

Adapting the previous script

In my first attempt at creating the pipeline job, I had a block of variables defined at the top of the script so I could easily edit them when creating the next pipeline. I'm now going to adapt that block to use parameters.

def libName                  = params.LIBNAME
def testLibName              = params.TESTLIBNAME

// multi-line parameter holding zero or more extra locations to check out (split later)
def additionalCheckoutRemote = params.EXTRACHECKOUTREMOTE

def sourceRoot  = 'source\\Libraries\\'

def slnPath     = "${WORKSPACE}\\${sourceRoot}${libName}\\"
def slnName     = "${slnPath}${libName}.sln"
def projPath    = combinePath(slnPath, params.LIBFOLDERNAME)
def projName    = "${projPath}${libName}.csproj"
def testsPath   = combinePath(slnPath, params.TESTLIBFOLDERNAME)

def hasTests    = testLibName != null && testLibName.length() > 0

I'm using params to access the parameters to avoid any interpolation crashes. As it's possible the path parameters could be missing or empty, I'm also using a combinePath helper function. This is a very naive implementation and should probably be made a little more robust. Although Java has a File object which we could use, it is blocked by default as Jenkins runs scripts in a sandbox, and as I don't think turning off security features is particularly beneficial, this simple implementation will serve the requirements of my build jobs easily enough.

def combinePath(path1, path2)
{
  def result
  
  // This is a somewhat naive implementation, but it's sandbox safe
  
  if(path2 == null || path2.length() == 0)
  {
    result = path1
  }
  else
  {
    result = path1 + path2
  }
  
  if(result.charAt(result.length() - 1) != '\\')
  {
    result += '\\'
  }
  
  return result
}

Note: the helper function must be placed outside of any node statements.
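To illustrate, the overall shape of the script file ends up something like this (structure only):

def combinePath(path1, path2)
{
  // ... helper body as above ...
}

node
{
  stage('Checkout')
  {
    // stages inside the node can call combinePath freely
  }
}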

Using multi-line string parameters

The multi-line string parameter is exactly the same as a normal string parameter; the difference simply seems to be the type of editor used to enter the value. So if you want to treat one as an array of values, you will need to build the array yourself using the split function.

if(additionalCheckoutRemote != null && additionalCheckoutRemote.length() > 0)
{
  def additionalCheckoutRemotes = additionalCheckoutRemote.split("\\r?\\n")

  // do stuff with the string array created above 
}

Performing multiple checkouts

Some of my projects are slightly naughty and pull code files from outside their respective library folders. The previous version of the script had these extra checkout locations hard-coded, but that clearly will no longer suffice. Instead, by leveraging the multi-line string parameters, I have let each job define zero or more locations and check them out that way.

I chose to use two parameters, one for the remote source and one for the local destination. This complicates things slightly, but I felt it was better than trying to munge both values into a single line.

if(additionalCheckoutRemote != null && additionalCheckoutRemote.length() > 0)
{
  def additionalCheckoutRemotes = additionalCheckoutRemote.split("\\r?\\n")
  def additionalCheckoutLocals  = params.EXTRACHECKOUTLOCAL.split("\\r?\\n")

  for (int i = 0; i < additionalCheckoutRemotes.size(); i++) 
  {
    checkout(changelog: false, poll: false, scm: 
      [
        $class: 'SubversionSCM', 
        additionalCredentials: [], 
        excludedCommitMessages: '', 
        excludedRegions: '', 
        excludedRevprop: '', 
        excludedUsers: '', 
        filterChangelog: false, 
        ignoreDirPropChanges: true, 
        includedRegions: '', 
        locations: [[credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: additionalCheckoutLocals[i], remote: svnRoot + additionalCheckoutRemotes[i]]], 
        workspaceUpdater: [$class: 'UpdateWithCleanUpdater']
      ]
    )
  }
}

I simply parse the two parameters and issue a checkout command for each pair. It would possibly make more sense to issue a single checkout command with multiple locations, but this way got the command up and running with minimum fuss.
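For reference, the single-command variant would look something like the sketch below - building the locations list up front and passing it in one call. It follows the same SubversionSCM structure as above, but I haven't tested it:

def locations = []

for (int i = 0; i < additionalCheckoutRemotes.size(); i++)
{
  locations << [credentialsId: '<SVNCREDENTIALSID>', depthOption: 'infinity', ignoreExternalsOption: true, local: additionalCheckoutLocals[i], remote: svnRoot + additionalCheckoutRemotes[i]]
}

checkout(changelog: false, poll: false, scm:
  [
    $class: 'SubversionSCM',
    ignoreDirPropChanges: true,
    locations: locations,
    workspaceUpdater: [$class: 'UpdateWithCleanUpdater']
  ]
)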

Running the tests

As not all my libraries have dedicated tests yet, I had defined a hasTests variable at the top of the script which will be true if the TESTLIBNAME parameter has a value. I could then use this to exclude the NUnit execution and publish steps from my earlier script, but that would still mean an empty Test stage would be present. Somewhat to my surprise, I found wrapping the stage statement in an if block works absolutely fine, although it has a bit of an odour. It does mean that empty test stages won't be displayed, though.

if(hasTests)
{
  stage('Test')
  {
    try
    {
      // call nunit2
      // can't use version 3 as the results plugin doesn't support the v3 output XML format
      bat("${nunitRunner} \"${testsPath}bin/${config}/${testLibName}.dll\" /xml=\"./testresults/${testLibName}.xml\" /nologo /nodots /framework:net-4.5")
    }
    finally
    {
      // as no subsequent stage will be run if the tests fail, make sure we publish the results regardless of outcome
      // http://stackoverflow.com/a/40609116/148962
      step([$class: 'NUnitPublisher', testResultsPattern:'testresults/*.xml', debug: false, keepJUnitReports: true, skipJUnitArchiver: false, failIfNoResults: true])
    }
  }
}

Those were pretty much the only modifications I made to the existing script to convert it from something bound to a specific project to something I could use in multiple projects.

Archiving the artefacts

Build artefacts published to Jenkins

In my original article, I briefly mentioned that one of the things I wanted the script to do was archive the build artefacts, but then never mentioned it again. That was simply because I couldn't get the command to work and forgot to say so in the post. As it happens, I realised what was wrong while working on the improved version - I'd made all the paths in the script absolute, but this command requires them to be relative to the workspace.

The following command will archive the contents of the library's output folder along with the generated NuGet package.

archiveArtifacts artifacts: "${sourceRoot}${libName}\\${LIBFOLDERNAME}\\bin\\${config}\\*,nuget\\*.nupkg", caseSensitive: false, onlyIfSuccessful: true

Updating the pipeline to use a "Jenkinsfile"

Now that I've got a (for the moment!) final version of the script, it's time to add it to SVN and then tell Jenkins where to find it. This way, all pipeline jobs can use the one script and automatically inherit any changes to it.

The steps below will configure an existing pipeline job to use a script file taken from SVN.

  • In the Pipeline section of your job's properties, set the Definition field to be Pipeline script from SCM
  • Select Subversion from the SCM field
  • Set the Repository URL to the location where the script is located
  • Specify credentials as appropriate
  • Click Advanced to show advanced settings
  • Check the Ignore Property Changes on directories option
  • Enter .* in the Excluded Regions field
  • Set the Script Path field to match the filename of your groovy script
  • Click Save to save the job details

Now instead of using an in-line script, the pipeline will pull the script right out of version control.

There are a couple of things to note however

  • This repository becomes part of the polling of the job (if polling is configured). Changing the Ignore Property Changes on directories and Excluded Regions settings will prevent changes to the script from triggering unnecessary rebuilds
  • The specified repository is checked out into a sub-folder of the job data named workspace@script. In other words, it is checked out directly into your Jenkins installation. Originally I located the script in my \build folder along with all other build files, until I noticed all those files were being checked out into multiple server paths rather than the temporary workspaces. My advice therefore is to put the script by itself in a folder so that it is the only file that is checked out, and perhaps change the Repository depth field to files.

It is worth reiterating the point: the contents of this folder will be checked out onto the server where you have installed Jenkins, not into slave workspaces.

Cloning the pipeline

As it got a little tiresome creating the jobs manually over and over again, I ended up creating a dummy pipeline for testing. I created a new pipeline project, defined all the variables and then populated these based on the requirements of one of my libraries. Then I'd try and build the project.

If (or once) the build was successful I'd clone that template project as the "official" pipeline, then update the template pipeline for the next project. Rinse and repeat!

To create a new pipeline based on an existing job

  • From the Jenkins dashboard, choose New Item
  • Enter a unique name
  • Scroll to the bottom of the page, and in Copy from field, start typing the name of your template job - when the autocomplete lists your job, click it or press Tab
  • Click OK to create the new job

Using this approach saved me a ton of work setting up quite a few pipeline jobs.

Are we done yet?

My Jenkins dashboard showing 19 parameterised pipeline jobs running from one script

Of course, as I was finalising the draft of this post it occurred to me that with a bit more work I could actually get rid of virtually all the parameters I'd just added.

  • All my pipeline projects are named after the library, so I could discard the LIBNAME parameter in favour of the built-in JOB_BASE_NAME environment variable (see the sketch after this list)
  • Given the relevant test projects are all named <ProjectName>.Tests, I could auto-generate that value and use the fileExists command to detect whether a test project is present
  • The LIBFOLDERNAME and TESTLIBFOLDERNAME parameters are required because not all my libraries are consistent with their paths - some are directly in /src, some are in /src/<ProjectName>, and so on. Spending a little time reworking the file system to be consistent means I could drop another two parameters
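A rough sketch of how those first two ideas could look, assuming the consistent layout described above (none of this is in the current script, and the .csproj location is a guess based on the folder conventions earlier in the post):

node
{
  // the job name doubles as the library name
  def libName     = env.JOB_BASE_NAME
  def testLibName = "${libName}.Tests"
  def sourceRoot  = 'source\\Libraries\\'

  // fileExists resolves relative to the workspace, so it must run inside a node
  def hasTests    = fileExists("${sourceRoot}${libName}\\tests\\${testLibName}.csproj")
}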

Happily, thanks to having all the builds running from one script, when I get around to making these improvements there will be only one script to update (excluding deleting the obsolete parameters, of course).

And this concludes my second article on Jenkins pipelines; as always, comments are welcome.

Update History

  • 2017-01-20 - First published
  • 2020-11-21 - Updated formatting

Downloads

jenkins-nuget-pipeline-example-v2.groovy - sample script for the Using parameters with Jenkins pipeline builds blog post. Released 2017-01-20.

  • sha256: 66f2a49c13ad72784c5e580173e35a1b9b04784b653ddc7eb177645a9ad28205

