
Background: Jenkins is running inside a docker container, which works great, but by design we want all build processes to run inside docker containers to minimize the software installed inside the Jenkins container.

Problem: How do I build a 3-stage process using two different docker containers, where all three steps share files?

Step 1: Build

  • npm build

Step 2: Test

  • npm test

Step 3: Run AWS CodeDeploy

  • aws deploy push --application-name app-name --s3-location s3://my-bucket/app-name --ignore-hidden-files
  • aws deploy create-deployment --application-name app-name --s3-location bucket=my-bucket,key=app-name,bundleType=zip --deployment-group-name dg

How do I break up the Jenkins file into multiple stages, and share the output from the first stage for the other stages?

Simple two-stage Jenkinsfile

pipeline {
  agent {
    docker {
      image 'node:10.8.0'
    }
  }
  stages {
    stage('Build') {
      steps {
        sh 'npm install'
      }
    }
    stage('Test') {
      steps {
        sh 'npm test'
      }
    }
  }
}

But when I add the third stage, things get more interesting, since I can no longer use a single global docker agent:

pipeline {
  agent none
  stages {
    stage('Build') {
      agent {
        docker { image 'node:10.8.0' }
      }
      steps {
        sh 'npm install'
      }
    }
    stage('Test') {
      agent {
        docker { image 'node:10.8.0' }
      }
      steps {
        sh 'npm test'
      }
    }
    stage('Deploy') {
      agent {
        docker { image 'coreos/awscli' }
      }
      steps {
        sh 'echo "Deploying to AWS"'
        sh 'aws help'
      }
    }
  }
}

In the above, 'npm test' fails because the test stage runs in its own fresh container, so the node_modules produced by the build stage are gone by the time the tests run. Likewise, the CodeDeploy stage wouldn't work, because all the build artifacts are lost.

One workaround to get the tests working is to combine them into a single 'Build And Test' stage that uses one image, but this loses some of the advantages of having separate stages.

pipeline {
  agent none
  stages {
    stage('Build And Test') {
      agent {
        docker { image 'node:10.8.0' }
      }
      steps {
        sh 'npm install'
        sh 'npm test'
      }
    }
    stage('Deploy') {
      agent {
        docker { image 'coreos/awscli' }
      }
      steps {
        sh 'echo "Deploying to AWS"'
        sh 'aws help'
      }
    }
  }
}

One other (super ugly) solution is to create a custom image that has both node and aws installed, but that means every time we migrate to a newer version of either node or the AWS CLI, we have to build yet another docker image with the updated version, when really they are completely separate tasks.
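For reference, that combined-image approach would look something like the sketch below, where 'mycompany/node-awscli' is a hypothetical image you would have to build and maintain yourself with both node and the AWS CLI baked in:

pipeline {
  // Hypothetical custom image with both node and the AWS CLI installed;
  // every node or CLI upgrade means rebuilding and re-tagging this image.
  agent {
    docker { image 'mycompany/node-awscli' }
  }
  stages {
    stage('Build') {
      steps {
        sh 'npm install'
      }
    }
    stage('Test') {
      steps {
        sh 'npm test'
      }
    }
    stage('Deploy') {
      steps {
        sh 'aws deploy push --application-name app-name --s3-location s3://my-bucket/app-name --ignore-hidden-files'
      }
    }
  }
}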

The other solution is to mount a shared volume into all the containers, but how do I create a 'temporary' volume that is only shared for this build, and that gets deleted after the build completes?
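For completeness, here is an untested sketch of that shared-volume idea, using the docker agent's args parameter to mount a named volume into each container. The volume name 'app-build-cache' and the 'docker-host' node label are hypothetical, the cleanup step assumes some agent has the docker CLI and can reach the Docker daemon, and in practice the volume name should be made unique per build (e.g. derived from BUILD_TAG):

pipeline {
  agent none
  stages {
    stage('Build') {
      agent {
        docker {
          image 'node:10.8.0'
          args '-v app-build-cache:/shared'   // extra 'docker run' arguments: mount the named volume
        }
      }
      steps {
        sh 'npm install'
        sh 'cp -r node_modules /shared/'      // copy build output onto the shared volume
      }
    }
    stage('Deploy') {
      agent {
        docker {
          image 'coreos/awscli'
          args '-v app-build-cache:/shared'   // same volume, different container
        }
      }
      steps {
        sh 'cp -r /shared/node_modules .'
        sh 'aws help'
      }
    }
  }
  post {
    always {
      script {
        // Hypothetical label for an agent that has the docker client installed
        node('docker-host') {
          sh 'docker volume rm app-build-cache || true'
        }
      }
    }
  }
}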

1 Answer

Just after I posted the question, I found an article from a Jenkins developer:

https://jenkins.io/blog/2018/07/02/whats-new-declarative-piepline-13x-sequential-stages/

This is what I came up with:

pipeline {
  agent none
  stages {
    stage('Build and Test') {
      agent {
        docker 'node:10.8.0'
      }
      stages {
        stage('Build') {
          steps {
            sh 'npm install'
          }
        }
        stage('Test') {
          steps {
            sh 'npm test'
          }
        }
      }
      post {
        success {
          stash name: 'artifacts', includes: "node_modules/**/*"
        }
      }
    }

    stage('Deploy') {
      agent {
        docker 'coreos/awscli'
      }
      steps {
        unstash 'artifacts'
        sh 'echo "Deploying to AWS"'
        sh 'aws help'
      }
    }
  }
}

Jenkins now allows multiple sequential stages to be nested inside a stage in a declarative pipeline, and I didn't know about the 'stash/unstash' steps, which work great for me.
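To tie this back to the original steps, the Deploy stage would presumably run the real CodeDeploy commands from the question instead of 'aws help'. AWS credential handling is omitted here; this assumes the awscli container can already authenticate, for example via an instance profile:

    stage('Deploy') {
      agent {
        docker 'coreos/awscli'
      }
      steps {
        unstash 'artifacts'
        sh 'aws deploy push --application-name app-name --s3-location s3://my-bucket/app-name --ignore-hidden-files'
        sh 'aws deploy create-deployment --application-name app-name --s3-location bucket=my-bucket,key=app-name,bundleType=zip --deployment-group-name dg'
      }
    }

One thing to keep in mind: stash/unstash copies the files via the Jenkins controller, so stashing a large node_modules tree can be slow; for big artifacts an external artifact store may be a better fit.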
