Background: Jenkins is running inside a docker container, which works great, but by design we want all build processes to run inside docker containers to minimize the software installed inside the Jenkins container.
Problem: How do I build a 3-stage process using two different docker containers, where all three steps share files?
Step 1: Build
- npm build
Step 2: Test
- npm test
Step 3: Run AWS code-deploy
- aws deploy push --application-name app-name --s3-location s3://my-bucket/app-name --ignore-hidden-files
- aws deploy create-deployment --application-name app-name --s3-location bucket=my-bucket,key=app-name,bundleType=zip --deployment-group-name dg
How do I break up the Jenkins file into multiple stages, and share the output from the first stage for the other stages?
Simple two-stage Jenkinsfile
pipeline {
    agent {
        docker {
            image 'node:10.8.0'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
}
But when I add the third stage, things get more interesting, since I can no longer use a single global docker agent:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker { image 'node:10.8.0' }
            }
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            agent {
                docker { image 'node:10.8.0' }
            }
            steps {
                sh 'npm test'
            }
        }
        stage('Deploy') {
            agent {
                docker { image 'coreos/awscli' }
            }
            steps {
                sh 'echo "Deploying to AWS"'
                sh 'aws help'
            }
        }
    }
}
In the above, 'npm test' fails because each stage runs in a fresh container and the output of the Build stage is lost. Likewise, the CodeDeploy step can't work, because all the build artifacts are gone by the time the Deploy stage runs.
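One thing I have looked at (a sketch only, not something I have verified end to end) is Jenkins' built-in stash/unstash steps, which copy files out of one stage's workspace and restore them in another:

stage('Build') {
    agent {
        docker { image 'node:10.8.0' }
    }
    steps {
        sh 'npm install'
        // Stash everything the build produced; narrow the pattern
        // (e.g. 'dist/**') if only part of the workspace is needed later.
        stash name: 'built', includes: '**/*'
    }
}
stage('Test') {
    agent {
        docker { image 'node:10.8.0' }
    }
    steps {
        // Restore the Build stage's files into this stage's fresh workspace.
        unstash 'built'
        sh 'npm test'
    }
}

My concern is that stash is documented for relatively small files, and stashing an entire node_modules tree on every build could be slow.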
One workaround that at least gets the tests passing is to combine them into a single 'Build And Test' stage that uses one image, but this loses some of the advantages of having separate stages.
pipeline {
    agent none
    stages {
        stage('Build And Test') {
            agent {
                docker { image 'node:10.8.0' }
            }
            steps {
                sh 'npm install'
                sh 'npm test'
            }
        }
        stage('Deploy') {
            agent {
                docker { image 'coreos/awscli' }
            }
            steps {
                sh 'echo "Deploying to AWS"'
                sh 'aws help'
            }
        }
    }
}
One other (super ugly) solution is to create a custom image that has both node and aws installed, but that means every time we migrate to a newer version of either node and/or aws, we have to build another docker image with the updated version, when really they are completely separate tasks.
The other solution is to mount a shared volume into all of the containers, but how do I create a 'temporary' volume that is only shared for this build, and that gets deleted after the build completes?
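To make that last idea concrete, here is the kind of thing I have in mind (untested; the volume name derived from BUILD_TAG and the cleanup node label are my own guesses). The docker agent's args parameter mounts a named volume into every stage's container, and a post block removes it when the pipeline finishes:

pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'node:10.8.0'
                    // Mount a per-build named volume; BUILD_TAG is unique
                    // per job + build number, so builds don't collide.
                    args "-v ${env.BUILD_TAG}:/shared"
                }
            }
            steps {
                sh 'npm install'
            }
        }
        // ... Test and Deploy stages mount the same volume via the same args ...
    }
    post {
        always {
            node('master') {
                // Delete the temporary volume once the build completes.
                sh "docker volume rm -f ${env.BUILD_TAG}"
            }
        }
    }
}

Is something along these lines the right approach, or is there a cleaner built-in way?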