diff --git a/docs/continuous-delivery/cd-infrastructure/aws-cdk/aws-cdk.md b/docs/continuous-delivery/cd-infrastructure/aws-cdk/aws-cdk.md
index 05b70f1f12b..aadd59c60bc 100644
--- a/docs/continuous-delivery/cd-infrastructure/aws-cdk/aws-cdk.md
+++ b/docs/continuous-delivery/cd-infrastructure/aws-cdk/aws-cdk.md
@@ -217,6 +217,16 @@ You can access the AWS CDK plugin images from the following repositories:
 
 Harness also supports **`amd64`** architecture for these plugin images. You can find the corresponding tags (such as `harness/aws-cdk-plugin:1.3.0-2.1019.2-linux-amd64-unified`) on [Docker Hub](https://hub.docker.com/r/harness/aws-cdk-plugin/tags?name=amd64).
 
+Harness releases new AWS CDK Plugin images once every 3 months. If you want to use the latest AWS CDK Plugin images, you can build your own image using the [AWS CDK Plugin Image Builder](/docs/continuous-delivery/cd-infrastructure/aws-cdk/cdk-image-build).
+
+
+### Build your own image
+
+You can also build your own image based on the base image provided by Harness and use it in a step. For example, if your CDK app depends on a specific CDK version, you can start from the Harness base image and create an image that contains your dependencies.
+
+For more information, go to [Build your own image](/docs/continuous-delivery/cd-infrastructure/aws-cdk/cdk-image-build).
+
+
 ## Git Clone step
 
 The Git Clone step is the first stage **Execution** step added to the containerized step group for Harness CDK.
diff --git a/docs/continuous-delivery/cd-infrastructure/aws-cdk/cdk-image-build.md b/docs/continuous-delivery/cd-infrastructure/aws-cdk/cdk-image-build.md
new file mode 100644
index 00000000000..2832c96f367
--- /dev/null
+++ b/docs/continuous-delivery/cd-infrastructure/aws-cdk/cdk-image-build.md
@@ -0,0 +1,343 @@
+---
+title: Building AWS CDK Runtime Images
+description: A reusable Harness pipeline to help you build customized AWS CDK plugin images. 
+tags: + - aws-cdk-image + - plugin-builder +sidebar_position: 2 +--- + +This page provides a Harness CD pipeline designed to help you build your own Docker images for the AWS CDK plugin. + +The purpose of this pipeline is to give you flexibility—so you can adopt newer AWS CDK versions or tailor runtime environments to your needs. + +## What This Pipeline Does + +This pipeline automates building AWS CDK images for different programming languages using Harness. It enables you to keep up with the latest CDK runtimes and apply customizations as required for your projects. + +You can find the full pipeline YAML in the [Pipeline YAML](#pipeline-yaml) section below. + +## Key Pre-requisites + +- **Kubernetes Cluster & Connector:** You must have a Kubernetes cluster set up (using `KubernetesDirect` infrastructure). The cluster must allow privileged containers. + - *Managed K8s (such as GKE):* Do **not** use GKE Autopilot clusters—use a standard node pool that allows privileged mode. + - Set up a Kubernetes Cluster connector in Harness referencing your cluster. +- **Docker Registry & Git Connectors:** Properly configure connectors for Docker registries (`account.dockerhub` or your own) and any required Git repos. +- **Secrets & Variables:** Store Docker registry credentials and secret variables in Harness secrets management. +- **Pipeline Variables:** Be ready to set variables like `VERSION`, `AWS_CDK_VERSION`, `ARCH`, and `TARGET_REPO` at runtime or with defaults. + +You can get the latest CDK version from the [AWS CDK NPM page](https://www.npmjs.com/package/aws-cdk). 
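As a pre-flight step, you can confirm the CDK CLI version you intend to pass to the builder. A minimal sketch, assuming you run it on a workstation before triggering the pipeline; the version shown is an example, and the format check is an extra safeguard rather than something the pipeline requires:

```shell
# Latest published CDK CLI version (requires npm and network access):
#   npm view aws-cdk version
# Offline, sanity-check the value you plan to pass as the AWS_CDK_VERSION variable:
AWS_CDK_VERSION="2.1029.1"  # example value; substitute your own
if printf '%s' "$AWS_CDK_VERSION" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "AWS_CDK_VERSION looks valid: $AWS_CDK_VERSION"
else
  echo "Unexpected AWS_CDK_VERSION format: $AWS_CDK_VERSION" >&2
  exit 1
fi
```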
+ +## Supported Runtimes and Base Images + +The pipeline builds images for the following runtime environments: + +- **Python:** Python3, pip, bash, curl, git, Node.js 20, AWS CDK CLI +- **Java:** OpenJDK 11, Maven 3.9.11, bash, curl, git, Node.js 20, AWS CDK CLI +- **DotNet:** .NET runtime and dependencies, bash, icu-libs, git, Node.js 20, AWS CDK CLI +- **Go:** Bash, curl, git, Node.js 20, AWS CDK CLI + +All runtime images derive from the supported Harness [base plugin images](https://hub.docker.com/r/harness/aws-cdk-plugin/tags) and runtime-specific Node.js OS base images. + +**Example image tag format**: +`harness/aws-cdk-plugin:---linux-` + +## Pipeline Steps and Execution Flow + +1. **Authentication Setup:** Creates Docker config for registry authentication. +2. **Dockerfile Generation:** Dynamically generates Dockerfiles per runtime: + - Multi-stage (base + runtime image) + - Plugin and scripts copied from base image + - Installs language runtimes and AWS CDK + - Configures Node.js, metadata, and entrypoint +3. **Image Build and Push:** Uses Docker to build and push tagged runtime images. + +## Privileged Mode Requirement + +Certain pipeline steps (such as Docker-in-Docker for image build and push) require privileged execution. +**Privileged steps** are not standard pipeline steps—they run with escalated permissions and must be explicitly enabled with `privileged: true` in the pipeline YAML. + +**How to enable privileged mode:** +Set `privileged: true` in your step group or individual step under `spec`. +Your Kubernetes cluster must be configured to allow privileged containers. + +```yaml +stepGroup: + privileged: true + name: k8s-step-group + sharedPaths: + - /var/run + - /var/lib/docker +``` + +For individual steps: +``` +step: + name: dinD + privileged: true + ... +``` +Without this setting, Docker builds and image pushes may fail due to insufficient permissions inside the container. + +## Quick Start + +1. 
Copy the [pipeline YAML](/docs/continuous-delivery/cd-infrastructure/aws-cdk/cdk-image-build#pipeline-yaml) into your Harness Project. +2. Add an empty/do-nothing service to the pipeline. +3. Configure a Kubernetes environment in Harness. +4. In the **Execution section**, enable **container-based execution** in the **step group**. Add the Kubernetes cluster connector inside the container step group. Save the pipeline. +5. Click **Run Pipeline**. +6. Fill in all required variables (see [Pipeline Variables](#pipeline-variables)). + +## Pipeline Variables and Runtime Inputs + +Here are the variables that you have to set in the pipeline YAML and at runtime. + +### Pipeline variables + +| Variable | Description | Example | +| ---------------- | ---------------------------------- | ------------------------- | +| `TARGET_REPO` | Docker repository | `harness/aws-cdk-plugin` | +| `DOCKER_USERNAME`| Docker registry username | `your-dockerhub-username` | +| `DOCKER_PASSWORD`| Docker registry password/token | *(from secrets)* | + +### Runtime inputs + +| Variable | Description | Example | +| ---------------- | ---------------------------------- | ------------------------- | +| `VERSION` | Harness base image version | `1.4.0` | +| `AWS_CDK_VERSION`| AWS CDK CLI version | `2.1029.1` | +| `ARCH` | Image build architecture | `amd64` or `arm64` | + + +
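Together, these inputs determine the tags the build script pushes, one per runtime. An illustrative sketch of the naming scheme (the values are examples, not defaults you must use):

```shell
# Tag format used by the build script:
#   <TARGET_REPO>:<runtime>-<VERSION>-<AWS_CDK_VERSION>-linux-<ARCH>
VERSION="1.4.0"
AWS_CDK_VERSION="2.1029.1"
ARCH="amd64"
TARGET_REPO="harness/aws-cdk-plugin"

for RUNTIME in python java dotnet go; do
  echo "${TARGET_REPO}:${RUNTIME}-${VERSION}-${AWS_CDK_VERSION}-linux-${ARCH}"
done
# First line printed: harness/aws-cdk-plugin:python-1.4.0-2.1029.1-linux-amd64
```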
+ +
+ +## Pipeline YAML + +This is the YAML for the AWS CDK image build pipeline. You can copy and paste it into your Harness Project. + +This is how the stage would look in the UI: + +
+ +
+ +
+
+Pipeline YAML
+
+Parameters to change after you copy the pipeline YAML and paste it in your Harness Project:
+- `projectIdentifier`, `orgIdentifier`, `environmentRef`, `infrastructureDefinitions`, `connectorRef` - docker-connector, `connectorRef` - k8s-connector.
+
+```yaml
+pipeline:
+  projectIdentifier: your_project_identifier
+  orgIdentifier: your_org_identifier
+  tags: {}
+  stages:
+    - stage:
+        identifier: cdk
+        type: Deployment
+        name: cdk
+        spec:
+          deploymentType: Kubernetes
+          service:
+            serviceRef: service
+          environment:
+            environmentRef: your_environment_identifier
+            deployToAll: false
+            infrastructureDefinitions:
+              - identifier: your_infrastructure_identifier
+          execution:
+            steps:
+              - stepGroup:
+                  identifier: build
+                  name: build
+                  sharedPaths:
+                    - /var/run
+                    - /var/lib/docker
+                  steps:
+                    - step:
+                        type: Background
+                        name: dinD
+                        identifier: Background
+                        spec:
+                          connectorRef: your_connector_identifier
+                          image: docker:24.0-dind
+                          shell: Sh
+                          privileged: true
+                    - step:
+                        type: Run
+                        name: Build and push
+                        identifier: Run_2
+                        spec:
+                          connectorRef: your_connector_identifier
+                          image: docker:24.0-dind
+                          shell: Sh
+                          command: |-
+                            #!/bin/bash
+                            set -euo pipefail
+                            # Install common dependencies once - git/node/python/bash etc.
+                            apk add --no-cache bash icu-libs krb5-libs libgcc libintl libssl3 libstdc++ zlib git curl python3 py3-pip
+                            export VERSION="<+pipeline.variables.VERSION>"
+                            export AWS_CDK_VERSION="<+pipeline.variables.AWS_CDK_VERSION>"
+                            export ARCH="<+pipeline.variables.ARCH>"
+                            export TARGET_REPO="<+pipeline.variables.TARGET_REPO>"
+                            DOCKER_USERNAME=<+pipeline.variables.DOCKER_USERNAME>
+                            DOCKER_PASSWORD=<+pipeline.variables.DOCKER_PASSWORD>
+                            SOURCE_REGISTRY="harness/aws-cdk-plugin"
+                            SCRATCH_IMAGE="${SOURCE_REGISTRY}:${VERSION}-base-${ARCH}"
+                            docker version
+                            docker info
+                            echo "Logging into docker registry"
+                            echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
+                            echo "Pulling base scratch image: ${SCRATCH_IMAGE}"
+                            docker pull "${SCRATCH_IMAGE}"
+                            # ##### Python image #####
+                            PY_IMAGE="${TARGET_REPO}:python-${VERSION}-${AWS_CDK_VERSION}-linux-${ARCH}"
+                            cat > Dockerfile.python << EOF
+                            FROM ${SCRATCH_IMAGE} as scratch-content
+                            FROM node:20-alpine3.16
+                            COPY --from=scratch-content /opt/harness/plugin /opt/harness/aws-cdk-plugin
+                            COPY --from=scratch-content /opt/harness/scripts /opt/harness/scripts
+                            RUN chmod +x /opt/harness/aws-cdk-plugin /opt/harness/scripts/run.sh
+                            RUN apk add --no-cache python3 py3-pip bash curl git
+                            RUN pip3 install --upgrade pip
+                            RUN node --version && npm --version
+                            RUN npm install -g aws-cdk@${AWS_CDK_VERSION}
+                            RUN cdk --version
+                            LABEL org.label-schema.runtime="python"
+                            ENTRYPOINT ["/opt/harness/scripts/run.sh"]
+                            EOF
+                            echo "Building Python runtime image"
+                            docker build -t "${PY_IMAGE}" -f Dockerfile.python .
+                            echo "Pushing Python runtime image"
+                            docker push "${PY_IMAGE}"
+                            ##### Java image #####
+                            JAVA_IMAGE="${TARGET_REPO}:java-${VERSION}-${AWS_CDK_VERSION}-linux-${ARCH}"
+                            MAVEN_VERSION=3.9.11
+                            cat > Dockerfile.java << EOF
+                            FROM ${SCRATCH_IMAGE} as scratch-content
+                            FROM node:20-alpine3.16
+                            # Copy plugin binary to expected path matching run.sh
+                            COPY --from=scratch-content /opt/harness/plugin /opt/harness/aws-cdk-plugin
+                            COPY --from=scratch-content /opt/harness/scripts /opt/harness/scripts
+                            RUN chmod +x /opt/harness/aws-cdk-plugin /opt/harness/scripts/run.sh
+                            RUN apk add --no-cache openjdk11-jre curl bash git
+                            RUN curl -LO https://dlcdn.apache.org/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz && \\
+                                tar -xzf apache-maven-${MAVEN_VERSION}-bin.tar.gz -C /usr/local && \\
+                                rm apache-maven-${MAVEN_VERSION}-bin.tar.gz
+                            ENV PATH=/usr/local/apache-maven-${MAVEN_VERSION}/bin:\$PATH
+                            RUN java -version
+                            RUN mvn -v
+                            RUN node --version && npm --version
+                            RUN npm install -g aws-cdk@${AWS_CDK_VERSION}
+                            RUN cdk --version
+                            LABEL org.label-schema.runtime="java"
+                            ENTRYPOINT ["/opt/harness/scripts/run.sh"]
+                            EOF
+                            echo "Building Java runtime image"
+                            docker build -t "${JAVA_IMAGE}" -f Dockerfile.java .
+                            echo "Pushing Java runtime image"
+                            docker push "${JAVA_IMAGE}"
+                            echo "✅ Java runtime image built and pushed successfully."
+                            # ##### Dotnet image #####
+                            DOTNET_IMAGE="${TARGET_REPO}:dotnet-${VERSION}-${AWS_CDK_VERSION}-linux-${ARCH}"
+                            cat > Dockerfile.dotnet << EOF
+                            FROM ${SCRATCH_IMAGE} as scratch-content
+                            FROM node:20-alpine3.16
+                            COPY --from=scratch-content /opt/harness/plugin /opt/harness/aws-cdk-plugin
+                            COPY --from=scratch-content /opt/harness/scripts /opt/harness/scripts
+                            RUN chmod +x /opt/harness/aws-cdk-plugin /opt/harness/scripts/run.sh
+                            RUN apk add --no-cache bash icu-libs krb5-libs libgcc libintl libssl3 libstdc++ zlib curl nodejs npm git
+                            RUN echo "http://dl-3.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
+                            RUN apk add --no-cache libgdiplus
+                            RUN node --version && npm --version
+                            RUN npm install -g aws-cdk@${AWS_CDK_VERSION}
+                            RUN cdk --version
+                            LABEL org.label-schema.runtime="dotnet"
+                            ENTRYPOINT ["/opt/harness/scripts/run.sh"]
+                            EOF
+                            echo "Building Dotnet runtime image"
+                            docker build -t "${DOTNET_IMAGE}" -f Dockerfile.dotnet .
+                            echo "Pushing Dotnet runtime image"
+                            docker push "${DOTNET_IMAGE}"
+                            # ##### Go image #####
+                            GO_IMAGE="${TARGET_REPO}:go-${VERSION}-${AWS_CDK_VERSION}-linux-${ARCH}"
+                            cat > Dockerfile.go << EOF
+                            FROM ${SCRATCH_IMAGE} as scratch-content
+                            FROM node:20-alpine3.16
+                            COPY --from=scratch-content /opt/harness/plugin /opt/harness/aws-cdk-plugin
+                            COPY --from=scratch-content /opt/harness/scripts /opt/harness/scripts
+                            RUN chmod +x /opt/harness/aws-cdk-plugin /opt/harness/scripts/run.sh
+                            RUN apk add --no-cache bash curl git nodejs npm
+                            RUN node --version && npm --version
+                            RUN npm install -g aws-cdk@${AWS_CDK_VERSION}
+                            RUN cdk --version
+                            LABEL org.label-schema.runtime="go"
+                            ENTRYPOINT ["/opt/harness/scripts/run.sh"]
+                            EOF
+                            echo "Building Go runtime image"
+                            docker build -t "${GO_IMAGE}" -f Dockerfile.go .
+                            echo "Pushing Go runtime image"
+                            docker push "${GO_IMAGE}"
+                            echo "All runtime images built and pushed successfully."
+                        description: Build and push images for all runtimes
+                  stepGroupInfra:
+                    type: KubernetesDirect
+                    spec:
+                      connectorRef: your_connector_identifier
+            rollbackSteps: []
+        failureStrategies:
+          - onFailure:
+              errors:
+                - AllErrors
+              action:
+                type: StageRollback
+  tags: {}
+  variables:
+    - name: VERSION
+      type: String
+      description: Version of the plugin (without 'v' prefix)
+      required: true
+      value: <+input>.default(1.4.0)
+    - name: AWS_CDK_VERSION
+      type: String
+      description: AWS CDK version to install
+      required: true
+      value: <+input>.default(2.1029.1)
+    - name: ARCH
+      type: String
+      description: Architecture to build for
+      required: true
+      value: <+input>.allowedValues(amd64,arm64)
+    - name: TARGET_REPO
+      type: String
+      description: Target registry URL
+      required: true
+      value: your_target_registry_url
+    - name: DOCKER_USERNAME
+      type: String
+      description: Registry username
+      required: true
+      value: your_registry_username
+    - name: DOCKER_PASSWORD
+      type: String
+      description: Registry password
+      required: true
+      value: <+secrets.getValue("your-docker-pat")>
+  identifier: cdk-build-push
+  name: cdkbuildandpush
+```
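Once pushed, a built image can be referenced from an AWS CDK step inside your containerized step group. A hypothetical, abbreviated snippet — the connector reference, image name, and `appPath` value are placeholders, not output of this pipeline:

```yaml
- step:
    type: AwsCdkSynth
    name: CDK Synth
    identifier: cdk_synth
    spec:
      connectorRef: your_docker_connector
      image: your_account/aws-cdk-plugin:python-1.4.0-2.1029.1-linux-amd64
      appPath: cdk-app
```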
+
+## Output Images
+
+After a successful build, you will have four tagged images in your target Docker repository:
+
+- **Python:** `{TARGET_REPO}:python-{VERSION}-{AWS_CDK_VERSION}-linux-{ARCH}`
+- **Java:** `{TARGET_REPO}:java-{VERSION}-{AWS_CDK_VERSION}-linux-{ARCH}`
+- **DotNet:** `{TARGET_REPO}:dotnet-{VERSION}-{AWS_CDK_VERSION}-linux-{ARCH}`
+- **Go:** `{TARGET_REPO}:go-{VERSION}-{AWS_CDK_VERSION}-linux-{ARCH}`
+
+Each image will include the required runtime, the AWS CDK CLI, and the Harness plugin—ready for production use.
diff --git a/docs/continuous-delivery/cd-infrastructure/aws-cdk/static/cdk-build-push-2.png b/docs/continuous-delivery/cd-infrastructure/aws-cdk/static/cdk-build-push-2.png
new file mode 100644
index 00000000000..bb7a1813135
Binary files /dev/null and b/docs/continuous-delivery/cd-infrastructure/aws-cdk/static/cdk-build-push-2.png differ
diff --git a/docs/continuous-delivery/cd-infrastructure/aws-cdk/static/cdk-image-pipeline.png b/docs/continuous-delivery/cd-infrastructure/aws-cdk/static/cdk-image-pipeline.png
new file mode 100644
index 00000000000..d3e06a016b1
Binary files /dev/null and b/docs/continuous-delivery/cd-infrastructure/aws-cdk/static/cdk-image-pipeline.png differ
diff --git a/docs/continuous-delivery/deploy-srv-diff-platforms/aws/sam-image-build.md b/docs/continuous-delivery/deploy-srv-diff-platforms/aws/sam-image-build.md
index 29ad027f45a..38ea092a0dc 100644
--- a/docs/continuous-delivery/deploy-srv-diff-platforms/aws/sam-image-build.md
+++ b/docs/continuous-delivery/deploy-srv-diff-platforms/aws/sam-image-build.md
@@ -1,14 +1,21 @@
 ---
-title: SAM Plugin Image Builder
-description: Build your AWS SAM plugin image using Harness.
+title: Building AWS SAM Runtime Images
+description: A reusable Harness pipeline to build customized AWS SAM images. 
+tags: + - aws-sam + - image-builder sidebar_position: 4 --- -# Overview +This page provides a Harness CD pipeline to help you build your own Docker images for the AWS SAM CLI. -This guide walks you through the process of building custom AWS SAM plugin images for use with Harness Continuous Delivery. By following these instructions, you can create compatible Docker images that combine the Harness SAM plugin with specific AWS Lambda runtime environments. +The purpose of this pipeline is to give you flexibility—so you can adopt newer AWS Lambda runtimes or tailor the image to your specific serverless application needs. -These custom images enable you to deploy serverless applications written in your preferred programming language while leveraging Harness deployment capabilities. +## What This Pipeline Does + +This pipeline automates building AWS SAM images for different programming languages using Harness. It enables you to keep up with the latest SAM versions and apply customizations as required for your projects. + +You can find the full pipeline YAML in the [Pipeline YAML](#pipeline-yaml) section below. ## Understanding SAM Runtimes @@ -21,17 +28,35 @@ Common SAM runtimes include: - **Ruby**: Versions like ruby3.2 - **Go**: Versions like go1.x -When you build your own image using the Harness pipeline, you're combining the Harness SAM plugin (which provides the integration with Harness CD) with a specific SAM runtime image from AWS. This allows you to deploy serverless applications written in your preferred programming language while leveraging Harness deployment capabilities. +When you build your own image using the Harness pipeline, you're combining the Harness SAM base image (which provides the integration with Harness CD) with a specific SAM runtime image from AWS. This allows you to deploy serverless applications written in your preferred programming language while leveraging Harness deployment capabilities. 
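The runtime and SAM version are taken from the base image reference itself. The exact extraction lives in the pipeline script; the following is only a standalone sketch of the idea using shell parameter expansion, with an example image reference:

```shell
# Full-format reference: public.ecr.aws/sam/build-<runtime>:<sam-version>-<timestamp>-x86_64
SAM_BASE_IMAGE="public.ecr.aws/sam/build-nodejs18.x:1.143.0-20250502200316-x86_64"

# Runtime: text between the last "build-" and the tag separator ":".
SAM_RUNTIME="${SAM_BASE_IMAGE##*/build-}"
SAM_RUNTIME="${SAM_RUNTIME%%:*}"

# SAM version: first dash-separated field of the tag.
SAM_TAG="${SAM_BASE_IMAGE##*:}"
SAM_VERSION="${SAM_TAG%%-*}"

VERSION="1.1.2"  # Harness base image version (example)
echo "aws-sam-plugin:${VERSION}-${SAM_RUNTIME}-${SAM_VERSION}-linux-amd64"
# → aws-sam-plugin:1.1.2-nodejs18.x-1.143.0-linux-amd64
```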
+ +## Key Components and pre-requisites + +This pipeline helps you build **custom AWS SAM images** using Harness, integrating the Harness SAM image with supported AWS Lambda runtimes. Key details about the pipeline include: + +- **Deployment Stage with Kubernetes Infrastructure** + - Uses a Deployment stage configured to run on Kubernetes. -## Key Components +- **Privileged Mode and Kubernetes Cluster Setup** + - The pipeline requires privileged mode enabled on the Kubernetes step group to support Docker-in-Docker (DinD) for building and pushing images. + - This mode grants necessary permissions to install and run Docker CLI commands and access the Docker daemon inside pipeline containers. + - When using managed Kubernetes services like GKE, **do not use Autopilot clusters**, which restrict privileged containers. + - Instead, use standard clusters with node pools configured to permit privileged pods. + - Connect your Kubernetes cluster to Harness via a Kubernetes Cluster connector with appropriate permissions. -- Uses a Deployment stage with Kubernetes infrastructure -- Runs in a step group with KubernetesDirect infrastructure -- Takes SAM base image from AWS ECR public gallery -- Extracts runtime and version information from the base image name -- Final image format: `aws-sam-plugin:${VERSION}-${SAM_RUNTIME}-${SAM_VERSION}-linux-amd64` +- **Use of Official AWS SAM Images** + - Pulls SAM base images exclusively from the [AWS ECR public gallery](https://gallery.ecr.aws/sam?page=1) for compatibility. -## Pipeline Runner Permissions and Privileged Settings +- **Automatic Extraction of Runtime and Version** + - The pipeline extracts runtime and version details directly from the SAM base image name. 
+ +- **Final Image Naming Convention** + - The final built images follow the pattern: + `aws-sam-plugin:{VERSION}-{SAM_RUNTIME}-{SAM_VERSION}-linux-amd64` + Example: `aws-sam-plugin:1.1.2-nodejs18.x-1.143.0-linux-amd64` + + +### Pipeline Runner Privileged Mode Requirement Certain steps in the pipeline require the Kubernetes pod to run in privileged mode. This is necessary for starting Docker daemons (DinD), building container images inside pipeline steps, and granting the permissions Docker needs at runtime. @@ -63,40 +88,32 @@ Without this setting, Docker builds and image pushes may fail due to insufficien ## Quick Start -1. Copy the provided pipeline yaml and paste it in your Harness Project. -2. Add an empty/do-nothing service to the pipeline. -3. Add a Kubernetes environment to the pipeline. -4. In the execution section, enable container-based execution and add the Kubernetes cluster connector to the pipeline. Save the pipeline. +1. Copy the provided [pipeline YAML](/docs/continuous-delivery/deploy-srv-diff-platforms/aws/sam-image-build#pipeline-yaml) and paste it in your Harness Project. +2. Add an **empty/do-nothing service** to the pipeline. +3. Add a **Kubernetes environment** to the pipeline. +4. In the **Execution section**, enable **container-based execution** in the **step group**. Add the Kubernetes cluster connector inside the container step group. Save the pipeline. 5. Click **Run Pipeline**. 6. Enter the required parameters: - **VERSION**: Version number for your plugin (e.g., `1.1.2`). With each new code change, a new tag and Docker image are published, letting users access specific plugin versions. - . You can find the Harness base image from [Harness DockerHub](https://hub.docker.com/r/harness/aws-sam-plugin/tags) + . 
You can find the Harness base image on [Harness DockerHub](https://hub.docker.com/r/harness/aws-sam-plugin/tags) - **SAM_BASE_IMAGE**: SAM base image from AWS ECR Gallery (e.g., `public.ecr.aws/sam/build-nodejs18.x:1.143.0-20250502200316-x86_64`). -## Base Image Requirements - -**SAM Base Image Format** - -The pipeline supports only full formats for the SAM base image: - -Full Format: `public.ecr.aws/sam/build-nodejs18.x:1.143.0-20250502200316-x86_64` +### Base Image Requirements -**SAM Base Image Requirements** - -:::warning Only official AWS SAM build images from the [AWS ECR Public Gallery](https://gallery.ecr.aws/sam?page=1) are supported. -::: - Only use SAM base images from: AWS ECR Gallery - SAM - Only `x86_64` architecture images are supported - Using different base images may cause library dependency issues - Non-standard base images may cause the plugin to not function as required -Example of supported base image: `public.ecr.aws/sam/build-nodejs18.x:1.143.0-20250502200316-x86_64` +**SAM Base Image Format** + +The pipeline supports only full formats for the SAM base image: -# Image Configuration +Full Format: `public.ecr.aws/sam/build-nodejs18.x:1.143.0-20250502200316-x86_64` -## Final Image Naming +#### Image Configuration The final image follows this naming pattern: ``` @@ -109,41 +126,51 @@ aws-sam-plugin:1.1.2-nodejs18.x-1.138.0-linux-amd64 ``` Where: -- `VERSION`: Your plugin version (e.g., `1.1.2`) +- `VERSION`: Harness base image (e.g., `1.1.2`) - `SAM_RUNTIME`: Runtime extracted from SAM base image (e.g., `nodejs18.x`) - `SAM_VERSION`: Version extracted from SAM base image (e.g., `1.143.0`) -## Variables Used in Privileged Steps +### Variables Used in Pipeline + +These variables are actively used in the pipeline for building and pushing the image that you need to configure: + +**Pipeline variables:** - **TARGET_REPO**, **DOCKER_USERNAME**, and **DOCKER_PASSWORD** are set once as pipeline-level variables. 
-These variables are actively used in the privileged steps of your pipeline for building and pushing the image that you need to con:
 | Variable        | Description                                 | Example                                                    | Required |
 |-----------------|---------------------------------------------|------------------------------------------------------------|----------|
-| VERSION         | Plugin image/version tag                    | `1.1.2`                                                    | Yes      |
-| BASE_IMAGE      | Reference to your built base image          | `harness/aws-sam-plugin:1.1.2-beta-base-image`             | Yes      |
-| SAM_BASE_IMAGE  | AWS SAM base image from ECR                 | `public.ecr.aws/sam/build-python3.12:1.143.0-20250822194415-x86_64` | Yes      |
 | TARGET_REPO     | Target Docker repository                    | `your_account/aws-sam-plugin`                              | Yes      |
 | DOCKER_USERNAME | Docker registry username                    | `your_dockerhub_username`                                  | Yes      |
 | DOCKER_PASSWORD | Docker registry password/token              | `your_dockerhub_pat`                                       | Yes      |
-TARGET_REPO, DOCKER_USERNAME and DOCKER_PASSWORD are the variables that you set one time in the pipeline. VERSION, BASE_IMAGE and SAM_BASE_IMAGE are the variables that you set every time you run the pipeline.
+**Runtime inputs:**
+
+| Variable | Description | Example | Required |
+|-----------------|---------------------------------------------|------------------------------------------------------------|----------|
+| VERSION | Harness base image version tag | `1.1.2` | Yes |
+| HARNESS_BASE_IMAGE | Reference to your built base image | `harness/aws-sam-plugin:1.1.2-beta-base-image` | Yes |
+| SAM_BASE_IMAGE | AWS SAM base image from ECR | `public.ecr.aws/sam/build-python3.12:1.143.0-20250822194415-x86_64` | Yes |
+
+### Pipeline YAML
 
-## Pipeline Configuration
+This is the YAML for the AWS SAM image build pipeline. You can copy and paste it into your Harness Project.
+
+This is how the stage would look in the UI:
+
+ +
Pipeline YAML -Additional parameters you need to change: - -- `projectIdentifier`: Your Harness project identifier -- `orgIdentifier`: Your Harness organization identifier -- `connectorRef`: Your Kubernetes cluster connector identifier -- `your_k8s_connector`: Your Kubernetes cluster connector identifier +Parameters to change after you copy the pipeline YAML and paste it in your Harness Project: +- `projectIdentifier`, `orgIdentifier`, `environmentRef`, `infrastructureDefinitions`, `connectorRef` - docker-connector, `connectorRef` - k8s-connector. ```yaml pipeline: name: sam-image-build - identifier: sam-image-build + identifier: samimagebuild projectIdentifier: your_project orgIdentifier: default tags: {} @@ -151,7 +178,7 @@ pipeline: - stage: name: combineImages identifier: combineImages - description: Combine scratch image with SAM base image and push to Docker + description: Combine Harness base image with SAM base image and push to Docker type: Deployment spec: deploymentType: Kubernetes @@ -201,7 +228,7 @@ pipeline: export TZ=UTC VERSION="${VERSION:-<+pipeline.variables.VERSION>}" - SCRATCH_IMAGE="${SCRATCH_IMAGE:-<+pipeline.variables.SCRATCH_IMAGE>}" + HARNESS_BASE_IMAGE="${HARNESS_BASE_IMAGE:-<+pipeline.variables.HARNESS_BASE_IMAGE>}" SAM_BASE_IMAGE="${SAM_BASE_IMAGE:-<+pipeline.variables.SAM_BASE_IMAGE>}" TARGET_REPO="${TARGET_REPO:-<+pipeline.variables.TARGET_REPO>}" DOCKER_USERNAME="<+pipeline.variables.DOCKER_USERNAME>" @@ -275,8 +302,8 @@ pipeline: mkdir -m 777 -p /opt/harness/scripts/ && \\ mkdir -m 777 -p /opt/harness/client-tools/ - COPY --from=${SCRATCH_IMAGE} /opt/harness/bin/harness-sam-plugin /opt/harness/bin/harness-sam-plugin - COPY --from=${SCRATCH_IMAGE} /opt/harness/scripts/ /opt/harness/scripts/ + COPY --from=${HARNESS_BASE_IMAGE} /opt/harness/bin/harness-sam-plugin /opt/harness/bin/harness-sam-plugin + COPY --from=${HARNESS_BASE_IMAGE} /opt/harness/scripts/ /opt/harness/scripts/ RUN chmod +x /opt/harness/bin/harness-sam-plugin && \\ chmod 
+x /opt/harness/scripts/sam-plugin.sh @@ -323,7 +350,7 @@ pipeline: - step: identifier: buildAndPushFinal type: Run - name: buildAndPushFinal + name: buildAndPushImage spec: connectorRef: account.dockerhub image: ubuntu:20.04 @@ -343,13 +370,13 @@ pipeline: # Define variables from pipeline variables VERSION="<+pipeline.variables.VERSION>" - SCRATCH_IMAGE="<+pipeline.variables.SCRATCH_IMAGE>" + HARNESS_BASE_IMAGE="<+pipeline.variables.HARNESS_BASE_IMAGE>" SAM_BASE_IMAGE="<+pipeline.variables.SAM_BASE_IMAGE>" TIMESTAMP="<+pipeline.variables.TIMESTAMP>" # Print all variables for debugging echo "VERSION: $VERSION" - echo "SCRATCH_IMAGE: $SCRATCH_IMAGE" + echo "HARNESS_BASE_IMAGE: $HARNESS_BASE_IMAGE" echo "SAM_BASE_IMAGE: $SAM_BASE_IMAGE" echo "TIMESTAMP: $TIMESTAMP" apt-get update && apt-get install -y docker.io @@ -394,9 +421,6 @@ pipeline: memory: 8Gi cpu: 4000m timeout: 30m - when: - stageStatus: Success - condition: "false" stepGroupInfra: type: KubernetesDirect spec: @@ -415,17 +439,17 @@ pipeline: type: String description: Plugin version (e.g., 1.1.2-beta) required: true - value: <+input> - - name: SCRATCH_IMAGE + value: <+input>.default(1.1.2) + - name: HARNESS_BASE_IMAGE type: String - description: Scratch image from Pipeline 1 + description: harness base image from Pipeline 1 required: true - value: <+input> + value: <+input>.default(harness/aws-sam-plugin:1.1.2-beta-base-image) - name: SAM_BASE_IMAGE type: String description: SAM base image (e.g., public.ecr.aws/sam/build-python3.12:1.143.0-20250822194415-x86_64) required: true - value: <+input> + value: <+input>.default(public.ecr.aws/sam/build-nodejs22.x:1.144.0-20250911030138-x86_64) - name: DOCKER_USERNAME type: String description: Docker Hub username @@ -440,7 +464,7 @@ pipeline: type: String description: Target repository required: false - value: vishalav95/plugin-test-vishal + value: your_target_repository - name: TIMESTAMP type: String description: Build timestamp @@ -448,19 +472,4 @@ pipeline: 
value: <+execution.steps.k8sstepgroup.steps.sampreparebuild.output.outputVariables.TIMESTAMP> ``` -
- ---- - -## How Pipeline Works - -### Pipeline Stages - -- **Generate Timestamp:** - - Creates a timestamp for image labels - - Extracts SAM runtime and version from the base image name - -- **Build and Push Final Image:** - - Attempts multiple strategies to build the final image in order of preference: - - **Strategy 1:** Buildah (rootless) - Tries to build a container without privileged access - - **Strategy 2:** Skopeo - Falls back to copying the scratch image as the final solution \ No newline at end of file + \ No newline at end of file diff --git a/docs/continuous-delivery/deploy-srv-diff-platforms/aws/static/sam-build-push.png b/docs/continuous-delivery/deploy-srv-diff-platforms/aws/static/sam-build-push.png new file mode 100644 index 00000000000..b8da43b9c23 Binary files /dev/null and b/docs/continuous-delivery/deploy-srv-diff-platforms/aws/static/sam-build-push.png differ diff --git a/docs/continuous-delivery/deploy-srv-diff-platforms/serverless/serverless-image-build.md b/docs/continuous-delivery/deploy-srv-diff-platforms/serverless/serverless-image-build.md index d0c0670a71c..5bcf673f97c 100644 --- a/docs/continuous-delivery/deploy-srv-diff-platforms/serverless/serverless-image-build.md +++ b/docs/continuous-delivery/deploy-srv-diff-platforms/serverless/serverless-image-build.md @@ -1,16 +1,25 @@ --- -title: Serverless Plugin Image Builder -description: Build your serverless plugin image using Harness. +title: Building Serverless Framework Images +description: A reusable Harness pipeline to build customized Serverless images. +tags: + - serverless + - image-builder sidebar_position: 5 --- -## Overview +This page provides a Harness CD pipeline to help you build your own Docker images for the Serverless framework. -This guide walks you through the process of building custom serverless plugin images that can be used for Serverless Lambda Deployments. 
By following these instructions, you can create compatible Docker images that combine the Harness Serverless plugin with specific AWS Lambda runtime environments. +The purpose of this pipeline is to give you flexibility—so you can adopt newer AWS Lambda runtimes or tailor the image to your specific serverless application needs. -These custom images enable you to deploy serverless applications written in your preferred programming language (Node.js, Python, Java, Ruby) while leveraging Harness deployment capabilities. +## What This Pipeline Does -Serverless runtimes refer to the programming language environments that AWS Lambda supports for function development. Each runtime provides the necessary language-specific libraries, tools, and dependencies needed to build, test, and deploy serverless applications. +This pipeline automates building Serverless images for different programming languages using Harness. It enables you to keep up with the latest Serverless versions and apply customizations as required for your projects. + +You can find the full pipeline YAML in the [Pipeline YAML](#pipeline-yaml) section below. + +## Understanding Serverless Runtimes + +Serverless runtimes refer to the programming language environments that the Serverless framework supports for function development. Each runtime provides the language-specific libraries, tools, and dependencies to build, test, and deploy serverless applications. Common Serverless runtimes include: @@ -19,52 +28,95 @@ Common Serverless runtimes include: - **Java**: Versions like java8, java17, java21 - **Ruby**: Version like ruby2.7, ruby3.2 -When you build your own image using the Harness pipeline, you're combining the Harness Serverless plugin (which provides the integration with Harness CD) with a specific runtime image from AWS. This allows you to deploy serverless applications written in your preferred programming language while leveraging Harness deployment capabilities. 
+When you build your image using the Harness pipeline, you combine the Harness Serverless plugin (which provides the integration with Harness CD) with a specific runtime image from AWS. This allows you to deploy serverless applications written in your preferred programming language while leveraging Harness deployment capabilities. + +## Key Components and Prerequisites + +This pipeline helps you build **custom serverless plugin images** using Harness, enabling integration of the Harness plugin with supported AWS Lambda runtimes. Below are the key details you should know about the pipeline: + +- **Deployment Stage with Kubernetes Infrastructure** + - Uses a Deployment stage configured to run on Kubernetes. + +- **Kubernetes Cluster Requirement and Privileged Mode** + - Requires a Kubernetes cluster set up by the user. + - The pipeline's step group runs with `privileged: true` enabled to allow Docker-in-Docker and image build operations. + - This privileged mode requires the Kubernetes cluster nodes to permit privileged containers. + - For example, if you use Google Kubernetes Engine (GKE), do not use Autopilot clusters, as they restrict privileged containers. Instead, use a standard GKE cluster with node pools configured to allow privileged pods. + - Connect the Kubernetes cluster to Harness via a Kubernetes Cluster connector. + +- **Use of Official AWS SAM Images** + - Pulls SAM base images from the [AWS ECR Public Gallery](https://gallery.ecr.aws/sam?page=1) for compatibility. + +- **Automatic Extraction of Runtime and Version** + - Extracts the runtime name and version directly from the SAM base image name.
+ +- **Final Image Naming Convention** + - Images are named following the format: + `serverless-plugin:{VERSION}-{RUNTIME_NAME}-{VERSION}-linux-amd64` Example: `serverless-plugin:1.1.0-beta-python3.12-1.1.0-beta-linux-amd64` -## Key Components ### Pipeline Runner Privileged Mode Requirement -- Uses a Deployment stage with Kubernetes infrastructure -- Runs in a step group with KubernetesDirect infrastructure -- Takes SAM base image from AWS ECR public gallery -- Extracts runtime and version information from the base image name -- Final image format: `serverless-plugin:${VERSION}-${RUNTIME}-${VERSION}-linux-amd64` +Certain steps in the pipeline require the Kubernetes pod to run in privileged mode. This is necessary for starting Docker daemons (DinD), building container images inside pipeline steps, and granting the permissions Docker needs at runtime. + +**Why privileged mode is required:** + +- Enables Docker-in-Docker (DinD) support for building and pushing images. +- Allows installing and running the Docker CLI and manipulating containers within the build step. +- Required for root access and mounting Docker volumes. + +To enable privileged execution, set `privileged: true` in the step group or step-level security context. Example: + +```yaml +stepGroup: + privileged: true + name: k8s-step-group + sharedPaths: + - /var/run + - /var/lib/docker +``` + +For individual steps: +```yaml +step: + name: dinD + privileged: true + ... +``` +Without this setting, Docker builds and image pushes may fail due to insufficient permissions inside the container. ## Quick Start -1. Copy the pipeline yaml provided and paste it in your Harness Project. +1. Copy and paste the [pipeline YAML](/docs/continuous-delivery/deploy-srv-diff-platforms/serverless/serverless-image-build#pipeline-yaml) provided into your Harness Project. 2. Add an empty/do nothing service to the pipeline. 3. Add a Kubernetes environment to the pipeline. -4.
In the execution section, enable container based execution and add the Kubernetes cluster connector to the pipeline. Save the pipeline. +4. In the **Execution section**, enable **container-based execution** in the **step group**. Add the Kubernetes cluster connector inside the container step group. Save the pipeline. 5. Click **Run Pipeline** 6. Enter the required parameters: - - **VERSION**: Version number of Harness base image (e.g., `1.1.0-beta`). VERSION represents specific code changes in the Harness repository. With each new code change, we push a new tag and publish new Docker images with these tags, allowing users to access specific versions of the plugin. - - **Harness_base_image**: Harness base image from AWS ECR Gallery (e.g., `harness/serverless-plugin:1.1.0-beta-base-image -`). You can find the Harness base image from [Harness DockerHub]https://hub.docker.com/r/harness/serverless-plugin/tags) + - **VERSION**: The version number of the Harness base image (e.g., `1.1.0-beta`). VERSION represents specific code changes in the Harness repository. With each new code change, we push a new tag and publish new Docker images with these tags, allowing users to access specific versions of the plugin. + - **Harness_base_image**: You can find the Harness base image with the specific release versions from [Harness DockerHub](https://hub.docker.com/r/harness/serverless-plugin/tags). 
- **RUNTIME_BASE_IMAGE_VERSION**: Runtime base image from AWS ECR (e.g., `public.ecr.aws/sam/build-python3.12:1.142.1-20250701194731-x86_64`) - **NODEJS_BASE_IMAGE_VERSION**: Node.js base image from AWS ECR (e.g., `public.ecr.aws/sam/build-nodejs20.x:1.142.1-20250701194712-x86_64`) - **SERVERLESS_VERSION**: Serverless Framework version (e.g., `3.39.0`) -## Base Image Requirements - -### Serverless Base Image Format +### Serverless Plugin Image Prerequisites -The pipeline supports only full formats for the base images: +The pipeline supports only complete formats for the base images: - `public.ecr.aws/sam/build-java21:1.140.0-20250605234711-x86_64` - `public.ecr.aws/sam/build-nodejs18.x:1.120.0-20240626164104-x86_64` -### Serverless Base Image Requirements +#### Serverless Base Image Prerequisites -> **IMPORTANT**: Only official AWS SAM build images from the AWS ECR Public Gallery are supported. +Only official AWS SAM build images from the [AWS ECR Public Gallery](https://gallery.ecr.aws/sam?page=1) are supported. - Use SAM base images only from: [AWS ECR Gallery - SAM](https://gallery.ecr.aws/sam?page=1) - Only x86_64 architecture images are supported - Using different base images may cause library dependency issues -- Non-standard base images may cause the plugin to not function as required +- Non-standard base images may prevent the plugin from functioning as required +- You must use the final image at the step level of your serverless deployment. This plugin cannot be used in [Plugin info](/docs/continuous-delivery/deploy-srv-diff-platforms/serverless/serverless-lambda-cd-quickstart#plugin-info) at the service level, as the service-level setting fetches only from the Harness official Docker Hub repository.
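As noted above, the pipeline derives the runtime name and version in the final image tag from the SAM base image reference. As a rough illustration of that extraction (a sketch using plain POSIX shell parameter expansion, not the exact script the pipeline runs):

```shell
#!/bin/sh
# Sketch: derive the runtime name and SAM image version from a SAM base
# image reference such as:
#   public.ecr.aws/sam/build-python3.12:1.142.1-20250701194731-x86_64
# (illustrative only; the pipeline's own build script may differ)

extract_runtime() {
  runtime="${1##*/}"          # strip registry path -> build-python3.12:1.142.1-...
  runtime="${runtime#build-}" # strip "build-" prefix -> python3.12:1.142.1-...
  printf '%s\n' "${runtime%%:*}"  # keep text before ":" -> python3.12
}

extract_sam_version() {
  tag="${1##*:}"              # keep the tag -> 1.142.1-20250701194731-x86_64
  printf '%s\n' "${tag%%-*}"  # keep text before first "-" -> 1.142.1
}

extract_runtime "public.ecr.aws/sam/build-python3.12:1.142.1-20250701194731-x86_64"      # -> python3.12
extract_sam_version "public.ecr.aws/sam/build-python3.12:1.142.1-20250701194731-x86_64"  # -> 1.142.1
```

Because parameter expansion works on the image name alone, you can sanity-check what runtime and version a given base image will produce without pulling the image.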
-### Image Configuration - -#### Final Image Naming +#### Image Configuration The pipeline produces two types of images with the following naming patterns: @@ -86,54 +138,59 @@ Where: - `SERVERLESS_VERSION`: Serverless Framework version (e.g., `3.39.0`) - `VERSION`: Harness plugin version (e.g., `1.1.0-beta`) -## Variables Used in Privileged Steps +### Variables Used in Pipeline + +These variables, which you need to configure, are used in the pipeline for building and pushing the image: -These variables are actively used in the privileged steps of your Serverless plugin image build and push pipeline. They must be configured properly to build and push your serverless plugin images successfully. +**Pipeline variables:** - **TARGET_REPO**, **DOCKER_USERNAME**, and **DOCKER_PASSWORD** are set once as pipeline-level variables. | Variable | Description | Example | Required | |----------------------------|----------------------------------------------------------------|-----------------------------------------------------------------------|----------| -| VERSION | Plugin image/version tag | `1.1.2` | Yes | -| RUNTIME_BASE_IMAGE_VERSION | AWS SAM runtime base image from ECR | `public.ecr.aws/sam/build-python3.12:1.143.0-20250822194415-x86_64` | Yes | -| NODEJS_BASE_IMAGE_VERSION | AWS SAM Node.js base image from ECR | `public.ecr.aws/sam/build-nodejs22.x:1.143.0-20250822194415-x86_64` | Yes | -| HARNESS_BASE_IMAGE | Harness base scratch image used in build | `harness/serverless-plugin:1.1.0-beta-base-image -` | Yes | -| SERVERLESS_VERSION | Serverless Framework version to install | `3.39.0` | Yes | | TARGET_REPO | Target Docker repository to push built plugin images | `your_account/serverless-plugin` | Yes | | DOCKER_USERNAME | Docker registry username | `dockerhub_username` | Yes | | DOCKER_PASSWORD | Docker registry password or Personal Access Token (PAT) | `` | Yes | -- **TARGET_REPO**, **DOCKER_USERNAME**, and **DOCKER_PASSWORD** are typically set once as
pipeline-level variables. -- **VERSION**, **RUNTIME_BASE_IMAGE_VERSION**, **NODEJS_BASE_IMAGE_VERSION**, **HARNESS_BASE_IMAGE**, and **SERVERLESS_VERSION** are user inputs set each pipeline run to specify exact versions for the builds. +**Runtime inputs:** - **VERSION**, **RUNTIME_BASE_IMAGE_VERSION**, **NODEJS_BASE_IMAGE_VERSION**, **HARNESS_BASE_IMAGE**, and **SERVERLESS_VERSION** are user inputs set for each pipeline run to specify exact versions for the builds. -## Compatibility Validation +| Variable | Description | Example | Required | +|----------------------------|----------------------------------------------------------------|-----------------------------------------------------------------------|----------| +| VERSION | Plugin image/version tag | `1.1.2` | Yes | +| RUNTIME_BASE_IMAGE_VERSION | AWS SAM runtime base image from ECR | `public.ecr.aws/sam/build-python3.12:1.143.0-20250822194415-x86_64` | Yes | +| NODEJS_BASE_IMAGE_VERSION | AWS SAM Node.js base image from ECR | `public.ecr.aws/sam/build-nodejs22.x:1.143.0-20250822194415-x86_64` | Yes | +| HARNESS_BASE_IMAGE | Harness base image used in build | `harness/serverless-plugin:1.1.0-beta-base-image` | Yes | +| SERVERLESS_VERSION | Serverless Framework version to install | `3.39.0` | Yes | + + +### Compatibility Validation -- **Runtime Compatibility**: Always verify compatibility between runtime and Node.js images before building. The Serverless Framework requires Node.js to function properly. +- **Runtime Compatibility**: Always verify compatibility between runtime and Node.js images before building. The Serverless Framework requires Node.js to function correctly. - **Library Dependencies**: Check that both images share the same C++ libraries (especially `libstdc++.so`) to ensure proper operation. ### Validating Image Compatibility -To ensure the runtime and Node.js base images are compatible, verify they share the same system libraries and dependencies. 
This is crucial because the Serverless Framework (which requires Node.js) must run properly on your chosen runtime image. +Verify that the runtime and Node.js base images are compatible by checking that they share the same system libraries and dependencies. This is crucial because the Serverless Framework (which requires Node.js) must run properly on your chosen runtime image. #### Step 1: Pull and Inspect Both Images First, pull both images locally: +``` docker pull public.ecr.aws/sam/build-java21:1.140.0-20250605234711-x86_64 docker pull public.ecr.aws/sam/build-nodejs22.x:1.140.0-20250605234713-x86_64 - +``` #### Step 2: Check C++ Library Compatibility Check that both images have the same version of `libstdc++.so`: - +``` docker run --rm public.ecr.aws/sam/build-java21:1.140.0-20250605234711-x86_64 ls -l /lib64/libstdc++.so.6* docker run --rm public.ecr.aws/sam/build-nodejs22.x:1.140.0-20250605234713-x86_64 ls -l /lib64/libstdc++.so.6* - +``` -If both images show the same version (e.g., `libstdc++.so.6.0.33`), they are compatible. +They are compatible if both images show the same version (e.g., `libstdc++.so.6.0.33`). #### Recommended Compatible Combinations @@ -149,13 +206,26 @@ If both images show the same version (e.g., `libstdc++.so.6.0.33`), they are com | java8.al2 | nodejs18.x | java8.al2 | | ruby3.2 | nodejs18.x | ruby3.2 | ---- +### Pipeline YAML + +This is the YAML for the Serverless image build pipeline. You can copy and paste it into your Harness Project. + +This is how the stage would look in the UI: -
+ +
Pipeline YAML +Parameters you need to change: + +- `projectIdentifier`: Your Harness project identifier +- `orgIdentifier`: Your Harness organization identifier +- `connectorRef`: Your Kubernetes cluster connector identifier +- `your_k8s_connector`: Your Kubernetes cluster connector identifier + ```yaml pipeline: projectIdentifier: your_project_identifier @@ -165,7 +235,7 @@ pipeline: - stage: name: combineImages identifier: combineImages - description: Combine scratch image with SAM base image and push to Docker + description: Combine Harness base image with SAM base image and push to Docker type: Deployment spec: deploymentType: Kubernetes @@ -198,7 +268,7 @@ pipeline: - step: identifier: generateTimestamp type: Run - name: serverless-prepare-build + name: serverless-build-push spec: connectorRef: account.your_dockerhub_connector image: docker:24 @@ -235,7 +305,7 @@ pipeline: export TZ=UTC VERSION="${VERSION:-<+pipeline.variables.VERSION>}" - SCRATCH_IMAGE="${SCRATCH_IMAGE:-<+pipeline.variables.SCRATCH_IMAGE>}" + HARNESS_BASE_IMAGE="${HARNESS_BASE_IMAGE:-<+pipeline.variables.HARNESS_BASE_IMAGE>}" RUNTIME_BASE_IMAGE_VERSION="${RUNTIME_BASE_IMAGE_VERSION:-<+pipeline.variables.RUNTIME_BASE_IMAGE_VERSION>}" NODEJS_BASE_IMAGE_VERSION="${NODEJS_BASE_IMAGE_VERSION:-<+pipeline.variables.NODEJS_BASE_IMAGE_VERSION>}" SERVERLESS_VERSION="${SERVERLESS_VERSION:-<+pipeline.variables.SERVERLESS_VERSION>}" @@ -272,7 +342,7 @@ pipeline: fi # Compose final image tag - local FINAL_IMAGE="vishalav95/plugin-test-vishal:${RUNTIME_NAME}-${SERVERLESS_TAG_PART}${VERSION}-linux-amd64" + local FINAL_IMAGE="${TARGET_REPO}/serverless-plugin:${RUNTIME_NAME}-${SERVERLESS_TAG_PART}${VERSION}-linux-amd64" echo "Building ${IMAGE_TYPE} image: ${FINAL_IMAGE}" @@ -327,8 +397,8 @@ pipeline: EOF cat >> Dockerfile << EOF - COPY --from=${SCRATCH_IMAGE} /opt/harness/bin/harness-serverless-plugin /opt/harness/bin/harness-serverless-plugin - COPY --from=${SCRATCH_IMAGE} /opt/harness/scripts/ 
/opt/harness/scripts/ + COPY --from=${HARNESS_BASE_IMAGE} /opt/harness/bin/harness-serverless-plugin /opt/harness/bin/harness-serverless-plugin + COPY --from=${HARNESS_BASE_IMAGE} /opt/harness/scripts/ /opt/harness/scripts/ EOF cat >> Dockerfile << 'EOF' @@ -384,7 +454,7 @@ pipeline: fi echo "SUMMARY:" - echo " Source scratch image: ${SCRATCH_IMAGE}" + echo " Source Harness base image: ${HARNESS_BASE_IMAGE}" echo " Runtime base: public.ecr.aws/sam/build-${RUNTIME_BASE_IMAGE_VERSION}-x86_64" echo " Node.js base: public.ecr.aws/sam/build-${NODEJS_BASE_IMAGE_VERSION}-x86_64" echo " Built images with Serverless framework and go-template" @@ -421,7 +491,7 @@ pipeline: description: Plugin version (e.g., 1.1.0-beta) required: true value: <+input> - - name: SCRATCH_IMAGE + - name: HARNESS_BASE_IMAGE type: String description: Scratch image from Pipeline 1 required: true @@ -461,7 +531,7 @@ pipeline: description: Serverless Version required: true value: <+input>.selectOneFrom(3.39.0) - identifier: serverless-image-build + identifier: serverlessimagebuild name: serverless-image-build ``` diff --git a/docs/continuous-delivery/deploy-srv-diff-platforms/serverless/static/serverless-build-push.png b/docs/continuous-delivery/deploy-srv-diff-platforms/serverless/static/serverless-build-push.png new file mode 100644 index 00000000000..aa435b25f06 Binary files /dev/null and b/docs/continuous-delivery/deploy-srv-diff-platforms/serverless/static/serverless-build-push.png differ