Maven / GitHub Actions Interview Questions
GitHub Actions is a native CI/CD and workflow-automation platform built directly into GitHub. Instead of connecting an external tool such as Jenkins or CircleCI, you define automation logic in YAML files stored alongside your code. GitHub executes those files on cloud-hosted (or your own self-hosted) machines whenever events you choose — like a push or a pull request — occur in your repository.
Before GitHub Actions, teams had to manage a separate CI server, configure webhooks between that server and GitHub, and maintain credentials in two places. GitHub Actions eliminates that operational overhead: authentication happens automatically through the built-in GITHUB_TOKEN, secrets live in the same GitHub repository settings, and run history is visible right on the pull request page.
The platform solves several concrete problems:
- Automated testing: Run your test suite on every push or pull request without manually triggering a job.
- Continuous deployment: Build a container image, push it to a registry, and deploy to Kubernetes or a cloud service in a single chained workflow.
- Repository maintenance: Automatically label issues, close stale pull requests, or publish release notes when a tag is pushed.
- Cross-platform builds: Use a matrix strategy to compile or test on Linux, macOS, and Windows simultaneously.
Pricing-wise, GitHub Actions is free for public repositories and includes a generous free tier for private ones. Minutes beyond the free tier are billed per minute, varying by runner type.
GitHub Actions is built from five composable pieces that work together to automate your software development lifecycle.
- **Workflow** — A YAML file stored in `.github/workflows/`. A workflow describes when automation should run (the trigger) and what it should do (one or more jobs). A repository can have many workflows running independently.
- **Job** — A set of steps that execute on the same runner. All steps in a job share a filesystem and environment. By default, jobs in the same workflow run in parallel unless you declare dependencies with `needs:`.
- **Step** — An individual task inside a job. A step either runs a shell command (`run:`) or calls a reusable action (`uses:`). Steps within a job run sequentially and share the job's working directory.
- **Action** — A reusable unit of automation. An action can be a JavaScript program, a Docker container, or a composite shell script. You reference actions from the Marketplace (e.g. `actions/checkout@v4`) or from your own repository. Actions receive inputs and can produce outputs for downstream steps.
- **Runner** — The machine that actually executes a job. GitHub provides hosted runners (Ubuntu, Windows, macOS) that are provisioned fresh for every job. You can also register self-hosted runners on your own infrastructure for larger workloads, custom tooling, or network access to private systems.
The hierarchy is: one workflow contains many jobs; each job runs on a runner and contains many steps; each step optionally calls an action. Understanding this hierarchy explains almost every YAML property you will encounter in GitHub Actions.
Every workflow file must be placed inside the .github/workflows/ directory at the root of your repository and must use the .yml or .yaml extension. GitHub automatically detects any file in that directory and registers it as a workflow.
The top-level keys of a workflow file are:
- `name:` — A human-readable label shown in the GitHub UI (optional but recommended).
- `on:` — Declares the event(s) that trigger the workflow (required).
- `env:` — Workflow-level environment variables available to all jobs (optional).
- `permissions:` — Restricts what `GITHUB_TOKEN` can do (optional, a security best practice).
- `jobs:` — A map of one or more named jobs (required).
A minimal but realistic example that checks out code and runs tests:
```yaml
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
```
Each job under jobs: must declare runs-on: (the runner label), and then list its steps:. Step names are optional but make run logs much easier to read. YAML indentation is significant — use spaces, never tabs.
The on: key defines which GitHub events cause a workflow to run. You can listen to a single event, a list of events, or an event with filters. GitHub provides more than 35 distinct event types across three broad categories.
Repository events fire when something happens in your repo:
- `push` — a commit or tag is pushed
- `pull_request` — a PR is opened, synchronised, closed, etc.
- `pull_request_target` — same as above but runs in the context of the base branch (useful for forks)
- `release` — a release is published, edited, or deleted
- `issues`, `issue_comment`, `discussion` — issue and discussion activity
- `create`, `delete` — branch or tag creation/deletion
Scheduled triggers use cron syntax:
```yaml
on:
  schedule:
    - cron: '0 6 * * 1'   # Every Monday at 06:00 UTC
```
Manual and cross-workflow triggers:
- `workflow_dispatch` — lets you run the workflow manually from the GitHub UI or API, with optional input parameters
- `workflow_call` — makes the workflow callable from another workflow (reusable workflows)
- `workflow_run` — triggers when another named workflow completes
- `repository_dispatch` — triggers via a custom HTTP POST to the GitHub API, useful for external systems
Most event types accept additional filters. For example, push accepts branches:, tags:, and paths: filters so you only trigger on relevant changes instead of every push to every branch.
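As an illustrative sketch (branch and path names here are hypothetical), a filtered `push` trigger might look like this:

```yaml
on:
  push:
    branches: [main, 'release/**']   # only these branches
    paths:
      - 'src/**'                     # only when source files change
      - 'package.json'
```

With this configuration, a push that touches only documentation files, or lands on an unrelated feature branch, does not start the workflow at all.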
These three triggers cover the most common CI/CD use-cases but serve very different purposes. Here is a direct comparison:
| Trigger | When it fires | Typical use-case | Key options |
|---|---|---|---|
| `push` | A commit is pushed to a branch or a tag is created | Deploy to staging/production after merging to main; publish a release on tag push | `branches:`, `tags:`, `paths:` |
| `pull_request` | A PR is opened, its head branch is updated (synchronised), or specific PR activity occurs (labeled, closed, etc.) | Run tests and linting on every proposed change before it merges; gate merges with required status checks | `branches:` (base branch filter), `types:` (activity type), `paths:` |
| `workflow_dispatch` | An operator manually triggers the workflow from the GitHub UI, the REST API, or `gh workflow run` | Ad-hoc releases, environment-specific deployments, data migration scripts that should not run automatically | `inputs:` — define typed parameters (string, boolean, choice, environment) that the operator fills in before running |
A common pattern is to combine all three: use pull_request for pre-merge tests, push on main for deployment, and workflow_dispatch for manual rollback or hotfix releases. Each trigger runs independently so you can fine-tune which steps run for each event using if: github.event_name == 'push' inside steps.
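That combined pattern could be sketched like this (the `./deploy.sh` script is a stand-in for your real deployment step):

```yaml
on:
  pull_request:            # pre-merge tests on every PR
  push:
    branches: [main]       # deploy after merge to main
  workflow_dispatch:       # manual hotfix or rollback
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm test                          # runs for every trigger
      - name: Deploy
        if: github.event_name == 'push'        # skipped on PRs
        run: ./deploy.sh
```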
A job is a named collection of steps that runs on a single runner from start to finish. Every job gets a fresh, isolated virtual machine (or container), so jobs do not share filesystem state, environment variables, or processes with each other unless you explicitly pass data via artifacts or outputs.
When a workflow contains multiple jobs, GitHub schedules all of them simultaneously by default — there is no implicit ordering. GitHub's scheduler picks up each job as soon as a runner is available, so two jobs in the same workflow can and do run at the same time:
```yaml
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm run lint
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm test
```
In this example, lint and test start at the same time on two separate Ubuntu runners. This parallelism is a deliberate design choice: independent tasks like linting, unit testing, and security scanning should not wait for each other.
To make jobs run sequentially, use needs: to declare that one job depends on another. You can also use if: always() combined with needs: to run a cleanup job even if a dependency failed.
Each job also independently declares its own runs-on: label, meaning different jobs in the same workflow can target different runner types — one job on Ubuntu, another on macOS, another inside a custom self-hosted runner with GPU access.
Steps are the individual tasks that make up a job. They run sequentially in the order listed, share the job's working directory and environment variables, and each step can read outputs produced by earlier steps. Every step has an optional name: for display in the logs and can set a conditional if: expression.
The two fundamental forms a step can take are run: and uses::
- `run:` — Executes one or more shell commands directly on the runner. The default shell on Linux/macOS is `bash`; on Windows it is `pwsh`. You can override this with `shell: python` or `shell: cmd`. Use `run:` for any custom script, build command, or one-liner that does not need to be reused across repositories.
- `uses:` — References a pre-built action. The action can come from the GitHub Marketplace (`actions/checkout@v4`), another repository (`org/repo@v1`), a local path in the same repo (`./my-action`), or a Docker image (`docker://alpine:3.19`). Actions encapsulate reusable logic behind a stable interface with typed inputs and outputs.
```yaml
steps:
  - name: Checkout code
    uses: actions/checkout@v4   # calls a reusable action
  - name: Build project
    run: ./gradlew build        # runs a shell command
  - name: Run custom script
    run: |
      echo "Multi-line"
      echo "shell script"
    shell: bash
```
A step cannot use both run: and uses: simultaneously — they are mutually exclusive. The key decision rule: reach for uses: when the task is a well-known, versioned operation (checkout, setup-node, docker-login); use run: for project-specific commands unique to your repo.
A runner is the server (physical or virtual) that picks up a queued job and executes its steps. GitHub manages a global pool of hosted runners; alternatively you can register your own machines as self-hosted runners for full control over the environment.
| Dimension | GitHub-Hosted | Self-Hosted |
|---|---|---|
| Setup | Zero configuration — use labels like `ubuntu-latest`, `windows-latest`, `macos-latest` | You install the runner agent on your own server and register it with your repo/org |
| Environment | Fresh VM per job; pre-installed with common tools (Node, Java, Docker, etc.) | Persistent; you control what is installed; jobs share the same machine state |
| Cost | Free for public repos; metered minutes for private repos | No GitHub billing for compute; you pay for your own infrastructure |
| Performance | Standard 2-core / 7 GB (Linux); larger runners available at extra cost | As powerful as your hardware allows; good for GPU jobs or large build caches |
| Network access | Public internet only | Can reach private VPC resources, on-premise databases, etc. |
| Security | Isolated per run; safe for public repos | Risky for public repos — malicious PRs can run code on your machines |
For most teams GitHub-hosted runners are the right starting point. Self-hosted runners make sense when you need private network access, specialised hardware (GPU, ARM), or very long build times where hosted-runner costs become significant.
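To route a job onto a self-hosted machine you target its labels in `runs-on:`. A minimal sketch (assuming you have registered a Linux runner carrying a custom `gpu` label; the training script is hypothetical):

```yaml
jobs:
  train:
    runs-on: [self-hosted, linux, gpu]   # every label must match the runner
    steps:
      - uses: actions/checkout@v4
      - run: ./train-model.sh            # placeholder for your GPU workload
```

If no registered runner carries all of the listed labels, the job stays queued until one becomes available.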
The GitHub Actions Marketplace (github.com/marketplace?type=actions) is a public catalogue of reusable actions published by GitHub, major vendors, and the open-source community. At the time of writing it hosts tens of thousands of actions covering everything from language setup (actions/setup-node, actions/setup-java) to cloud deployments, code scanning, notifications, and release automation.
To use a Marketplace action, copy its uses: reference from the Marketplace page into your workflow step:
```yaml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-java@v4
    with:
      java-version: '21'
      distribution: 'temurin'
```
The reference format is owner/repo@ref where ref can be a semantic version tag (@v4), a specific commit SHA (@abc1234), or a branch (@main). Pinning to a specific commit SHA is the most secure option because a tag can be moved, while a SHA cannot.
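A SHA pin looks like the sketch below. The SHA shown is a placeholder, not a real commit; in practice you copy the full 40-character SHA from the action's repository and record the matching version in a comment:

```yaml
steps:
  # Tag pin: convenient, but the owner can move the tag to different code
  - uses: actions/setup-node@v4
  # SHA pin: immutable reference (placeholder SHA shown)
  - uses: actions/checkout@0000000000000000000000000000000000000000   # v4 (placeholder)
```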
Actions can declare typed inputs (passed via with:) and produce outputs that downstream steps can reference via steps.<id>.outputs.<name>. Before using a third-party action in a production workflow you should review its source code and check that it is maintained, has a published release, and comes from a reputable publisher (GitHub's "Verified Creator" badge helps here).
actions/checkout clones your repository onto the runner so subsequent steps have access to your source code. Without it, the runner's working directory is empty — no source files, no scripts. It is almost always the first step in any CI job.
The simplest usage just checks out the default branch at the ref that triggered the workflow:
steps:
- uses: actions/checkout@v4
Common configuration options via with::
- `ref:` — Check out a specific branch, tag, or SHA. Useful when you need to build a release tag or compare against another branch.
- `fetch-depth:` — Number of commits to fetch. Defaults to `1` (shallow clone). Set to `0` for a full history (needed for tools like `git log` or semantic-release that inspect commit history).
- `token:` — Override the default `GITHUB_TOKEN` with a PAT when you need to push commits back or access private submodules.
- `submodules:` — Set to `true` or `'recursive'` to initialise Git submodules.
```yaml
steps:
  - uses: actions/checkout@v4
    with:
      fetch-depth: 0          # full history for changelog generation
      submodules: recursive   # also clone submodules
```
The action authenticates using the workflow's GITHUB_TOKEN by default, so it works without any additional secret configuration for normal repository checkouts. For pull requests from forks it checks out a merge commit (the result of merging the fork's head into the base branch) rather than the fork's raw head commit, which prevents untrusted code from poisoning the checkout.
Environment variables and secrets are surfaced inside a workflow through the env: map and the secrets context respectively. They can be declared at three scopes: workflow-level (available to every job), job-level (available to all steps in that job), or step-level (available only to that step).
Environment variables hold non-sensitive configuration values:
```yaml
env:
  APP_ENV: production          # workflow-level
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      REGION: us-east-1        # job-level
    steps:
      - name: Print env
        run: echo "Deploying $APP_ENV to $REGION"
      - name: Run with step-level var
        env:
          LOG_LEVEL: debug     # step-level
        run: ./deploy.sh
```
Secrets are encrypted values stored in repository Settings → Secrets and variables → Actions. They are injected at runtime and never appear in plain text in workflow logs:
```yaml
steps:
  - name: Deploy
    env:
      API_KEY: ${{ secrets.API_KEY }}
      DB_PASS: ${{ secrets.DB_PASSWORD }}
    run: ./deploy.sh
```
Secrets are not automatically available as environment variables — you must explicitly map them using env: or pass them as with: inputs to an action. GitHub masks secret values in logs, replacing them with ***, but you should still avoid printing secrets deliberately or constructing log messages that include them.
Organization-level and environment-level secrets also exist and follow the same syntax; they just have a wider or more restricted scope depending on configuration.
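Passing a secret as an action input works the same way. A sketch using the widely used `docker/login-action` (the `DOCKERHUB_USER` variable and `DOCKERHUB_TOKEN` secret are assumed to exist in your repository settings):

```yaml
steps:
  - uses: docker/login-action@v3
    with:
      username: ${{ vars.DOCKERHUB_USER }}       # non-sensitive → variable
      password: ${{ secrets.DOCKERHUB_TOKEN }}   # sensitive → secret
```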
All three hold key-value configuration but differ in storage location, security characteristics, and intended use.
| Context | Where it is defined | Encrypted at rest? | Visible in logs? | Typical use |
|---|---|---|---|---|
| `env:` | Inline in the workflow YAML (workflow/job/step scope) | No — plain text in the repo | Yes | Non-sensitive config like feature flags, version numbers, region names embedded directly in YAML |
| `secrets:` | Repository / Organisation / Environment Settings → Secrets | Yes — encrypted by GitHub | No — masked as `***` | Passwords, API keys, tokens, certificates — anything that must not be readable in the YAML or logs |
| `vars:` | Repository / Organisation / Environment Settings → Variables | No — stored as plain text | Yes | Non-sensitive config that should be managed in the GitHub UI without editing YAML (e.g. target environment URL, Node version to use across many workflows) |
The key distinction between env: and vars: is that vars: are managed in the GitHub UI and shared across workflows without touching YAML files, whereas env: values are hardcoded in the YAML itself. Use vars: when you want non-engineers to be able to change configuration without a pull request.
Access syntax: ${{ secrets.MY_KEY }}, ${{ vars.MY_VAR }}, ${{ env.MY_ENV }}.
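A short sketch of `vars:` in practice (assumes a repository variable named `TARGET_ENV` has been created under Settings → Variables):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      DEPLOY_ENV: ${{ vars.TARGET_ENV }}   # changeable in the UI, no YAML edit
    steps:
      - run: echo "Deploying to $DEPLOY_ENV"
```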
actions/cache saves and restores a directory between workflow runs so that package managers like npm, Maven, or pip do not re-download the same dependencies on every run. A cache hit can reduce a 3-minute install step to a few seconds.
The action requires two inputs: path (the directory to cache) and key (a string that identifies the cache). If the key matches an existing cache, the directory is restored before your install step. If not, the action records a cache miss and saves the directory at the end of the job for future runs.
```yaml
- name: Cache npm dependencies
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      npm-${{ runner.os }}-
- name: Install dependencies
  run: npm ci
```
The hashFiles('**/package-lock.json') expression produces a hash of your lock file. When the lock file changes (new dependency added), the hash changes, the old cache is missed, and a fresh install populates a new cache. restore-keys: provides fallback prefixes — if the exact key is not found, GitHub tries caches whose key starts with npm-ubuntu-latest-, giving a partial hit that is still faster than a cold install.
Popular language setups — actions/setup-node, actions/setup-java, actions/setup-python — have a built-in cache: input that wraps actions/cache automatically:
```yaml
- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: 'npm'   # handles path + key automatically
```
A matrix strategy tells GitHub Actions to spawn multiple parallel job instances from a single job definition, varying one or more parameters across those instances. This is ideal for testing against several language versions, operating systems, or configuration combinations without duplicating YAML.
```yaml
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false   # continue other matrix jobs if one fails
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        node: ['18', '20', '22']
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm test
```
This definition spawns 3 × 3 = 9 parallel jobs, one for each OS/Node combination. Matrix values are referenced with ${{ matrix.<variable> }} anywhere in the job definition — including runs-on:, step inputs, and environment variables.
Key options:
- `fail-fast: false` — by default, if any matrix job fails all remaining ones are cancelled. Set to `false` to let every combination finish regardless.
- `include:` — add extra combinations or inject extra variables into specific cells. For example, add a code-coverage flag only on Node 20/Ubuntu.
- `exclude:` — remove specific combinations from the matrix (e.g. skip macOS on an older Node version).
- `max-parallel:` — cap the number of concurrent jobs to avoid exhausting runner capacity.
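A sketch of `include:` and `exclude:` together (the `coverage` flag is a hypothetical variable that your own steps would consume via `${{ matrix.coverage }}`):

```yaml
strategy:
  matrix:
    os: [ubuntu-latest, macos-latest]
    node: ['18', '20']
    exclude:
      - os: macos-latest     # drop this one combination
        node: '18'
    include:
      - os: ubuntu-latest    # extend one existing cell
        node: '20'
        coverage: true       # extra variable, only in this cell
```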
Matrices can also be generated dynamically at runtime by having a prior job output a JSON array and referencing it with fromJSON(needs.setup.outputs.matrix).
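A minimal sketch of that dynamic-matrix pattern (job, step, and output names here are illustrative):

```yaml
jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      versions: ${{ steps.list.outputs.versions }}
    steps:
      - id: list
        run: echo 'versions=["18","20","22"]' >> "$GITHUB_OUTPUT"
  test:
    needs: setup
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node: ${{ fromJSON(needs.setup.outputs.versions) }}
    steps:
      - run: echo "Testing on Node ${{ matrix.node }}"
```

The `setup` job could just as well compute the JSON from changed files or an API call, which is what makes this pattern useful.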
needs: declares that a job must wait for one or more other jobs to succeed before it starts. This turns the default parallel fan-out into a directed acyclic graph (DAG) of dependencies, allowing you to model pipelines like build → test → deploy.
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew jar
  test:
    runs-on: ubuntu-latest
    needs: build            # waits for build to succeed
    steps:
      - uses: actions/checkout@v4   # each job gets a fresh runner, so check out again
      - run: ./gradlew test
  deploy:
    runs-on: ubuntu-latest
    needs: [build, test]    # waits for BOTH build and test to succeed
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh
```
If any job listed in needs: fails, the dependent job is automatically skipped (not failed). You can override this with an explicit condition:
```yaml
  notify:
    runs-on: ubuntu-latest
    needs: deploy
    if: always()   # runs even if deploy failed
    steps:
      - run: ./notify-slack.sh
```
You can also check the result of a specific dependency using needs.<job-id>.result, which returns 'success', 'failure', 'cancelled', or 'skipped'. This lets downstream jobs make fine-grained decisions about what to do based on which upstream step passed or failed.
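For instance, a reporting job could branch on the upstream result (job and script names are illustrative):

```yaml
  report:
    runs-on: ubuntu-latest
    needs: [build, deploy]
    if: always()                                # run regardless of outcome
    steps:
      - name: Alert on deploy failure
        if: needs.deploy.result == 'failure'
        run: ./send-alert.sh
      - name: Announce success
        if: needs.deploy.result == 'success'
        run: echo "Deploy succeeded"
```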
Steps within the same job communicate by writing key-value pairs to the special file at the path stored in $GITHUB_OUTPUT. Any subsequent step in the same job can then read that value via ${{ steps.<step-id>.outputs.<name> }}.
```yaml
jobs:
  pipeline:
    runs-on: ubuntu-latest
    steps:
      - name: Generate version
        id: versioning   # id is required to reference outputs
        run: |
          VERSION="1.4.${{ github.run_number }}"
          echo "version=$VERSION" >> $GITHUB_OUTPUT
      - name: Use version
        run: echo "Building version ${{ steps.versioning.outputs.version }}"
      - name: Tag Docker image
        run: |
          docker build -t myapp:${{ steps.versioning.outputs.version }} .
          docker push myapp:${{ steps.versioning.outputs.version }}
```
The id: field on the producing step is mandatory — without it, later steps have no handle to reference its outputs. The echo "key=value" >> $GITHUB_OUTPUT syntax appends to the output file; you can write multiple outputs from the same step by appending multiple lines.
Important: The older ::set-output command (written directly to stdout) was deprecated in 2022 and disabled in 2023 due to injection vulnerabilities. Always use $GITHUB_OUTPUT.
For multi-line values, use the heredoc syntax:
```yaml
run: |
  echo "NOTES<<EOF" >> $GITHUB_OUTPUT
  cat CHANGELOG.md >> $GITHUB_OUTPUT
  echo "EOF" >> $GITHUB_OUTPUT
```
Because each job in a workflow runs on a separate, isolated runner, files created in one job are not visible to another job by default. actions/upload-artifact and actions/download-artifact bridge this gap by storing files in GitHub's artifact storage during the workflow run.
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build JAR
        run: ./gradlew bootJar
      - name: Upload JAR artifact
        uses: actions/upload-artifact@v4
        with:
          name: app-jar              # artifact name
          path: build/libs/*.jar     # what to upload
          retention-days: 3          # auto-delete after 3 days
  deploy:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Download JAR artifact
        uses: actions/download-artifact@v4
        with:
          name: app-jar
          path: dist/                # where to restore files
      - name: Deploy
        run: scp dist/*.jar user@server:/opt/app/
```
The name: field acts as the identifier that links upload to download. The downloading job must declare needs: build to ensure the artifact exists before it tries to fetch it.
Artifact vs cache: Artifacts are for passing build outputs (JARs, test reports, binaries) between jobs or making them available for download from the GitHub UI. Cache is for reusing dependency directories to speed up installs across workflow runs. Do not use one as a substitute for the other — they have different retention policies and semantics.
Artifacts uploaded with v4 default to a 90-day retention period unless overridden with retention-days:. Large artifacts (test videos, coverage HTML) should use short retention to avoid storage costs.
A reusable workflow is a standard workflow file that exposes a workflow_call trigger, making it callable from other workflows. This lets you centralise a common CI/CD pattern (e.g. build-and-push, deploy-to-kubernetes) in one place and have many repositories or workflows invoke it without copy-pasting YAML.
Defining a reusable workflow (.github/workflows/deploy-template.yml in the shared repo):
```yaml
on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
    secrets:
      DEPLOY_KEY:
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh ${{ inputs.environment }}
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}
```
Calling the reusable workflow from another workflow:
```yaml
jobs:
  call-deploy:
    uses: my-org/shared-workflows/.github/workflows/deploy-template.yml@main
    with:
      environment: production
    secrets:
      DEPLOY_KEY: ${{ secrets.PROD_DEPLOY_KEY }}
```
Key rules to remember:
- A reusable workflow is called as a job, not a step — so it can run in parallel with or be sequenced using `needs:` like any other job.
- Secrets are not automatically inherited; you must explicitly pass them or use `secrets: inherit` to forward all caller secrets.
- A caller workflow can nest reusable workflows up to 4 levels deep.
- Outputs declared in the reusable workflow are available to the calling workflow via `needs.<job>.outputs.<name>`.
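Declaring an output on a reusable workflow could be sketched like this (output and step names are illustrative):

```yaml
# In the reusable workflow
on:
  workflow_call:
    outputs:
      image-tag:
        value: ${{ jobs.build.outputs.tag }}   # re-export a job output
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      tag: ${{ steps.meta.outputs.tag }}
    steps:
      - id: meta
        run: echo "tag=1.0.${{ github.run_number }}" >> "$GITHUB_OUTPUT"
```

The caller then reads the value as `${{ needs.<calling-job-id>.outputs.image-tag }}` in any job that declares the call as a dependency.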
A composite action is a custom action that groups multiple run: and uses: steps into a single reusable unit referenced with uses: inside a step — not as a job. It is defined by an action.yml file in a repository and runs within the calling job's runner, sharing its environment and filesystem.
```yaml
# .github/actions/setup-env/action.yml
name: 'Setup Build Environment'
description: 'Install tools and restore cache'
inputs:
  node-version:
    required: true
    default: '20'
runs:
  using: 'composite'
  steps:
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ inputs.node-version }}
    - uses: actions/cache@v4
      with:
        path: ~/.npm
        key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    - run: npm ci
      shell: bash
```
Usage in a workflow:
```yaml
steps:
  - uses: actions/checkout@v4
  - uses: ./.github/actions/setup-env
    with:
      node-version: '22'
  - run: npm test
```
| Dimension | Composite Action | Reusable Workflow |
|---|---|---|
| Referenced as | A step (uses:) | A job (uses:) |
| Runner | Caller's runner (shared) | Its own separate runner |
| Secrets access | Via inputs — not directly | Via secrets: block or secrets: inherit |
| Can call other workflows? | No | Yes (nested up to 4 levels) |
| Best for | Small setup sequences reused across steps | Entire pipeline stages shared across repos |
Choose a composite action when you want to extract a few repeated setup steps within a job. Choose a reusable workflow when you want to share a complete, self-contained pipeline job (with its own runner, concurrency, and environment) across multiple repositories.
The services: block on a job starts Docker containers as side-cars alongside the job's steps. This lets you spin up a real PostgreSQL, Redis, or any other service that your integration tests need — without mocking — using the same Docker images you would use in production.
```yaml
jobs:
  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_USER: testuser
          POSTGRES_PASSWORD: testpass
          POSTGRES_DB: testdb
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis:
        image: redis:7
        ports:
          - 6379:6379
    steps:
      - uses: actions/checkout@v4
      - name: Run integration tests
        run: ./gradlew integrationTest
        env:
          DB_URL: jdbc:postgresql://localhost:5432/testdb
          DB_USER: testuser
          DB_PASS: testpass
          REDIS_URL: redis://localhost:6379
```
A few important details:
- Health checks via `options: --health-cmd ...` ensure GitHub waits for the service to be ready before steps begin. Without this your tests may start before PostgreSQL finishes initialising.
- Port mapping: the service is accessible from steps at `localhost:<host-port>`. The host port and container port do not need to match but must be mapped in `ports:`.
- Container jobs: if your job itself runs inside a container (`container:` key), services are accessible by the service label name (e.g. `postgres:5432`) rather than `localhost`, because Docker networking uses the service name as DNS.
- Services are only supported on GitHub-hosted Linux runners and self-hosted Linux runners with Docker available.
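A sketch of that container-job variant, where the service is reached by its label name instead of `localhost` (image choices and env names are illustrative):

```yaml
jobs:
  tests-in-container:
    runs-on: ubuntu-latest
    container: node:20          # the job's steps run inside this container
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: testpass
    steps:
      - uses: actions/checkout@v4
      - run: npm test
        env:
          DB_HOST: postgres     # service label, not localhost
          DB_PORT: 5432
```

Note that no `ports:` mapping is needed here: the job container and the service share a Docker network, so the container port is reachable directly.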
The if: key on a job or step controls whether it executes. It accepts a GitHub Actions expression that evaluates to true or false. When false, the step is skipped and shown as greyed-out in the run log — the job does not fail.
Common patterns:
```yaml
steps:
  # Run only on pushes to main
  - name: Deploy to production
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    run: ./deploy.sh

  # Run only when a previous step failed (for alerting)
  - name: Notify failure
    if: failure()
    run: ./send-alert.sh

  # Run always — even if prior steps failed
  - name: Upload test report
    if: always()
    uses: actions/upload-artifact@v4
    with:
      name: test-report
      path: target/surefire-reports/

  # Skip on draft pull requests
  - name: Run expensive checks
    if: github.event.pull_request.draft == false
    run: ./full-test-suite.sh
```
Status-check functions available in if: expressions:
- `success()` — true if all prior steps succeeded (the default behaviour)
- `failure()` — true if any prior step failed
- `cancelled()` — true if the workflow was cancelled
- `always()` — always true regardless of prior step results
You can also combine expressions: if: success() && github.actor != 'dependabot[bot]'. Note that the ${{ }} wrapper is optional for if: — GitHub automatically evaluates the expression.
Contexts are namespaced objects available inside ${{ }} expressions throughout a workflow. Each context exposes a different slice of information about the run, the repository, or the execution environment.
| Context | Key properties | Example use |
|---|---|---|
| `github` | `ref`, `sha`, `event_name`, `actor`, `repository`, `run_id`, `workflow` | `if: github.ref == 'refs/heads/main'` |
| `env` | All environment variables set via `env:` at any scope | `${{ env.APP_VERSION }}` |
| `secrets` | Encrypted secrets from repo/org/environment settings | `${{ secrets.AWS_SECRET_KEY }}` |
| `vars` | Non-sensitive configuration variables from settings | `${{ vars.TARGET_ENV }}` |
| `runner` | `os`, `arch`, `temp`, `tool_cache` | `key: ${{ runner.os }}-npm-...` |
| `job` | `status` (success/failure/cancelled) | `if: job.status == 'failure'` |
| `steps` | `steps.<id>.outputs`, `steps.<id>.outcome` | `${{ steps.build.outputs.version }}` |
| `needs` | `needs.<job>.result`, `needs.<job>.outputs` | `${{ needs.build.outputs.artifact-name }}` |
| `matrix` | Current matrix variables for this job instance | `${{ matrix.node }}` |
| `inputs` | Inputs passed via `workflow_dispatch` or `workflow_call` | `${{ inputs.environment }}` |
Context availability varies by event. For example, github.event.pull_request is only populated on pull_request events, and needs is only available in jobs that declare needs:. Referencing an undefined context key returns an empty string rather than an error.
The concurrency: key limits how many workflow runs (or jobs) with the same group name can be active simultaneously. Setting cancel-in-progress: true automatically cancels any run in the same group that is already in progress when a new one starts — perfect for preventing stacked deploys or redundant CI runs on fast-pushed branches.
```yaml
# Cancel any previous CI run on the same branch when a new commit is pushed
concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm test
```
The group: string is the identifier. Runs sharing the same group string compete for the single-active-run slot. Using ${{ github.ref }} scopes the group to a branch, so pushes to main only cancel each other, not pushes to feature branches.
For deployment workflows you often want a different policy: queue new runs rather than cancel them, and never cancel a run that is already deploying. Achieve this by omitting cancel-in-progress (defaults to false):
```yaml
concurrency:
  group: deploy-${{ github.ref }}
  # cancel-in-progress defaults to false → runs are queued, not cancelled
```
Concurrency can also be set at the job level (not just workflow level) to limit parallelism for a specific job such as a deployment job while leaving other jobs unaffected.
```yaml
jobs:
  deploy:
    concurrency:
      group: deploy-production
      cancel-in-progress: false
```
GITHUB_TOKEN is a short-lived, automatically generated token that GitHub injects into every workflow run. It is scoped to the repository where the workflow runs, expires when the job finishes, and requires no manual secret configuration. You access it via ${{ secrets.GITHUB_TOKEN }} or the equivalent ${{ github.token }} context.
By default the token is granted a set of permissions that cover the most common CI needs. The default permission level depends on your repository settings (either "permissive" or "restricted"). With the permissive default, common grants include:
- `contents: read` — read source code and releases
- `pull-requests: write` — add comments, labels, and review status to PRs
- `packages: write` — push container images to GitHub Container Registry (GHCR)
- `statuses: write` — post commit statuses (used by CI checks)
Best practice is to declare minimum required permissions explicitly in the workflow, both at the workflow level and at the job level:
permissions:
  contents: read # default; be explicit

jobs:
  release:
    permissions:
      contents: write # needed to create a GitHub Release
      packages: write # needed to push to GHCR
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: gh release create v1.0 --generate-notes
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
Setting permissions: {} (no permissions at all) at the workflow level and then granting specific write permissions only to the jobs that need them is the principle of least privilege. GITHUB_TOKEN cannot access resources outside the repository that triggered the workflow; for cross-repo operations you need a Personal Access Token (PAT) or a GitHub App token.
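For illustration, a cross-repo checkout with a PAT might look like this (the target repository and secret name are hypothetical):

```yaml
steps:
  - uses: actions/checkout@v4
    with:
      repository: my-org/other-repo        # hypothetical target repo
      token: ${{ secrets.CROSS_REPO_PAT }} # PAT stored as a repository secret
```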
workflow_run fires a workflow when a named workflow completes (or starts). This lets you chain independent workflows without merging them into one file — useful for separating CI (fast, runs on all PRs) from CD (slow, only runs after CI passes on main).
# .github/workflows/deploy.yml
on:
  workflow_run:
    workflows: ["CI"] # exact name of the upstream workflow
    types: [completed]
    branches: [main] # only when CI ran on main

jobs:
  deploy:
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh
The if: check on the job is critical. workflow_run fires regardless of whether the upstream workflow succeeded or failed — the conclusion can be success, failure, cancelled, or timed_out. Without the check, your deploy job would run even on a failed CI.
Important security note: workflow_run always runs in the context of the default branch, not the branch that triggered the upstream workflow. This gives it access to repository secrets even for fork PRs — which is intentional for use-cases like uploading test coverage from fork PRs. However it also means you must be careful not to execute untrusted code from the fork in the workflow_run context.
For simpler same-workflow chaining (one job triggers another), use needs: instead. Use workflow_run only when the two workflows must remain separate files or when you need the default-branch security context.
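A minimal sketch of that same-workflow alternative with needs: (job names illustrative):

```yaml
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm test
  deploy:
    needs: ci # runs only if the ci job succeeds
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh
```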
A JavaScript action consists of two files at minimum: action.yml (the action metadata) and an entry-point JavaScript file. It runs directly on the runner (no container spin-up), which makes it fast. The @actions/core and @actions/github npm packages provide the toolkit for reading inputs, setting outputs, and interacting with the GitHub API.
action.yml:
name: 'Post PR Comment'
description: 'Posts a comment on the triggering pull request'
inputs:
  message:
    description: 'Comment body'
    required: true
  github-token:
    description: 'Token used to call the GitHub API'
    default: ${{ github.token }}
outputs:
  comment-id:
    description: 'ID of the created comment'
runs:
  using: 'node20'
  main: 'dist/index.js'
src/index.js:
const core = require('@actions/core');
const github = require('@actions/github');

async function run() {
  try {
    const message = core.getInput('message', { required: true });
    const token = core.getInput('github-token');
    const octokit = github.getOctokit(token);
    const { context } = github;
    const issue_number = context.payload.pull_request?.number;
    const { data: comment } = await octokit.rest.issues.createComment({
      ...context.repo,
      issue_number,
      body: message,
    });
    core.setOutput('comment-id', comment.id);
  } catch (err) {
    core.setFailed(err.message);
  }
}

run();
Key points:

- Bundle all dependencies into `dist/index.js` using `@vercel/ncc` — do not rely on `npm install` at runtime. Commit the `dist/` folder to the action repository.
- `core.setFailed()` both logs the error message and exits with code 1, marking the step as failed.
- Use `using: 'node20'` (or `node16`) in `action.yml` to declare the Node.js version.
- Test locally with `INPUT_MESSAGE="hello" node dist/index.js` — inputs are injected as `INPUT_<NAME>` environment variables.
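Once published, consuming the action from a workflow might look like this (the my-org/post-pr-comment path is hypothetical):

```yaml
on: pull_request
jobs:
  comment:
    permissions:
      pull-requests: write # needed to post PR comments
    runs-on: ubuntu-latest
    steps:
      - uses: my-org/post-pr-comment@v1 # hypothetical published action
        with:
          message: 'CI finished for this PR'
          github-token: ${{ secrets.GITHUB_TOKEN }}
```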
A Docker container action packages its logic and dependencies in a Docker image, giving complete control over the execution environment. It is ideal when your action requires a specific OS, binary tools not available on the runner, or a compiled language without a portable pre-built binary.
action.yml:
name: 'OWASP Dependency Check'
description: 'Run dependency vulnerability scan inside Docker'
inputs:
  project-name:
    description: 'Project name for the report'
    required: true
outputs:
  report-path:
    description: 'Path to the generated HTML report'
runs:
  using: 'docker'
  image: 'Dockerfile' # build from local Dockerfile
  args:
    - ${{ inputs.project-name }}
Dockerfile:
FROM openjdk:21-slim
RUN apt-get update && apt-get install -y curl unzip \
    && curl -Lo dc.zip https://github.com/jeremylong/DependencyCheck/releases/download/v9.0.0/dependency-check-9.0.0-release.zip \
    && unzip dc.zip -d /opt && rm dc.zip
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
set -e
PROJECT_NAME="$1"
/opt/dependency-check/bin/dependency-check.sh --project "$PROJECT_NAME" --scan /github/workspace --format HTML --out /github/workspace/dc-report
echo "report-path=dc-report/dependency-check-report.html" >> "$GITHUB_OUTPUT"
Key differences from a JS action:

- Docker container actions always run on Linux — they cannot execute on Windows or macOS GitHub-hosted runners.
- The `/github/workspace` path inside the container is the checked-out repository.
- You can also reference a pre-built public image (`image: 'docker://alpine:3.19'`) instead of a local Dockerfile to skip the build step.
- Container build adds latency (~30–60 s) compared to a JS action that starts instantly.
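A workflow consuming this container action might look like the following (the my-org/owasp-dc-action path is hypothetical):

```yaml
jobs:
  scan:
    runs-on: ubuntu-latest # container actions require a Linux runner
    steps:
      - uses: actions/checkout@v4
      - uses: my-org/owasp-dc-action@v1 # hypothetical published action
        with:
          project-name: my-service
```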
A typical container CI/CD pipeline in GitHub Actions has three stages: build the image, push it to a registry, and trigger a deployment. Here is a production-ready example using GitHub Container Registry (GHCR):
name: Build and Deploy Container
on:
  push:
    branches: [main]

permissions:
  contents: read
  packages: write # needed to push to GHCR

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}
          tags: |
            type=sha,prefix=sha-,format=long
            type=ref,event=branch
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Deploy to Kubernetes
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG }}
        run: |
          # kubectl expects KUBECONFIG to be a file path, so write the secret to disk first
          echo "$KUBECONFIG_DATA" > "$RUNNER_TEMP/kubeconfig"
          export KUBECONFIG="$RUNNER_TEMP/kubeconfig"
          kubectl set image deployment/myapp myapp=ghcr.io/${{ github.repository }}:sha-${{ github.sha }}
Notable patterns used here:

- `docker/setup-buildx-action` enables BuildKit for multi-platform builds and layer caching.
- `type=gha` cache in `build-push-action` stores Docker build cache in the GitHub Actions cache, dramatically speeding up incremental builds.
- `docker/metadata-action` generates consistent image tags from git metadata.
- The `deploy` job uses a GitHub Environment (`environment: production`), which can require manual approval, environment-specific secrets, and deployment protection rules.
GitHub Actions supports built-in path filtering on push and pull_request triggers via the paths: and paths-ignore: filters. When set, the workflow only fires if at least one file in the commit diff matches the given glob pattern.
on:
  push:
    branches: [main]
    paths:
      - 'backend/**' # any file under backend/
      - 'Dockerfile'
      - '.github/workflows/backend-ci.yml'
  pull_request:
    paths-ignore:
      - '**.md' # skip when only docs changed
      - 'frontend/**'
You can use both paths and paths-ignore but not on the same trigger event simultaneously. Use paths when you want an allow-list and paths-ignore when you want to exclude certain patterns.
Monorepo scenario — for finer-grained per-step filtering (e.g., only run specific jobs when specific subdirectories changed), the community action dorny/paths-filter is widely used:
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      backend: ${{ steps.filter.outputs.backend }}
      frontend: ${{ steps.filter.outputs.frontend }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            backend:
              - 'backend/**'
            frontend:
              - 'frontend/**'

  test-backend:
    needs: changes
    if: needs.changes.outputs.backend == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew :backend:test
The limitation of the built-in paths: filter is that it applies to the entire workflow; you cannot skip only certain jobs within it. dorny/paths-filter solves this by producing per-path boolean outputs that individual job conditions can check.
When a workflow fails and the log output is not enough to diagnose the problem, GitHub Actions provides two main debugging mechanisms: enhanced log verbosity via repository secrets, and live interactive SSH access to the runner via the tmate action.
1. Enable debug logging by adding two repository secrets (or re-running the workflow with debug enabled in the UI):
- `ACTIONS_RUNNER_DEBUG=true` — enables verbose runner-level diagnostics (why jobs were queued, runner setup details)
- `ACTIONS_STEP_DEBUG=true` — enables verbose step-level logs, including inputs/outputs of actions and shell-expanded commands
You can also re-run a failed job with debug logging enabled from the GitHub UI via "Re-run jobs" → "Enable debug logging".
2. Interactive SSH debugging with tmate — the mxschmitt/action-tmate action pauses the runner and opens an SSH tunnel so you can connect to the live runner and explore the filesystem, environment, and run commands manually:
steps:
  - uses: actions/checkout@v4
  - name: Run tests (may fail)
    id: tests
    run: npm test
    continue-on-error: true # don't abort the job before the tmate step
  - name: Setup tmate debugging
    # Note: after continue-on-error, the job is still "success", so failure()
    # would never fire — check the step's own outcome instead
    if: steps.tests.outcome == 'failure'
    uses: mxschmitt/action-tmate@v3
    timeout-minutes: 15 # auto-close after 15 min
    with:
      limit-access-to-actor: true # only the workflow triggerer can connect
Other useful debugging techniques:

- `run: env | sort` — print all environment variables at a step to verify injected values.
- `run: cat $GITHUB_EVENT_PATH | python3 -m json.tool` — inspect the raw event payload.
- Use `actions/upload-artifact` to save log files or test-result directories for inspection after the run.
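As a sketch of the artifact technique, test output can be saved only when something failed (the artifact name and path are illustrative):

```yaml
- name: Upload test logs on failure
  if: failure() # runs only when an earlier step failed
  uses: actions/upload-artifact@v4
  with:
    name: test-logs
    path: test-results/
```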
Branch protection rules enforce that certain GitHub Actions jobs must pass before a pull request can be merged into a protected branch. This creates a hard gate preventing broken code from landing on main.
Step 1 — Name your status check in the workflow. Each job name becomes a status check. Name jobs descriptively:
jobs:
  unit-tests: # this becomes the status check name
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm test

  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm run lint
Step 2 — Configure the branch protection rule. In GitHub: repository Settings → Branches → Add rule → enter the branch name pattern (e.g. main). Then enable:
- ☑ Require status checks to pass before merging
- ☑ Require branches to be up to date before merging (prevents races)
- Search for and add the exact job names: `unit-tests` and `lint`
Matrix builds create status checks with names like unit-tests (ubuntu-latest, 18) for each combination. You can require every matrix job individually, or use a "summary" job pattern — a final job that declares needs: on all required jobs and succeeds only if every one of them passed — and require only that one summary check.
all-tests-pass:
  if: always()
  needs: [unit-tests, lint]
  runs-on: ubuntu-latest
  steps:
    - name: Check all jobs
      if: contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')
      run: exit 1
Requiring the summary job (all-tests-pass) in the branch protection rule gives you a single, stable required check regardless of how many matrix cells exist.
Large monorepos present two main problems: every commit triggers all CI jobs even when only one service changed, and a single workflow file becomes unmanageably large. The solution combines path filtering, dynamic matrices, and workflow decomposition.
Strategy 1 — Per-service workflow files with built-in path filters. Each service gets its own workflow file triggered only when its directory changes:
# .github/workflows/service-auth.yml
on:
  push:
    paths: ['services/auth/**', '.github/workflows/service-auth.yml']
  pull_request:
    paths: ['services/auth/**']
Strategy 2 — Centralised change detection with dorny/paths-filter. One workflow detects which services changed and gates subsequent jobs:
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      auth: ${{ steps.filter.outputs.auth }}
      payment: ${{ steps.filter.outputs.payment }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            auth: ['services/auth/**']
            payment: ['services/payment/**']

  build-auth:
    needs: detect-changes
    if: needs.detect-changes.outputs.auth == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh auth
Strategy 3 — Dynamic matrix from changed services. A detection job produces a JSON array of changed service names, and a single build job consumes it as a matrix, avoiding N duplicated job blocks:
build:
  needs: detect-changes
  strategy:
    matrix:
      service: ${{ fromJSON(needs.detect-changes.outputs.changed-services) }}
  runs-on: ubuntu-latest
  steps:
    - run: ./build.sh ${{ matrix.service }}
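One way the detection job could produce that JSON array is by diffing changed paths and serialising the result with jq — a sketch under the assumption that each service lives under services/&lt;name&gt;/:

```yaml
detect-changes:
  runs-on: ubuntu-latest
  outputs:
    changed-services: ${{ steps.detect.outputs.services }}
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0 # need history to diff against the previous commit
    - id: detect
      run: |
        # Collect unique service directories touched by the last commit
        services=$(git diff --name-only HEAD^ HEAD \
          | grep '^services/' | cut -d/ -f2 | sort -u \
          | jq -R . | jq -cs .)
        echo "services=$services" >> "$GITHUB_OUTPUT"
```

Real monorepos usually diff against the merge base of the PR rather than `HEAD^`; the simple form above is for illustration.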
Additional tips: use concurrency: groups scoped to service + branch to prevent duplicate runs, cache aggressively per service, and consider GitHub Actions' repository-level reusable workflows to share build logic across all services without per-repo duplication.
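The per-service concurrency tip might be sketched as a job-level group keyed on the matrix value plus the branch (group name illustrative):

```yaml
# Job-level concurrency scoped to one service on one branch
concurrency:
  group: build-${{ matrix.service }}-${{ github.ref }}
  cancel-in-progress: true
```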
GitHub Actions can obtain a short-lived OpenID Connect (OIDC) JWT token for each workflow run. Cloud providers (AWS, Azure, GCP) can be configured to accept this token as proof of identity and issue temporary cloud credentials in exchange — eliminating the need to store long-lived API keys or access tokens in GitHub Secrets.
The token contains verifiable claims about the workflow run: repository name, branch, actor, environment, and the workflow ref. The cloud provider's trust policy checks these claims before granting access, so you can limit access to, for example, only the production environment on the main branch.
AWS example using aws-actions/configure-aws-credentials:
permissions:
  id-token: write # required to request the OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/GithubActionsDeployRole
          aws-region: us-east-1
          # No AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY needed
      - name: Deploy to S3
        run: aws s3 sync dist/ s3://my-bucket/
The AWS IAM role's trust policy specifies which GitHub repository and conditions it trusts:
{
  "Condition": {
    "StringEquals": {
      "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:environment:production"
    }
  }
}
Benefits over static credentials:
- No secret rotation needed — credentials expire automatically (typically 1 hour)
- No secret stored in GitHub — nothing to leak in logs or accidental commits
- Fine-grained trust — limit which repo, branch, or environment can assume the role
GitHub Actions workflows run code triggered by events — including potentially untrusted content from pull requests — so hardening them against secret exposure and code injection is essential.
1. Pin third-party actions to a full commit SHA. A mutable version tag like @v3 can be silently updated to inject malicious code. A SHA cannot be changed:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
2. Use minimum required permissions. Declare permissions: {} at the workflow level to deny everything, then grant only what specific jobs need:
permissions: {}

jobs:
  release:
    permissions:
      contents: write
      packages: write
3. Never interpolate untrusted input directly into run: scripts. Pull request titles, branch names, and issue bodies are attacker-controlled. This is vulnerable to shell injection:
# DANGEROUS — do not do this
- run: echo "PR title: ${{ github.event.pull_request.title }}"
Safe approach — pass through an environment variable:
- run: echo "PR title: $PR_TITLE"
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
4. Avoid pull_request_target unless you understand its risks. It runs in the base branch context with access to secrets, so executing checkout + build of the fork code is dangerous.
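A sketch of a pull_request_target workflow that stays safe by operating only on PR metadata and never checking out fork code:

```yaml
on: pull_request_target
permissions:
  contents: read
  pull-requests: write
jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      # Deliberately no checkout of the fork's code here —
      # the labeler reads PR metadata and repo config via the API
      - uses: actions/labeler@v5
```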
5. Use GitHub's security features alongside Actions:
- Enable secret scanning to detect accidentally committed credentials
- Use `github/codeql-action` for SAST in the CI pipeline
- Enable Dependabot to auto-update action versions
- Use environment protection rules (required reviewers) for production deployments
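A minimal sketch of the Dependabot configuration that keeps action versions current:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/" # scans .github/workflows/
    schedule:
      interval: "weekly"
```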
All three are CI/CD platforms but differ significantly in architecture, hosting model, and integration depth. Here is a direct comparison across the dimensions that matter most for a team choosing between them:
| Dimension | GitHub Actions | Jenkins | GitLab CI |
|---|---|---|---|
| Hosting | SaaS (GitHub-managed) or self-hosted runners | Always self-hosted; you manage the master + agents | SaaS (gitlab.com) or self-managed GitLab + runners |
| Config format | YAML in `.github/workflows/` | Groovy DSL in `Jenkinsfile` | YAML in `.gitlab-ci.yml` |
| Ecosystem / plugins | Actions Marketplace (thousands of actions) | 1,800+ plugins; very mature but plugin conflicts common | Built-in features for SAST, DAST, registry, pages |
| Setup effort | Zero — create a YAML file and push | High — install, configure, maintain Jenkins server | Low on gitlab.com; moderate for self-managed |
| Cost model | Free for public; metered minutes for private | Free software; you pay infrastructure costs | Free tier on gitlab.com; paid tiers for more minutes/features |
| SCM integration | Native — deeply integrated with GitHub PRs, issues, releases | Webhook-based; GitHub plugin required | Native — deeply integrated with GitLab MRs, registry, security scans |
| Reusability | Reusable workflows, composite actions, Marketplace | Shared libraries, shared Jenkinsfiles | Include templates, extends, components catalog |
| Best for | Teams already on GitHub wanting zero-ops CI/CD | Enterprises with complex on-prem requirements and existing Jenkins investment | Teams that want a full DevSecOps platform (code → security → deploy) in one tool |
In practice: if your code is on GitHub and you want to avoid managing infrastructure, GitHub Actions is the natural first choice. Jenkins wins when you have deep customisation requirements or a legacy pipeline that pre-dates modern SaaS offerings. GitLab CI is compelling when you want the full DevSecOps suite (built-in container registry, SAST, DAST, dependency scanning) without stitching together separate tools.
