I have had a small itch for a while when using GitHub Actions to run Terraform deployments: there was no easy way to get a quick overview of which resources will be created, changed, or deleted if I approve and merge a pull request. So I spent a little time scratching that itch and put together a small solution that uses Python to parse the output of terraform plan and write the result to GitHub's job summary.
The end result of implementing these steps will look similar to this:

In this post I will skip over how to set up the GitHub Action to trigger on pull requests or merges to the main branch, as well as the setup and init stages of Terraform. A complete example can be found on gist.github.com
Run terraform plan
The first addition to our usual steps when running Terraform deployments in CI/CD is to use terraform show to convert our plan into JSON, a format more suitable for exploring than captured stdout or the binary format output by terraform plan:
```yaml
- name: Plan
  run: |
    terraform plan -input=false -out=tfplan
    terraform show -json -no-color tfplan > tfplan.json
```
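For context, the JSON that terraform show writes follows Terraform's machine-readable plan format, where each entry in resource_changes carries an address and a change.actions list. A minimal sketch of the structure the later parsing step relies on; the sample plan below is fabricated for illustration, a real plan contains many more fields per resource:

```python
import json

# Fabricated, heavily trimmed stand-in for tfplan.json.
sample_plan = json.loads("""
{
  "resource_changes": [
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}},
    {"address": "aws_instance.web",   "change": {"actions": ["delete", "create"]}},
    {"address": "aws_iam_role.ci",    "change": {"actions": ["no-op"]}}
  ]
}
""")

# Map each resource address to its comma-joined planned actions.
actions = {
    rc["address"]: ",".join(rc["change"]["actions"])
    for rc in sample_plan["resource_changes"]
}
print(actions)
```

A replace-then-create shows up as "delete,create", which is why the actions list is joined into a single string later on.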
Set up Python and install dependencies
Next up we need to set up a Python environment in our job and install two dependencies. Pandas is a wonderful library for analysis and manipulation of data. It's probably overengineering at its finest for this task, but when it's the hammer you know, you use it on everything that looks like a nail.
```yaml
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    cache: 'pip'
    python-version: '3.13'

- name: Install python dependencies
  run: pip install -r requirements.txt
```
requirements.txt
```text
pandas == 2.2.0
tabulate == 0.9.0
```
The reason I use a requirements.txt here is to enable pip caching: I don't want to wait for the action to download these dependencies on every run. The tabulate package is included because pandas' to_markdown relies on it under the hood.
Parse the output from terraform plan
The next, and final, step is to use Python to parse our JSON formatted plan and display an overview of the proposed changes in the job summary:
```yaml
- name: Parse terraform plan
  shell: python
  run: |
    import json
    import os

    import pandas as pd

    with open('tfplan.json') as f:
        data = json.load(f)

    df = pd.json_normalize(data['resource_changes']).fillna(0)
    df.columns = df.columns.str.replace('.', '_', regex=False)
    df['action'] = [','.join(map(str, actions)) for actions in df['change_actions']]
    df_filtered = df[~df['action'].str.contains('no-op')]

    if df_filtered.shape[0] > 0:
        markdown_output = df_filtered[['address', 'action']].to_markdown(index=False)
        with open(os.environ['GITHUB_STEP_SUMMARY'], 'a') as gh:
            gh.write('### Overview of changes from terraform plan\n\n')
            gh.write(markdown_output)
```
This code can probably be improved quite a bit; I ran into some issues isolating the subset of data I need and stripping away the rest. The overall logic goes something like this:
- Open the JSON formatted plan from the file system
- Load the data into a normalized Pandas DataFrame, and replace all the "NaN" values with zeroes
- Because the normalization flattens all the nested data structures, we end up with column names containing dots that cause problems, so we replace the dots with underscores
- The change_actions column contains information on which actions Terraform will take on that specific resource, but encapsulated in a list. We add a new column to our DataFrame with the result as a string
- Create a new DataFrame with only the rows for resources that will be changed, created, or destroyed, removing all the resources that are already consistent with our plan
- If at least one resource will be changed, output a markdown formatted overview of the changes to the job summary
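Since pandas is admittedly overkill here, the same filtering logic can also be expressed in plain Python if you want to drop the dependency. A rough sketch using fabricated plan data; in CI you would load the real tfplan.json instead:

```python
import json

# Fabricated stand-in for tfplan.json, trimmed to the fields we use.
plan = json.loads("""
{
  "resource_changes": [
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}},
    {"address": "aws_iam_role.ci",    "change": {"actions": ["no-op"]}},
    {"address": "aws_instance.web",   "change": {"actions": ["delete", "create"]}}
  ]
}
""")

# Keep only resources whose planned actions are not a no-op.
rows = [
    (rc["address"], ",".join(rc["change"]["actions"]))
    for rc in plan["resource_changes"]
    if rc["change"]["actions"] != ["no-op"]
]

if rows:
    # Build the same kind of markdown table to_markdown would produce.
    lines = ["| address | action |", "| --- | --- |"]
    lines += [f"| {addr} | {action} |" for addr, action in rows]
    markdown = "\n".join(lines)
    print(markdown)
```

The trade-off is that you give up the convenience of DataFrame operations and to_markdown, but the step no longer needs a requirements.txt or pip caching at all.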