Centralize Security Triggers with StackSets and EventBridge
In part 3 of our series on Epic Automation, we will use CloudFormation StackSets to consistently deploy our EventBridge rule throughout a target OU.
Prerequisites
Complete the previous two labs in this series, where we created the central SecurityAutomation event bus and used StackSets to deploy the cross-account IAM role.
The Lesson
With the mechanics of the last two labs out of the way, it’s time to start wiring up the first part of our automation platform! We set up our central event bus and pushed out required permissions with StackSets, so now it’s time to start feeding the events into our SecurityOperations account.

Yeah, I’m old, get over it.
This week we’ll drop back into CloudFormation StackSets and push out our EventBridge rule to forward the events we are interested in to our SecurityAutomation event bus.
My goal in this series isn’t merely to teach you the mechanics of what we are doing, but also the problem-solving and reasoning behind it. As a reminder, we have a very specific desired security outcome, which in this case happens to be a security invariant:
If a bucket in a production account is tagged with a key of classification and a value of sensitive, do not share it outside the organization.
This translates to the following detailed requirements:
Identify all buckets tagged sensitive
Identify when a bucket is newly tagged sensitive, or that tag is removed
Determine whether the bucket is accessible from outside the organization
Update the security controls to restrict access to our organization
Track any configuration changes (e.g., bucket policy updates) to ensure access isn’t opened from outside our organization.
There’s a bit of nuance here. We obviously need to find buckets tagged sensitive in production accounts. And we need to do this for anything currently tagged sensitive, anything newly tagged sensitive, or buckets moved into a production state which are… tagged sensitive.
The first part of the problem is understanding the conditions. What are the different ways a bucket could satisfy them? Let’s break it down in more detail, using the overall structure for our Organization.
We know an account is considered production if it is in our Workloads > Production OU. This is actually a major obstacle in many organizations I’ve worked with, which aren’t disciplined about their OU structure or how they designate accounts. If you work someplace where this isn’t as straightforward, you’ll need to think about other ways to track it. One option is to check whether the bucket itself is tagged production, but I find that… less reliable.
We know some buckets may already be tagged.
To find these we need to run some API calls to look at every bucket and see whether it’s tagged sensitive.
That kind of scan usually runs on a schedule. We won’t do that part today, but when we do we’ll set it hourly. I call these ‘sweeps’, since the objective is to find new things that otherwise can slip through the cracks.
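To make that concrete, here’s a minimal sketch of what such a sweep could look like in Python with boto3. The function name and structure are my own illustration, not this lab’s code; we’ll build the real version in a later lab.

import boto3

s3 = boto3.client("s3")

def find_sensitive_buckets():
    # Sweep every bucket and collect those tagged classification=sensitive.
    sensitive = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            tags = s3.get_bucket_tagging(Bucket=name)["TagSet"]
        except s3.exceptions.ClientError:
            continue  # GetBucketTagging raises an error when a bucket has no tags
        if any(t["Key"] == "classification" and t["Value"] == "sensitive" for t in tags):
            sensitive.append(name)
    return sensitive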
We also need to identify when a bucket is tagged sensitive. We can do this by watching CloudTrail events, and to do that we will use EventBridge.
Once we know a bucket is tagged sensitive and in a production account, we need to check its permissions.
Since we definitely want to block public access, we can check that setting and turn it on.
We want to disable ACLs, if they are enabled.
Since we want to restrict access to only our Organization, the only way to do that is with a bucket policy. We need to check whether the bucket already has a policy, and if not create one which restricts access to our org.
If a bucket policy already exists, we want to add a deny statement to restrict access to only our organization (there’s a sketch covering both cases after this list).
If a bucket policy is modified, check it to make sure access is still restricted.
If BPA (Block Public Access) is modified, check to make sure it’s still enabled.
I’m skipping checking for enabling ACLs, since BPA protects against that risk (mostly — we may come back to this later).
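Here’s the sketch I promised, assuming Python with boto3 and a placeholder Organization ID. Treat it as illustrative rather than this lab’s code; a production version of that blanket deny statement would also need carve-outs for trusted service principals.

import json
import boto3

s3 = boto3.client("s3")

ORG_ID = "o-examplea1b2"  # placeholder: substitute your Organization ID

def lock_down_bucket(bucket):
    # 1. Turn on all four Block Public Access settings.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    # 2. Disable ACLs by enforcing bucket-owner ownership.
    s3.put_bucket_ownership_controls(
        Bucket=bucket,
        OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
    )
    # 3. Deny any principal outside our organization.
    deny = {
        "Sid": "DenyOutsideOrg",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"StringNotEquals": {"aws:PrincipalOrgID": ORG_ID}},
    }
    try:
        # Append our deny statement to an existing policy...
        policy = json.loads(s3.get_bucket_policy(Bucket=bucket)["Policy"])
        policy["Statement"].append(deny)
    except s3.exceptions.ClientError:
        # ...or create a new policy if the bucket doesn't have one.
        policy = {"Version": "2012-10-17", "Statement": [deny]}
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))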
If we wanted to get even fancier, we could detect immediately when a bucket moves into our Production OU. I’m skipping that since our hourly sweep will find it, and the logic and code to implement that are more complicated.
With those requirements, knowing we have our central SecurityAutomation event bus ready to receive events, what events do we need to send? This is for the real-time detection part of the problem — we will cover how to enable the sweep later.
There are six events which should trigger our real-time automation:
PutBucketTagging
DeleteBucketTagging
PutBucketPolicy
DeleteBucketPolicy
PutPublicAccessBlock
DeletePublicAccessBlock
Make sense? Notice that CreateBucket isn’t there — tagging comes after bucket creation, and that’s when we care. These events should detect when a bucket meets our tag requirement, or is changed in such a way that it might no longer be compliant.
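By the way, if you want to sanity-check that an event pattern built from this list actually matches, EventBridge exposes a TestEventPattern API. Here’s a quick boto3 sketch; the sample event is abbreviated and made-up, but it has the fields the pattern cares about.

import json
import boto3

events = boto3.client("events")

# The same pattern our rule will use (more on the rule below).
pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "source": ["aws.s3"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": [
            "PutBucketTagging", "DeleteBucketTagging",
            "PutBucketPolicy", "DeleteBucketPolicy",
            "PutPublicAccessBlock", "DeletePublicAccessBlock",
        ],
    },
}

# Abbreviated, hypothetical example of what CloudTrail delivers to EventBridge.
sample_event = {
    "id": "12345678-1234-1234-1234-123456789012",
    "detail-type": "AWS API Call via CloudTrail",
    "source": "aws.s3",
    "account": "111122223333",
    "time": "2025-01-01T00:00:00Z",
    "region": "us-west-2",
    "resources": [],
    "detail": {"eventSource": "s3.amazonaws.com", "eventName": "PutBucketTagging"},
}

result = events.test_event_pattern(
    EventPattern=json.dumps(pattern),
    Event=json.dumps(sample_event),
)
print(result["Result"])  # True means the rule would fire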
Here’s a pretty picture:

Way back in Use EventBridge for Security Hub Alerts we set up our first EventBridge rule to forward all Security Hub findings to email via Simple Notification Service (SNS). This time we will do something similar, with four differences:
We will trigger on CloudTrail events.
We will specify only the events (listed above) which we care about.
We will forward the events to another event bus in another account, instead of to SNS.
We will use StackSets to deploy the EventBridge rule into all accounts in an entire OU (not that we have any accounts there yet).
Okay, what’s my thinking here? There is a cost to pushing events around, and we only care about some events, so let’s just forward those. Since everything is within our own AWS organization, and we plan to use serverless/Lambda functions for analysis and remediation, it makes sense to go from event bus to event bus instead of using SNS. SNS can be a good option for security automation, but it depends on the use case. For example, it’s often better for communicating between application components in your stack when you need a pub/sub model. In this case we are working with native AWS events, so there’s no reason to move them into a different service; we only want to move the events into a different account.
Here’s a quick diagram of what we will build in this lab. I’ll update it every week as we add new components to the architecture (I’m using OU folders to indicate things deployed to every account in that OU):

StackSets to deploy, EventBridge to collect
Key Lesson Points
Always define your desired outcome before building any kind of autoremediation.
Use a mix of time-based sweeps and, if needed, real-time detection.
CloudTrail is often the best trigger for remediations based on configuration changes. Once you know the desired outcome and conditions which could cause misconfiguration, you can map the API calls which could trigger automation.
When working with AWS native events, EventBridge to EventBridge using a Rule is a great option to centralize events.
Use StackSets to push the right automations into the desired OUs. This is one reason we spent so much time building out a well-architected OU hierarchy.
The Lab
This is another relatively quick and easy one.
Create an account in our Production OU so we have a target account to work with.
Create a new StackSet which builds an EventBridge Rule to send specific S3 events into our SecurityAutomation event hub, which we previously created.
As a reminder, we already deployed an IAM role with the proper permissions to send events across accounts to that SecurityAutomation hub.
Deploy the stack to our Workloads > Production OU.
Video Walkthrough
Step-by-Step
Start at your Sign-on portal > CloudSLAW > AdministratorAccess > Organizations > Add an AWS account:

Call it Production1 and use your “+” email address. I recommend yourname+slawprod1@yourdomain.com or equivalent. Then Create account:

Wait 30 seconds or so and then refresh the page, scroll down, and check Production1. Then scroll up > Actions > Move:


Then pick Workloads > Prod > Move AWS account (if you see Move bank account as an option, drop me a line):

To finish up, you need to copy the ID for your Prod OU. It will start with ‘ou-’:

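(If you’d rather grab OU IDs programmatically, the Organizations API can walk the tree. Here’s a quick boto3 sketch; run it with your management account credentials, and repeat the call with each parent’s ID to drill down from Workloads to Prod.)

import boto3

org = boto3.client("organizations")

# List the OUs directly under the root; rerun with an OU's Id to go deeper.
root_id = org.list_roots()["Roots"][0]["Id"]
ous = org.list_organizational_units_for_parent(ParentId=root_id)
for ou in ous["OrganizationalUnits"]:
    print(ou["Id"], ou["Name"])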
Next, close the window that’s logged into your management account. Now we’ll switch over to SecurityOperations > AdministratorAccess > Oregon region > CloudFormation > StackSets > Service-managed > Create StackSet:

Leave the default settings (e.g., Service managed permissions) and paste in this URL for the S3 template https://cloudslaw.s3-us-west-2.amazonaws.com/slaw51.template, then Next:

Since we are, again, pushing CloudFormation into other people’s accounts, it is very important to use clear names and descriptions. I can’t tell you how many times I’ve performed assessments and seen generic names and descriptions which don’t tell the account owner/user who pushed the template, or why. Look, people, it isn’t like they can’t just read the template to see what it’s doing; security through obscurity (or laziness) ain’t gonna help here. Go with SecOpsEventForwarder and EventBridge rule for forwarding certain events to the security operations team.

Next go to the upper-right corner and copy your account ID, then paste it into Parameters > SecOpsAccountId. Since we are in the SecurityOperations account where our SecurityAutomation event hub lives, we can snag the ID nice and easy. Then click Next:


On the next screen Add stacks to stack set > Deploy to organizational units > paste in your Prod OU ID, then scroll down:

Then add the Oregon and Virginia regions. Even though we are deploying from Oregon (us-west-2), CloudFormation will deploy the stack in multiple specified regions. These are the only two we are using, and the rest are blocked by our SCP. Then scroll down and click Next. Since we only have one account in that OU, we don’t need to worry about increasing concurrency.

On the next page click Submit. If you need a screenshot for that step I’ve failed as a teacher and need to go back to working for a living (oh wait, nobody pays me for this!). While that deploys, it’s a good time to review the template:
Parameters:
  SecOpsAccountId:
    Type: String
    Description: "AWS Account ID where the central event bus is located"

Resources:
  S3BucketSecurityMonitorRule:
    Type: AWS::Events::Rule
    Properties:
      Name: s3-bucket-security-monitor
      Description: "Monitors S3 bucket policy, tag, and public access block changes"
      State: ENABLED
      EventPattern: {
        "detail-type": ["AWS API Call via CloudTrail"],
        "source": ["aws.s3"],
        "detail": {
          "eventSource": ["s3.amazonaws.com"],
          "eventName": [
            "PutBucketTagging",
            "DeleteBucketTagging",
            "PutBucketPolicy",
            "DeleteBucketPolicy",
            "PutPublicAccessBlock",
            "DeletePublicAccessBlock"
          ]
        }
      }
      Targets:
        - Arn: !Sub arn:aws:events:${AWS::Region}:${SecOpsAccountId}:event-bus/SecurityAutomation
          Id: "CentralSecurityBus"
          RoleArn: !Sub arn:aws:iam::${AWS::AccountId}:role/SecurityOperations/SecurityEventForwarder
A few important things to pay attention to. Notice we use the !Sub option to swap in two variables. ${AWS::Region} will insert the region where the stack is being deployed, which is absolutely essential if you use StackSets to deploy in multiple regions. We also sub in the account ID we entered as a parameter for our SecurityOperations account; that’s because I’m hosting the template, and the parameter keeps it portable to your account. And you can see the CloudTrail events we are pulling and what it looks like to define those.
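Once the deployment finishes, one way to confirm events actually flow is to generate one of our six API calls from the Production1 account. For example, tagging any bucket there (the bucket name below is hypothetical) should fire the rule, and you can watch the rule’s Invocations metric in CloudWatch to confirm:

import boto3

s3 = boto3.client("s3")

# Tag a (hypothetical) test bucket in Production1 to trigger the forwarding rule.
s3.put_bucket_tagging(
    Bucket="my-prod1-test-bucket",
    Tagging={"TagSet": [{"Key": "classification", "Value": "sensitive"}]},
)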
And that’s it for this week. If you dig into the CloudFormation console you can track the deployment and see it in both regions of the one new account we created:

To review, we now have:
Our event hub to receive the events we are interested in. Soon we will trigger automations from those events.
A role to forward events to the event hub, which we deployed everywhere since we may want to use it for more than this lab.
An EventBridge rule in our lonely production account. But not to worry — since it’s in an auto-deploy StackSet, any time we add an account to that OU, CloudFormation will automatically deploy the stack.
-Rich