Enabling Logs in Session Manager

We learn a cool way to log session activity to S3 using Session Manager

Prerequisites

The Lesson

Waaayyyy back in our early labs we learned about the importance of management plane logging, and we implemented CloudTrail through our AWS Organization. But that’s only one source of logs; I like to break things down into a few broad categories to know which to collect:

  • Management plane logs: the cloud provider’s API calls, including CloudTrail.

  • Service logs: produced by a service when you enable it. Although CloudTrail grabs all the main API calls, some services produce their own separate logs. For example GuardDuty, if you configure it to export its findings.

  • Resource logs: from an individual resource such as an EC2 instance or S3 bucket. To be honest these sometimes overlap with service logs, but not always.

Here’s an easy way to think about it. When I make API calls into AWS they are recorded in a management plane log. When I turn some services on they keep their own logs, which might be stored in different places — usually S3 or Amazon CloudWatch. Individual resources including EC2 instances, containers, S3 buckets, VPCs (with flow logs), and Lambda functions can create their own logs.

In AWS we have a weird situation where some types of activity we want to track happen via API, but aren’t considered Management Events which show up in CloudTrail logs by default. Amazon calls these Data Events, and they are for resources like S3 buckets and Lambda functions, where there might be a very high volume and AWS doesn’t want to keep the damn things unless someone is paying for them. We will talk about these when we get to data security.

To me these are resource logs — they just happen to be collected via CloudTrail instead of some other mechanism. You still need to activate them and they cost more.
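
If you’re curious what turning on Data Events eventually looks like, here’s a rough sketch of the event selector JSON you can hand to CloudTrail (via the PutEventSelectors API) to record S3 object-level activity for a single bucket. The trail name and bucket ARN below are made up, and we’ll do this for real when we get to data security:

{
  "TrailName": "my-org-trail",
  "EventSelectors": [
    {
      "ReadWriteType": "All",
      "IncludeManagementEvents": true,
      "DataResources": [
        {
          "Type": "AWS::S3::Object",
          "Values": ["arn:aws:s3:::example-sensitive-bucket/"]
        }
      ]
    }
  ]
}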

Do I need all these logs for security? Probably not. All of them are useful in investigations, but many of them are operational and don’t affect us. Storing logs is stupidly expensive, even in cloud — so we need to pick and choose.

Workloads is the term we use for the places where our code and apps run. Workloads include instances, containers, and Lambda functions. For workload logs, the main thing I want to collect is user activity. Like when someone logs in, and what they do.

As previously mentioned, Session Manager has a capability to store logs in S3 or CloudWatch. It will record every command you type into the console (unless you get fancy with SSH tunneling, which we aren’t covering). This is awesome because we can:

  • Restrict who can log into which instances using IAM (there’s a policy sketch right after this list).

  • Since we enforce MFA when someone logs into IAM Identity Center, all terminal sessions are also protected with MFA.

  • Session Manager tracks all sessions and attributes them back to the user who logged in.

  • If we enable it, Session Manager will log every command someone types during a session.

  • You can trigger notifications and events based on specific activity when you log sessions.
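
Here’s the policy sketch I mentioned in the first bullet: a minimal example of an identity policy that only allows starting sessions on instances carrying a particular tag. The tag key and value are placeholders for whatever tagging scheme you actually use:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {
          "ssm:resourceTag/Environment": "Dev"
        }
      }
    }
  ]
}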

Guess what we are going to do today?

Now a few caveats:

  • This only logs the commands you type during a session.

  • If you want system logs, you need to use a different tool like the CloudWatch agent, which saves system logs to CloudWatch (or you can ship them to your SIEM using other tools). There’s a sample config snippet after this list.

  • Logs store everything you type, and if that includes passwords you should use the special command line in this documentation.

  • If you let someone log in outside Session Manager (or someone breaks in) that activity won’t be logged (obviously).

  • For reasons we’ll discuss in the Lab, I believe the logs are written out by the SSM agent running inside the instance, and aren’t collected by the service itself. That’s actually a good thing — it means AWS isn’t sniffing our sessions!
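
And here’s the config snippet promised in the system logs caveat: a minimal fragment of a CloudWatch agent configuration that ships an instance’s auth log to CloudWatch Logs. The log group name is made up, and a real config would list more files:

{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/secure",
            "log_group_name": "ec2-system-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}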

Key Lesson Points

  • The main cloud security log types are management plane, service, and resource.

  • Session Manager can store session activity in CloudWatch or S3.

  • These steps record all commands, but if you want full system logs you need a different mechanism.

The Lab

Today we will:

  • Launch our CloudFormation template to create our VPC.

  • Turn on Session Manager logging.

  • Add an inline IAM policy with permissions to use Session Manager and write the logs to that S3 bucket.

    • I could have automated this, but I want you to see how things are wired.

  • Connect, type some stuff, and look at the logs.

🚨 Don’t Make My Mistake 🚨 

In prepping this lab I wasn’t able to make a connection to the instance when I turned on the encryption. At first I thought it was a permissions issue, but then I realized my instance didn’t have a way to talk to S3!

I fixed it by adding a new VPC endpoint for S3 and everything worked. I followed my usual debugging process for these kinds of issues:

  1. Is it a permissions issue? I added the AdministratorAccess policy; when that didn’t work, I knew it wasn’t related to permissions. (Note: only do this in dev/test/sandbox environments, and always remove it when you are done. Many Bothans died to bring us this knowledge).

  2. Is it a connectivity issue? Yep, nailed it in 2! It only took me a few minutes to realize there was no route to S3.

Video Walkthrough

Step-by-Step

A reminder that as we progress I don’t always show screenshots for tasks that we’ve already done a bunch. You can’t really learn to ride a bike if the training wheels stay on! Feel free to review the video if you get stuck.

As always, start with your Sign-in Portal, then go to TestAccount1 > AdministratorAccess > CloudFormation. Verify you are in Oregon, then Create stack and use this template:

Name it SLAW and click through until you can Submit.

As I mentioned, this adds a VPC endpoint for S3 and creates the needed S3 bucket. It’s otherwise identical to our last lab. You’ll need to wait until the stack shows Create complete before moving on to the next step, which should take around 3 minutes.
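
If you want to peek under the hood, a Gateway VPC endpoint for S3 in a CloudFormation template looks roughly like this. The resource and Ref names here are illustrative rather than copied from our template:

{
  "S3GatewayEndpoint": {
    "Type": "AWS::EC2::VPCEndpoint",
    "Properties": {
      "VpcEndpointType": "Gateway",
      "ServiceName": "com.amazonaws.us-west-2.s3",
      "VpcId": { "Ref": "VPC" },
      "RouteTableIds": [ { "Ref": "PrivateRouteTable" } ]
    }
  }
}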

Now go to Systems Manager > Session Manager > Preferences > Edit:

Go to S3 Logging > Enable, then check Allow only encrypted S3 buckets, and select the bucket that starts with session-manager-logs, which I created for you with CloudFormation. Then scroll down and click Save:

All new S3 buckets in AWS are encrypted by default using Amazon S3 managed keys (SSE-S3). Down the road I’ll teach you how to work with your own keys, but this is good enough for today.
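
Behind the scenes, these preferences are saved as a Session Manager preferences document in the region. Very roughly, ours should now look something like this sketch (the bucket name suffix will differ in your account, and I’m omitting some optional settings):

{
  "schemaVersion": "1.0",
  "description": "Document to hold regional settings for Session Manager",
  "sessionType": "Standard_Stream",
  "inputs": {
    "s3BucketName": "session-manager-logs-xxxxxxxx",
    "s3EncryptionEnabled": true,
    "cloudWatchLogGroupName": "",
    "cloudWatchEncryptionEnabled": true
  }
}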

Logging is turned on, but as a reminder, the SSM agent pre-installed on our instances is what saves the log files. It doesn’t have access to our S3 bucket, so we need to give it permission (and as a reminder, there is a VPC endpoint for S3 running so it can connect).

Go to IAM > Roles > SSMClient:

Since this is a one-off we don’t need to create a managed policy, and can use our first inline policy. As a reminder this is a policy which is only available to this role, and we can’t reuse it elsewhere (although we could copy and paste to create a new policy). Go to Add permissions > Create inline policy:

Click on JSON, copy this policy, and paste it into the window. Don’t save yet — we need to swap in the ARN of our actual S3 bucket. The little red error marker even tells us where:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "your s3 ARN/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetEncryptionConfiguration"
      ],
      "Resource": "*"
    }
  ]
}

I suggest opening up a second tab and going to S3. Here’s the URL if you want to just paste it in: https://us-west-2.console.aws.amazon.com/s3/home?region=us-west-2#

In S3 go to Buckets, then your session-manager-logs-xxx bucket, then Properties, and click the Copy icon next to the Amazon Resource Name (ARN):

Then go back to your IAM Tab and paste the ARN where it says… your S3 ARN. I really try to keep things obvious. It’s vitally important to keep the /* at the end of that line inside the quotes!
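
For example, with a made-up bucket suffix, that first Resource line should end up looking like this (yours will have a different suffix):

  "Resource": "arn:aws:s3:::session-manager-logs-abc123xyz/*"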

Click Next, name it SSMLogs, and Create policy:

Now our instances have access to write files to this encrypted S3 bucket. Let’s test it out!

Go to EC2 > Instances > Slaw > Connect > Session Manager > Connect. Assuming everything was set up correctly you’ll see a normal Session Manager window:

Now type some commands like ls. Or oops, or really any text. It will all get logged. When you are bored click Terminate.

It will take a short time for the logs to show up, usually 30 seconds to a minute. Click your stopwatch, wait, then go back to S3 > Buckets and this time Objects. You should see a log file. If not, refresh until it shows up. Then click the object:

Then click Download. You can now review your log by opening it in a text editor:

Pretty cool! We now have an easy way to save our logs to some of the cheapest (note, NOT free!) storage out there.

In an enterprise environment I would ship these off to my logging account, but this works well for our lab exercise.

Important Cleanup Instructions

Okay, time to clean up. You can’t delete the CloudFormation stack until you do two things!!! CloudFormation can’t always delete things we modify.

First we need to empty our S3 bucket. Go to S3 > Buckets, click the radio button next to your bucket, and then Empty:

When prompted, type permanently delete and click Empty. If you have a ton of objects this can take a very long time. Cleaning out an S3 bucket is one of the most painful processes in AWS.

Next we need to remove our inline policy. Go to IAM > Roles > SSMClient > Permissions, click the SSMLogs policy, then Remove:

All done? Great, go to CloudFormation > Slaw > Delete. You don’t get a screenshot for that one since you’ve done it something like 10 times by now.

Lab Key Points

  • When saving logs to an S3 bucket using Session Manager, the instance needs a role with permissions to save to the S3 bucket.

  • It also needs a network route to connect. In our case we used a VPC Endpoint again.

-Rich
