What’s the big deal about an SSH tunnel?

The big deal is that you can control access to the tunnel via IAM in AWS: if a user leaves your organization, they can no longer SSH to the machine via SSM, even if they still have a local SSH user on the machine.
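As a sketch, an IAM policy scoping session access to a single instance might look like the following; the region, account ID, and instance ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": [
        "arn:aws:ec2:us-west-2:123456789012:instance/i-1234567890abcdef0",
        "arn:aws:ssm:*:*:document/AWS-StartSSHSession"
      ]
    }
  ]
}
```

Attach a policy like this to the users or roles who should be able to tunnel in; everyone else is denied by default.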

This is great for users who do not deploy internet gateways (IGWs) in their VPCs.

I’ve followed the instructions, but it’s not working

There are quite a few steps to get this set up, including installing the Session Manager plugin on the client side and configuring your SSH config to use a ProxyCommand. Even if you follow the docs word for word, including adding the VPC endpoints, you may still have issues.
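For reference, the ProxyCommand stanza from the AWS docs looks roughly like this in ~/.ssh/config (this matches the proxy command visible in the debug output below):

```
Host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```

With this in place, any `ssh i-…` or `ssh mi-…` invocation is transparently routed through an SSM session instead of a direct TCP connection.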

If you have done the optional SSH configuration but get nothing when SSH’ing, turn on debug mode; you may end up with output like this:

$ ssh -vvv -i key -l ec2-user i-0b3fb981ffde2c5d3
OpenSSH_7.9p1, LibreSSL 2.7.3
debug1: Reading configuration data /Users/user/.ssh/config
debug1: /Users/user/.ssh/config line 34: Skipping Host block because of negated match for i-*
debug1: /Users/user/.ssh/config line 57: Applying options for i-*
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug1: Executing proxy command: exec sh -c "aws ssm start-session --target i-0b3fb981ffde2c5d3 --document-name AWS-StartSSHSession --parameters 'portNumber=22'"
debug1: identity file key type -1
debug1: identity file key-cert type -1
debug1: identity file /Users/user/.ssh/id_rsa type 0
debug1: identity file /Users/user/.ssh/id_rsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9
debug1: ssh_exchange_identification:

debug1: ssh_exchange_identification: Starting session with SessionId: user

debug1: ssh_exchange_identification: \033[?1034hsh-4.2$
\033[Ksh-4.2$ SSH-2.0-OpenSSH_7.9


debug1: ssh_exchange_identification: sh: SSH-2.0-OpenSSH_7.9: command not found


debug1: ssh_exchange_identification: sh-4.2$

In the output above, you’ll see that it never actually tries to authenticate you with the key you’ve provided; instead the session drops into a plain shell prompt.

If this is the case, update the SSM agent on the EC2 instance. If the EC2 instances do not have a path to the internet, add an S3 endpoint to the VPC first. Then run the following commands, replacing the instance ID with the ID of your EC2 instance and the region with the region you’re deploying into:
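Creating that S3 gateway endpoint first can be done from the CLI; a sketch, with placeholder VPC and route table IDs:

```shell
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc1234def567890 \
    --service-name com.amazonaws.us-west-2.s3 \
    --route-table-ids rtb-0abc1234def567890 \
    --region us-west-2
```

With the endpoint in place, the agent update below can pull the new agent version from the regional S3 bucket without internet access.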

export instanceid="i-1234567890"
aws ssm send-command --document-name "AWS-UpdateSSMAgent" \
                     --document-version "\$LATEST" \
                     --targets "Key=InstanceIds,Values=${instanceid}" \
                     --parameters '{"version":[""],"allowDowngrade":["false"]}' \
                     --timeout-seconds 600 \
                     --max-concurrency "50" \
                     --max-errors "0" \
                     --region us-west-2
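send-command returns a CommandId you can use to poll the run; a sketch, with the command ID as a placeholder:

```shell
aws ssm list-command-invocations \
    --command-id "12345678-1234-1234-1234-123456789012" \
    --query 'CommandInvocations[].Status' \
    --output text \
    --region us-west-2
```

The status moves from Pending through InProgress to Success (or Failed) as the agent update runs on the instance.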

Once the command completes, which may take a minute or so, you should then be able to log in:

$ ssh -i key -l ec2-user i-0b3fb981ffde2c5d3
Warning: Permanently added 'i-0b3fb981ffde2c5d3' (ECDSA) to the list of known hosts.

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
No packages needed for security; 6 packages available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-10-151-0-79 ~]$

If your debug output doesn’t get as far as the output above did before the SSM update, check all your policies and settings, and make sure your instance shows up in the inventory of managed instances in the Systems Manager service.
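A quick way to check that inventory from the CLI is describe-instance-information, which also shows the agent version each instance is running:

```shell
aws ssm describe-instance-information \
    --query 'InstanceInformationList[].[InstanceId,PingStatus,AgentVersion]' \
    --output table \
    --region us-west-2
```

If your instance is missing from this list, or its PingStatus is not Online, SSM can’t reach the agent at all, and no amount of SSH configuration will help until that’s fixed.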