
I can log into my server with Cyberduck or FileZilla but cannot read my home directory. The S3 bucket "mybucket" exists. In Cyberduck I see

"Cannot readdir on root. Please contact your web hosting service provider for assistance."

and in FileZilla:

"Error: Reading directory .: permission denied"

even though I can connect to the server.

Am I missing a user permission in the policies below?

These are my permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::MYBUCKET"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::MYBUCKET/*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "transfer:*",
            "Resource": "*"
        }
    ]
}

These are my trust relationships:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "s3.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "transfer.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
  • Do you use AWS SFTP? You haven't mentioned it in the question. Commented Mar 25, 2019 at 14:15
  • Yes, I am using AWS SFTP. Commented Mar 26, 2019 at 2:51

2 Answers


The user role's policy should be:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfUserFolder",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME"
            ]
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObjectVersion",
                "s3:DeleteObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::BUCKET_NAME/*"
        }
    ]
}
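
If you manage the role from the AWS CLI rather than the console, a sketch like the following would attach the policy above as an inline role policy (the role name, policy name, and file name here are placeholders):

# Attach the user policy above (saved as user-policy.json) to the role.
aws iam put-role-policy \
    --role-name my-transfer-role \
    --policy-name transfer-user-access \
    --policy-document file://user-policy.json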

Trust relationship of the user's role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "transfer.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
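
The trust policy can be set from the CLI the same way; a sketch assuming the JSON above is saved as trust-policy.json and the same placeholder role name:

# Replace the role's trust relationship so transfer.amazonaws.com can assume it.
aws iam update-assume-role-policy \
    --role-name my-transfer-role \
    --policy-document file://trust-policy.json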

The home directory for your user should be /BUCKET_NAME.
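
If the user already exists on the Transfer server, one way to point it at that home directory and role is aws transfer update-user; the server ID, user name, and account ID below are placeholders:

# Point an existing Transfer user at the role and home directory above.
aws transfer update-user \
    --server-id s-1234567890abcdef0 \
    --user-name myuser \
    --role arn:aws:iam::123456789012:role/my-transfer-role \
    --home-directory /BUCKET_NAME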

  • This should be the accepted answer! Commented Mar 26, 2019 at 17:03
  • Thanks, this resolves my issue. Commented Mar 28, 2019 at 1:15
  • This answer just saved me a lot of heartache. I was setting up SFTP and my default role/policy had a trust relationship with s3.amazonaws.com. Connecting would give me an error stating "Unable to AssumeRole". The real problem was that I needed a trust relationship with transfer.amazonaws.com instead of s3.amazonaws.com. Commented Apr 17, 2019 at 15:27
  • Please mark it as the accepted answer. Commented Apr 18, 2019 at 15:20
  • I want to allow the user only to put objects, i.e. remove "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", and "s3:GetObjectVersion". But with that I cannot list objects in the home directory. Any solution would be greatly appreciated. Commented Nov 6, 2019 at 6:12

I had issues with this until I specifically added the s3:GetObject permission to the aws_transfer_user policy. I expected s3:ListBucket to be enough, but it was not: sftp> ls would fail until I had GetObject.

Here's the Terraform for it:

resource "aws_transfer_user" "example-ftp-user" {
  count                     = length(var.uploader_users)
  user_name                 = var.uploader_users[count.index].username

  server_id                 = aws_transfer_server.example-transfer.id
  role                      = aws_iam_role.sftp_content_incoming.arn
  home_directory_type       = "LOGICAL"

  home_directory_mappings {
      entry = "/"
      target = "/my-bucket/$${Transfer:UserName}"
    }

    policy = <<POLICY
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowSftpUserAccessToS3",
        "Effect": "Allow",
        "Action": [
          "s3:ListBucket",
          "s3:PutObject",
          "s3:GetObject",
          "s3:DeleteObjectVersion",
          "s3:DeleteObject",
          "s3:GetObjectVersion",
          "s3:GetBucketLocation"
        ],
        "Resource": [
          "${aws_s3_bucket.bucket.arn}/${var.uploader_users[count.index].username}",
          "${aws_s3_bucket.bucket.arn}/${var.uploader_users[count.index].username}/*"
        ]
      }
    ]
}
POLICY
}

And I define users in a .tfvars file; e.g.:

uploader_users = [
  {
    username = "firstuser"
    public_key = "ssh-rsa ...."
  },
  {
    username = "seconduser"
    public_key = "ssh-rsa ..."
  },
  {
    username = "thirduser"
    public_key = "ssh-rsa ..."
  }
]
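
Note that aws_transfer_user itself does not consume the public_key values above; a minimal sketch for registering them, assuming one key per user, is a matching aws_transfer_ssh_key resource:

resource "aws_transfer_ssh_key" "example-ftp-user-key" {
  # One SSH key per uploader user, attached to the same Transfer server
  # and to the aws_transfer_user created for that index.
  count     = length(var.uploader_users)
  server_id = aws_transfer_server.example-transfer.id
  user_name = aws_transfer_user.example-ftp-user[count.index].user_name
  body      = var.uploader_users[count.index].public_key
}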

I hope this helps someone. It took me a lot of tinkering before I finally got this working, and I'm not 100% sure what interactions with other policies might ultimately be in play. But after applying this, I could connect and list bucket contents without getting "Permission denied".
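
As a quick check, assuming a service-managed identity provider with the default endpoint (the server ID and region below are placeholders), you can verify from any sftp client:

sftp firstuser@s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com
sftp> ls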
