Unable to mount bucket on private S3 server due to URL rewriting


2023-01-19 10:10

Issue

My company runs a local S3 instance. There is a bucket I'm trying to mount on my computer. I've tried several options; all of them failed.

Option 1.

```
$ flags=(-f -s -d -d -o dbglevel=debug -o passwd_file=~/.ssh/s3-passwd)
$ s3fs my-bucket-name /home/me/fld "${flags[@]}" -o url=https://storage.apps.company.net
```

This fails with the error

```
[DBG] curl_handlerpool.cpp:GetHandler(81): Get handler from pool: rest = 31
[INF] curl_util.cpp:prepare_url(254): URL is https://storage.apps.company.net/my-bucket-name/
[INF] curl_util.cpp:prepare_url(287): URL changed is https://my-bucket-name.storage.apps.company.net/
[DBG] curl.cpp:RequestPerform(2283): connecting to URL https://my-bucket-name.storage.apps.company.net/
[INF] curl.cpp:insertV4Headers(2680): computing signature [GET] [/] [] []
[INF] curl_util.cpp:url_to_host(331): url is https://storage.apps.company.net
[ERR] curl.cpp:RequestPerform(2455): ### CURLE_SSL_CACERT
[INF] curl.cpp:RequestPerform(2515): ### retrying...
[INF] curl.cpp:RemakeHandle(2107): Retry request. [type=5][url=https://my-bucket-name.storage.apps.company.net/][path=/]
[INF] curl.cpp:insertV4Headers(2680): computing signature [GET] [/] [] []
[INF] curl_util.cpp:url_to_host(331): url is https://storage.apps.company.net
[ERR] curl.cpp:RequestPerform(2455): ### CURLE_SSL_CACERT
[ERR] curl.cpp:RequestPerform(2466): curlCode: 60 msg: SSL peer certificate or SSH remote key was not OK
[ERR] curl.cpp:CheckBucket(3421): Check bucket failed, S3 response:
[CRT] s3fs.cpp:s3fs_check_service(3597): unable to connect(host=https://storage.apps.company.net) - result of checking service.
```
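The rewrite in the log is s3fs switching from path-style to virtual-hosted-style addressing: the bucket name moves from the URL path into the hostname. A private endpoint typically has a TLS certificate only for `storage.apps.company.net`, not for `*.storage.apps.company.net`, which would explain the `CURLE_SSL_CACERT` failure. s3fs-fuse has a `-o use_path_request_style` option that keeps the bucket in the path; whether this particular server accepts path-style requests is an assumption. A minimal sketch of the two addressing styles, using the hostnames from the log above:

```python
def path_style(endpoint: str, bucket: str) -> str:
    # Bucket stays in the URL path; only the endpoint's own
    # certificate is needed.
    return f"{endpoint}/{bucket}/"

def virtual_host_style(endpoint: str, bucket: str) -> str:
    # s3fs's default: the bucket becomes a subdomain, which requires
    # matching DNS and a wildcard certificate on the server side.
    scheme, host = endpoint.split("://", 1)
    return f"{scheme}://{bucket}.{host}/"

endpoint = "https://storage.apps.company.net"
print(path_style(endpoint, "my-bucket-name"))
# https://storage.apps.company.net/my-bucket-name/
print(virtual_host_style(endpoint, "my-bucket-name"))
# https://my-bucket-name.storage.apps.company.net/
```

The second line printed is exactly the rewritten URL the log complains about.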

That URL rewrite seems to break everything. I tried to work around it the following way:

Option 2.

```
$ s3fs storage /home/me/fld "${flags[@]}" -o url=https://apps.company.net -o bucket=my-bucket-name
```

This fails even harder:

```
s3fs: unable to access MOUNTPOINT storage: No such file or directory
```

So I tried a variation

Option 3.

```
$ s3fs storage /home/me/fld "${flags[@]}" -o url=https://apps.company.net/my-bucket-name
```

Which goes further but fails as well

```
[DBG] curl_handlerpool.cpp:GetHandler(81): Get handler from pool: rest = 31
[INF] curl_util.cpp:prepare_url(254): URL is https://apps.company.net/my-bucket-name/storage/
[INF] curl_util.cpp:prepare_url(287): URL changed is https://storage.apps.company.net/my-bucket-name/
[DBG] curl.cpp:RequestPerform(2283): connecting to URL https://storage.apps.company.net/my-bucket-name/
[INF] curl.cpp:insertV4Headers(2680): computing signature [GET] [/] [] []
[INF] curl_util.cpp:url_to_host(331): url is https://apps.company.net/my-bucket-name
[ERR] curl.cpp:RequestPerform(2363): HTTP response code 403, returning EPERM. Body Text:
SignatureDoesNotMatch
The request signature we calculated does not match the signature you provided. Check your AWS secret access key and signing method. For more information, see REST Authentication and SOAP Authentication for details.
/my-bucket-name/lcrncqpe-50l2jf-a6i
[ERR] curl.cpp:CheckBucket(3421): Check bucket failed, S3 response:
SignatureDoesNotMatch
The request signature we calculated does not match the signature you provided. Check your AWS secret access key and signing method. For more information, see REST Authentication and SOAP Authentication for details.
/my-bucket-name/lcrncqpe-50l2jf-a6i
...
[ERR] s3fs.cpp:s3fs_exit_fuseloop(3372): Exiting FUSE event loop due to errors
```

Both https://apps.company.net/ and https://storage.apps.company.net/ appear in the log. It seems s3fs is confused about which URL to use, and perhaps that mismatch is what triggers the 403 SignatureDoesNotMatch error.
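That reading is plausible given how SigV4 works: the Host header is part of the signed canonical request, so if s3fs computes the signature for one host but the request is sent to the rewritten host, the server's recomputed signature cannot match. A toy model of this host-sensitivity (not the real SigV4 algorithm, which also hashes the payload, date, region, and scope):

```python
import hashlib
import hmac

def toy_sign(secret: bytes, host: str, path: str) -> str:
    # Toy stand-in for SigV4: the signed string includes the Host
    # header, so rewriting the host after signing invalidates it.
    canonical_request = f"GET\n{path}\nhost:{host}\n"
    return hmac.new(secret, canonical_request.encode(), hashlib.sha256).hexdigest()

secret = b"auth-secret"
signed_for = toy_sign(secret, "apps.company.net", "/my-bucket-name/")
sent_to = toy_sign(secret, "storage.apps.company.net", "/my-bucket-name/")
print(signed_for == sent_to)  # False -> the server answers SignatureDoesNotMatch
```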

Option 4

To make sure my configuration works, I tried some Python code:

```python
import s3fs

url = "https://storage.apps.company.net"
auth_key = "..."
auth_secret = "..."
bucket_name = "my-bucket-name"
api_version = "s3v4"

print(f"Accessing {url} :: {auth_key} :: {bucket_name}")
fs = s3fs.S3FileSystem(
    anon=False,
    key=auth_key,
    secret=auth_secret,
    client_kwargs={"endpoint_url": url},
    config_kwargs={"signature_version": api_version},
)
name_iter = (
    "/" + "/".join(filter(bool, (root, name))) + suffix
    for root, folders, files in fs.walk("/")
    for names, suffix in ((folders, "/"), (files, ""))
    for name in names
)
print("\n".join(name_iter))
```

This prints successfully:

```
/my-bucket-name/file1
/my-bucket-name/file2
/my-bucket-name/file3
```

So my configuration is correct, and my workstation can access the bucket.
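For reference, the walk comprehension above can be checked offline against a stubbed `fs.walk()` result (the tree below is made up) to show how it emits `/`-suffixed folders and plain file paths:

```python
# Stubbed stand-in for fs.walk("/"): (root, folders, files) tuples.
walk_result = [
    ("", ["my-bucket-name"], []),
    ("my-bucket-name", [], ["file1", "file2"]),
]

names = [
    "/" + "/".join(filter(bool, (root, name))) + suffix
    for root, folders, files in walk_result
    for names_group, suffix in ((folders, "/"), (files, ""))
    for name in names_group
]
print("\n".join(names))
# /my-bucket-name/
# /my-bucket-name/file1
# /my-bucket-name/file2
```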

Additional Information

Version of s3fs being used (s3fs --version):

```
$ s3fs --version
Amazon Simple Storage Service File System V1.90 (commit:unknown) with GnuTLS(gcrypt)
```

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse):

FUSE statically compiled into WSL kernel

Kernel information (uname -r):

```
$ uname -r
5.10.16.3-microsoft-standard-WSL2
```

GNU/Linux Distribution, if applicable (cat /etc/os-release):

Running in a Docker container with WSL 2 on Windows.


