Remote archive volumes on MinIO


Has anyone had any success using MinIO as the destination for a remote archive volume? (It's S3-compatible.)

I'm getting a generic "Connection Problems" error (with archive options set to "s3,noverifypeer"), and I'm curious whether it's something inherent to MinIO.



Thanks Charlie, that got me on the right track.

I'm happy to say we finally got it working, although it required recompiling with a minor patch. It turns out that Exasol expects the X-Amz-Bucket-Region HTTP header name to be lowercase, while MinIO sends it with initial caps. Thankfully, MinIO is fairly easy to patch and recompile.

Here are the steps we took to get Exasol to recognize MinIO as a remote archive volume (first three thanks to Charlie):

  1. Enable SSL in MinIO.
  2. Have MinIO listen on port 443 (Exasol will ignore any port specified in the URL).
  3. Assuming your MinIO server has a bucket named backups, create a DNS alias that prepends the bucket name to the server's hostname and resolves to the same address.
  4. In the MinIO startup environment, set the MINIO_DOMAIN variable to your server's domain name. This will cause MinIO to extract the bucket name from the virtual host passed to it instead of extracting it from the URL path (which is the default).
  5. In the MinIO startup environment, set the MINIO_REGION_NAME variable to us-east-1. This will cause MinIO to include that region in its HTTP response headers.
  6. Get the MinIO source code and apply the patch below. See the repository's Dockerfile for how to rebuild it:
    --- /cmd/api-headers.go
    +++ /cmd/api-headers.go
    @@ -51,7 +51,8 @@ func setCommonHeaders(w http.ResponseWriter) {
        // Set `x-amz-bucket-region` only if region is set on the server
        // by default minio uses an empty region.
        if region := globalServerRegion; region != "" {
    -       w.Header().Set(xhttp.AmzBucketRegion, region)
    +       h := strings.ToLower(xhttp.AmzBucketRegion)
    +       w.Header()[h] = append(w.Header()[h], region)
        }
        w.Header().Set(xhttp.AcceptRanges, "bytes")
  7. In the Exasol interface for adding a remote archive volume:
    • Set the Archive URL to the https:// URL of the bucket alias created in step 3.
    • Specify the access and secret keys in the username/password fields.
    • Specify an Option of s3 (and any other applicable options).




I tried it once but never got it to run.

My discoveries so far:

  • You have to enable SSL in MinIO.
  • You have to run it on port 443; Exasol will ignore the port number in the provided URL.
  • You have to specify the bucket you want your data stored in before the hostname, as Exasol strips the first label from the URL and uses it as the bucket name.
  • Take a backup to the non-working remote volume to see more information in the log:
    pddserver(1.0): Backup error. Backup could not be written successfully: [Remote volume error: Unable to retrieve S3 region from AWS response]
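The bucket-before-hostname convention can be sketched as follows; splitBucketHost and the example hostname are illustrations of virtual-host-style addressing, not Exasol's actual implementation:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// splitBucketHost treats the first DNS label of the URL's host as the
// bucket name, the way virtual-host-style S3 addressing (and,
// apparently, Exasol) does.
func splitBucketHost(rawURL string) (bucket, host string, err error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", "", err
	}
	parts := strings.SplitN(u.Hostname(), ".", 2)
	if len(parts) < 2 {
		return "", u.Hostname(), fmt.Errorf("no bucket label in host %q", u.Hostname())
	}
	return parts[0], parts[1], nil
}

func main() {
	// Hypothetical example hostname.
	bucket, host, err := splitBucketHost("https://backups.minio.example.com")
	if err != nil {
		panic(err)
	}
	fmt.Println(bucket, host) // backups minio.example.com
}
```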

That's the farthest I have come.


That's the trace:

 "host": "mybucket.",
 "api": "errorResponseHandler",
 "request": {
  "time": "2020-08-25T09:38:23.9855559Z",
  "proto": "HTTP/1.1",
  "method": "HEAD",
  "path": "/",
  "headers": {
   "Accept": "*/*",
   "Content-Length": "0",
   "Host": "mybucket."
 "response": {
  "time": "2020-08-25T09:38:23.9855559Z",
  "headers": {
   "Accept-Ranges": "bytes",
   "Content-Length": "231",
   "Content-Type": "application/xml",
   "Server": "MinIO/RELEASE.2020-08-25T00-21-20Z",
   "Vary": "Origin"
  "body": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>XMinioUnknownAPIRequest</Code><Message>Unknown API request at /</Message><Resource>/</Resource><RequestId></RequestId><HostId>4055faa6-8d98-4752-9af6-d9bb88510912</HostId></Error>",
  "statusCode": 400
 "callStats": {
  "rx": 27,
  "tx": 374,
  "duration": 0,
  "timeToFirstByte": 0