Help with my S3 Configuration

@grot

Hello,

I’m currently using Loki version 3.2.1 with Docker Compose, and I need assistance with a configuration issue. My Loki configuration is as follows:

```yaml
auth_enabled: false

server:
  http_listen_port: 3100
  log_level: info
  grpc_listen_port: 9095

query_range:
  parallelise_shardable_queries: true

querier:
  max_concurrent: 1024

frontend:
  max_outstanding_per_tenant: 1024
  compress_responses: true

compactor:
  working_directory: ./data/loki/compactor
  retention_enabled: false
  retention_delete_delay: 2h
  compaction_interval: 1m

ingester:
  chunk_encoding: snappy
  wal:
    dir: ./data/loki/wal
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
  chunk_idle_period: 5m
  max_chunk_age: 1h
  chunk_target_size: 1536000
  chunk_retain_period: 30s

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: index_
        period: 24h

storage_config:
  tsdb_shipper:
    active_index_directory: /tmp/loki/tsdb-active
    cache_location: /tmp/loki/tsdb-cache
  aws:
    s3: "s3://${S3_BUCKET_NAME}"
    region: "${AWS_REGION}"
    access_key_id: "${AWS_ACCESS_KEY_ID}"
    secret_access_key: "${AWS_SECRET_ACCESS_KEY}"

limits_config:
  max_query_series: 1000000
  max_entries_limit_per_query: 1000000
  split_queries_by_interval: 5m
  max_label_name_length: 100
  max_label_value_length: 100
  reject_old_samples: true
  reject_old_samples_max_age: 72h
  ingestion_rate_mb: 20
  ingestion_burst_size_mb: 40
  max_query_parallelism: 100
  query_timeout: 30m
  allow_structured_metadata: false

table_manager:
  retention_deletes_enabled: false
  retention_period: 336h
```

The environment variables, AWS keys, and S3 bucket configuration are all correctly set up and working as expected. However, I am encountering the following errors in the logs:

```
level=error ts=2024-11-18T17:30:32.806075185Z caller=cached_client.go:189 msg="failed to build table names cache" err="InvalidParameter: 1 validation error(s) found.\n- minimum field size of 1, ListObjectsV2Input.Bucket.\n"
level=error ts=2024-11-18T17:14:17.796946776Z caller=compactor.go:544 msg="failed to run compaction" err="failed to list tables: InvalidParameter: 1 validation error(s) found.\n- minimum field size of 1, ListObjectsV2Input.Bucket.\n"
```

I opened a shell in the container with `docker exec -it loki sh` and confirmed that the environment variables are set correctly. I also installed the AWS CLI inside the Loki container, and both the connection to the bucket and the permissions work. Thanks in advance for your help!

Hello @mdavidallen,

The error message you're encountering suggests that the S3 bucket name is not being correctly passed to the `ListObjectsV2` API call, which is why you're seeing the "minimum field size of 1" error. This typically indicates that the bucket name is either empty or not being correctly interpolated from the environment variables.
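One specific cause worth ruling out first: Loki only substitutes `${VAR}` references in its configuration file when it is started with the `-config.expand-env=true` flag. Without it, the `${S3_BUCKET_NAME}` reference is never expanded, which can leave the parsed bucket name empty and produce exactly this validation error. Here is a minimal sketch of how that might look in your Compose file (the service name, image tag, and config path are assumptions about your setup):

```yaml
services:
  loki:
    image: grafana/loki:3.2.1
    # Without -config.expand-env=true, Loki does not substitute ${VAR}
    # references in the config file with values from the environment.
    command: -config.file=/etc/loki/loki.yaml -config.expand-env=true
    environment:
      - S3_BUCKET_NAME=your-bucket-name
      - AWS_REGION=your-region
      - AWS_ACCESS_KEY_ID=your-access-key-id
      - AWS_SECRET_ACCESS_KEY=your-secret-access-key
```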

If that flag is already set, here are a few further steps you can take to troubleshoot and resolve the issue:

  1. Verify Environment Variable Interpolation: Double-check that the environment variables are correctly set and accessible within the Docker container. You can do this by running `echo $S3_BUCKET_NAME` inside the container to ensure it outputs the expected bucket name.

  2. Check Docker Compose Configuration: Ensure that your Docker Compose file is correctly passing the environment variables to the Loki container. It should look something like this:

     ```yaml
     environment:
       - S3_BUCKET_NAME=your-bucket-name
       - AWS_REGION=your-region
       - AWS_ACCESS_KEY_ID=your-access-key-id
       - AWS_SECRET_ACCESS_KEY=your-secret-access-key
     ```
  3. Configuration Syntax: Make sure that the `storage_config` section in your Loki configuration file is correctly formatted and that the environment variables are being used properly. The S3 URL should be formatted as:

     ```yaml
     aws:
       s3: "s3://${S3_BUCKET_NAME}"
     ```

  4. Test with Hardcoded Values: As a temporary measure, try replacing the environment variables with hardcoded values in the `loki.yaml` configuration file to see if the issue persists. This can help determine whether the problem is with the environment variable interpolation.

  5. AWS Permissions: Double-check that the IAM role or user associated with the AWS credentials has the necessary permissions to access the S3 bucket. The policy should include actions like `s3:ListBucket`, `s3:PutObject`, `s3:GetObject`, and `s3:DeleteObject`; see the policy sketch after this list.

  6. Logs and Debugging: Increase the log level to debug (set `log_level: debug` in the `server` block of your config) to get more detailed logs, which might provide additional insight into what is going wrong; see the snippet after this list.
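
For step 5, here is a minimal sketch of an IAM policy granting those actions (the bucket name is a placeholder to replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LokiListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::your-bucket-name"
    },
    {
      "Sid": "LokiObjectReadWriteDelete",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```

Note that `s3:ListBucket` is granted on the bucket ARN itself, while the object-level actions are granted on the objects under it (`/*`).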
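
For step 6, the log level can be raised in the `server` block you already have (currently `log_level: info`):

```yaml
server:
  http_listen_port: 3100
  grpc_listen_port: 9095
  # raised from info to debug for more verbose output while troubleshooting
  log_level: debug
```

Relatedly, starting Loki with the `-print-config-stderr=true` flag makes it print the fully resolved configuration at startup, which shows directly whether the bucket name was expanded.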

If the issue persists after these steps, please share any additional logs or configuration details that might help diagnose the problem further.

Let me know how it goes!

xoxo Grot
