Version: 1.5.1

Amazon S3

Amazon AWS Long Term Storage

Synopsis

Creates a target that writes log messages to Amazon S3 buckets with support for various file formats, authentication methods, and multipart uploads. The target handles large file uploads efficiently with configurable rotation based on size or event count.

Schema

- name: <string>
  description: <string>
  type: awss3
  pipelines: <pipeline[]>
  status: <boolean>
  properties:
    key: <string>
    secret: <string>
    session: <string>
    region: <string>
    endpoint: <string>
    part_size: <numeric>
    bucket: <string>
    buckets:
      - bucket: <string>
        name: <string>
        format: <string>
        compression: <string>
        extension: <string>
        schema: <string>
    name: <string>
    format: <string>
    compression: <string>
    extension: <string>
    schema: <string>
    max_size: <numeric>
    batch_size: <numeric>
    timeout: <numeric>
    field_format: <string>
    interval: <string|numeric>
    cron: <string>
    debug:
      status: <boolean>
      dont_send_logs: <boolean>

Configuration

The following fields are used to define the target:

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| name | Y | | Target name |
| description | N | - | Optional description |
| type | Y | | Must be awss3 |
| pipelines | N | - | Optional post-processor pipelines |
| status | N | true | Enable/disable the target |

AWS Credentials

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| key | N* | - | AWS access key ID for authentication |
| secret | N* | - | AWS secret access key for authentication |
| session | N | - | Optional session token for temporary credentials |
| region | Y | - | AWS region (e.g. us-east-1, eu-west-1) |
| endpoint | N | - | Custom S3-compatible endpoint URL (for non-AWS S3 services) |

* = Conditionally required. AWS credentials (key and secret) are required unless using IAM role-based authentication on AWS infrastructure.
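
For S3-compatible object stores, the endpoint property points the target at a custom URL while static credentials are supplied as usual. The sketch below is illustrative only; the endpoint URL and bucket name are placeholders, not values taken from this documentation:

targets:
  - name: s3_compatible
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-east-1"
      # Hypothetical endpoint of a non-AWS, S3-compatible service
      endpoint: "https://s3.example.internal:9000"
      bucket: "datastream-logs"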

Connection

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| part_size | N | 5 | Multipart upload part size in megabytes (minimum 5 MB) |
| timeout | N | 30 | Connection timeout in seconds |
| field_format | N | - | Data normalization format. See the applicable Normalization section |

Files

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| bucket | N* | - | Default S3 bucket name (used if buckets is not specified) |
| buckets | N* | - | Array of bucket configurations for file distribution |
| buckets.bucket | Y | - | S3 bucket name |
| buckets.name | Y | - | File name template |
| buckets.format | N | "json" | Output format: json, multijson, avro, parquet |
| buckets.compression | N | - | Compression algorithm. See Compression below |
| buckets.extension | N | Matches format | File extension override |
| buckets.schema | N* | - | Schema definition file path (required for the Avro and Parquet formats) |
| name | N | "vmetric.{{.Timestamp}}.{{.Extension}}" | Default file name template when buckets is not used |
| format | N | "json" | Default output format when buckets is not used. See File Formats below |
| compression | N | - | Default compression when buckets is not used |
| extension | N | Matches format | Default file extension when buckets is not used |
| schema | N | - | Default schema path when buckets is not used |
| max_size | N | 0 | Maximum file size in bytes before rotation |
| batch_size | N | 100000 | Maximum number of messages per file |

* = Either bucket or buckets must be specified. When using buckets, schema is conditionally required for Avro and Parquet formats.

Note: When max_size is reached, the current file is uploaded to S3 and a new file is created. For an unlimited file size, set the field to 0.

Scheduler

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| interval | N | realtime | Execution frequency. See Interval for details |
| cron | N | - | Cron expression for scheduled execution. See Cron for details |
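
As an illustration, the following schedules uploads with a standard cron expression instead of real-time execution; the expression and target name are illustrative only:

targets:
  - name: scheduled_s3
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-east-1"
      bucket: "datastream-logs"
      # Standard cron expression for hourly uploads; use interval instead
      # for frequency-based execution (see Interval for accepted values)
      cron: "0 * * * *"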

Debug Options

| Field | Required | Default | Description |
|-------|----------|---------|-------------|
| debug.status | N | false | Enable debug logging |
| debug.dont_send_logs | N | false | Process logs but don't send to target (testing) |

Details

The Amazon S3 target writes log data to one or more S3 buckets, each with its own file format, compression, and schema settings, providing cloud storage integration for long-term retention.

Authentication Methods

The target supports static credentials (an access key and secret key), optionally with a session token for temporary credentials. When deployed on AWS infrastructure, it can use IAM role-based authentication without explicit credentials.
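
When an IAM role is attached to the instance running the target, the key and secret fields can simply be omitted. A minimal sketch, assuming role-based credentials are picked up automatically as described above:

targets:
  - name: iam_role_s3
    type: awss3
    properties:
      # No key/secret: credentials are resolved from the attached IAM role
      region: "us-east-1"
      bucket: "datastream-logs"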

File Formats

| Format | Description |
|--------|-------------|
| json | Each log entry is written as a separate JSON line (JSONL format) |
| multijson | All log entries are written as a single JSON array |
| avro | Apache Avro format with schema |
| parquet | Apache Parquet columnar format with schema |

Compression

Some formats support built-in compression to reduce storage costs and transfer times. When supported, compression is applied at the file/block level before upload.

| Format | Default | Compression Codecs |
|--------|---------|--------------------|
| JSON | - | Not supported |
| MultiJSON | - | Not supported |
| Avro | zstd | deflate, snappy, zstd |
| Parquet | zstd | gzip, snappy, zstd, brotli, lz4 |
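
For example, a codec from the table above can be selected per bucket to override the format's default; the bucket and file names below are placeholders:

targets:
  - name: avro_archive
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-east-1"
      buckets:
        - bucket: "avro-archive"
          name: "events-{{.Timestamp}}.avro"
          format: "avro"
          schema: "<schema definition>"
          # Overrides the zstd default for Avro
          compression: "snappy"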

File Management

Files are rotated based on size (max_size parameter) or event count (batch_size parameter), whichever limit is reached first. Template variables in file names enable dynamic file naming for time-based partitioning.
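
For instance, the following rotates files at roughly 256 MiB or 50,000 events, whichever is reached first; the thresholds and names are illustrative:

targets:
  - name: rotated_s3
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-east-1"
      bucket: "datastream-logs"
      name: "logs-{{.Year}}-{{.Month}}-{{.Day}}-{{.Timestamp}}.json"
      format: "json"
      max_size: 268435456   # 256 MiB, in bytes
      batch_size: 50000     # messages per file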

Templates

The following template variables can be used in file names:

| Variable | Description | Example |
|----------|-------------|---------|
| {{.Year}} | Current year | 2024 |
| {{.Month}} | Current month | 01 |
| {{.Day}} | Current day | 15 |
| {{.Timestamp}} | Current timestamp in nanoseconds | 1703688533123456789 |
| {{.Format}} | File format | json |
| {{.Extension}} | File extension | json |
| {{.Compression}} | Compression type | zstd |
| {{.TargetName}} | Target name | my_logs |
| {{.TargetType}} | Target type | awss3 |
| {{.Table}} | Bucket name | logs |
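
For instance, given the example values above, a template such as the following would render to an object key like logs/2024/01/15/my_logs-1703688533123456789.json:

name: "logs/{{.Year}}/{{.Month}}/{{.Day}}/{{.TargetName}}-{{.Timestamp}}.{{.Extension}}"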

Multipart Upload

Large files automatically use the S3 multipart upload protocol with a configurable part size (the part_size parameter). The default 5 MB part size balances upload efficiency and memory usage.
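
For example, a larger part size can be set when files are expected to be very large; the value below is illustrative (part_size is specified in megabytes, minimum 5):

targets:
  - name: large_upload_s3
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-east-1"
      bucket: "datastream-logs"
      # 64 MB parts instead of the 5 MB default
      part_size: 64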

Multiple Buckets

A single target can write to multiple S3 buckets with different configurations, enabling data distribution strategies (e.g. raw data to one bucket, processed data to another).

Schema Requirements

Avro and Parquet formats require schema definition files. Schema files must be accessible at the path specified in the schema parameter during target initialization.
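
In practice this means pointing schema at a file readable by the host running the target. A minimal sketch; the schema file path below is hypothetical:

targets:
  - name: avro_with_schema
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-east-1"
      bucket: "avro-archive"
      name: "events-{{.Timestamp}}.avro"
      format: "avro"
      # Hypothetical path to an Avro schema definition file
      schema: "schemas/events.avsc"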

Examples

Basic Configuration

The minimum configuration for a JSON S3 target:

targets:
  - name: basic_s3
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-east-1"
      bucket: "datastream-logs"

Multiple Buckets

Configuration for distributing data across multiple S3 buckets with different formats:

targets:
  - name: multi_bucket_export
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "eu-west-1"
      buckets:
        - bucket: "raw-data-archive"
          name: "raw-{{.Year}}-{{.Month}}-{{.Day}}.json"
          format: "multijson"
- bucket: "analytics-data"
name: "analytics-{{.Year}}/{{.Month}}/{{.Day}}/data_{{.Timestamp}}.parquet"
format: "parquet"
schema: "<schema definition>"
compression: "snappy"

Parquet Format

Configuration for daily partitioned Parquet files:

targets:
  - name: parquet_analytics
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-west-2"
      bucket: "analytics-lake"
      name: "events/year={{.Year}}/month={{.Month}}/day={{.Day}}/part-{{.Timestamp}}.parquet"
      format: "parquet"
      schema: "<schema definition>"
      compression: "snappy"
      max_size: 536870912

High Reliability

Configuration with a checkpoint pipeline, a longer connection timeout, and a larger multipart part size:

targets:
  - name: reliable_s3
    type: awss3
    pipelines:
      - checkpoint
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-east-1"
      bucket: "critical-logs"
      name: "logs-{{.Timestamp}}.json"
      format: "json"
      timeout: 60
      part_size: 10

With Field Normalization

Using field normalization to write data in a standard format (CIM in this example):

targets:
  - name: normalized_s3
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-east-1"
      bucket: "normalized-logs"
      name: "logs-{{.Timestamp}}.json"
      format: "json"
      field_format: "cim"

Debug Configuration

Configuration with debugging enabled:

targets:
  - name: debug_s3
    type: awss3
    properties:
      key: "AKIAIOSFODNN7EXAMPLE"
      secret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
      region: "us-east-1"
      bucket: "test-logs"
      name: "test-{{.Timestamp}}.json"
      format: "json"
      debug:
        status: true
        dont_send_logs: true