S3File


S3 File Sink Connector

Key Features

By default, we use a two-phase commit (2PC) to ensure exactly-once delivery.

Description

Writes data to the AWS S3 file system.

Supported DataSource Info

| Datasource | Supported Versions |
|------------|--------------------|
| S3         | current            |

Data Type Mapping

If you write to the csv or text file types, all columns will be written as strings.

Orc File Type

| Nexus Data Type | Orc Data Type |
|-----------------|---------------|
| STRING | STRING |
| BOOLEAN | BOOLEAN |
| TINYINT | BYTE |
| SMALLINT | SHORT |
| INT | INT |
| BIGINT | LONG |
| FLOAT | FLOAT |
| DOUBLE | DOUBLE |
| DECIMAL | DECIMAL |
| BYTES | BINARY |
| DATE | DATE |
| TIME, TIMESTAMP | TIMESTAMP |
| ROW | STRUCT |
| NULL | UNSUPPORTED DATA TYPE |
| ARRAY | LIST |
| MAP | MAP |

Parquet File Type

| Nexus Data Type | Parquet Data Type |
|-----------------|-------------------|
| STRING | STRING |
| BOOLEAN | BOOLEAN |
| TINYINT | INT_8 |
| SMALLINT | INT_16 |
| INT | INT32 |
| BIGINT | INT64 |
| FLOAT | FLOAT |
| DOUBLE | DOUBLE |
| DECIMAL | DECIMAL |
| BYTES | BINARY |
| DATE | DATE |
| TIME, TIMESTAMP | TIMESTAMP_MILLIS |
| ROW | GroupType |
| NULL | UNSUPPORTED DATA TYPE |
| ARRAY | LIST |
| MAP | MAP |

Sink Options

| name | type | required | default value | Description |
|------|------|----------|---------------|-------------|
| path | string | yes | - |  |
| tmp_path | string | no | /tmp/nexus | The result file is written to a tmp path first and then moved (mv) into the target dir. Must be an S3 dir. |
| bucket | string | yes | - |  |
| fs.s3a.endpoint | string | yes | - |  |
| fs.s3a.aws.credentials.provider | string | yes | com.amazonaws.auth.InstanceProfileCredentialsProvider | The way to authenticate s3a. Only org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider and com.amazonaws.auth.InstanceProfileCredentialsProvider are supported for now. |
| access_key | string | no | - | Only used when fs.s3a.aws.credentials.provider = org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider |
| access_secret | string | no | - | Only used when fs.s3a.aws.credentials.provider = org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider |
| custom_filename | boolean | no | false | Whether you need to customize the file name |
| file_name_expression | string | no | "${transactionId}" | Only used when custom_filename is true |
| filename_time_format | string | no | "yyyy.MM.dd" | Only used when custom_filename is true |
| file_format_type | string | no | "csv" |  |
| field_delimiter | string | no | '\001' | Only used when file_format is text |
| row_delimiter | string | no | "\n" | Only used when file_format is text |
| have_partition | boolean | no | false | Whether you need to process partitions. |
| partition_by | array | no | - | Only used when have_partition is true |
| partition_dir_expression | string | no | "${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/" | Only used when have_partition is true |
| is_partition_field_write_in_file | boolean | no | false | Only used when have_partition is true |
| sink_columns | array | no |  | When this parameter is empty, all fields are sink columns |
| is_enable_transaction | boolean | no | true |  |
| batch_size | int | no | 1000000 |  |
| compress_codec | string | no | none |  |
| common-options | object | no | - |  |
| max_rows_in_memory | int | no | - | Only used when file_format is excel. |
| sheet_name | string | no | Sheet${Random number} | Only used when file_format is excel. |
| xml_root_tag | string | no | RECORDS | Only used when file_format is xml; specifies the tag name of the root element within the XML file. |
| xml_row_tag | string | no | RECORD | Only used when file_format is xml; specifies the tag name of the data rows within the XML file. |
| xml_use_attr_format | boolean | no | - | Only used when file_format is xml; specifies whether to process data using the tag attribute format. |
| parquet_avro_write_timestamp_as_int96 | boolean | no | false | Only used when file_format is parquet. |
| parquet_avro_write_fixed_as_int96 | array | no | - | Only used when file_format is parquet. |
| hadoop_s3_properties | map | no |  |  |
| schema_save_mode | Enum | no | CREATE_SCHEMA_WHEN_NOT_EXIST | Before the synchronization task is started, the target path is handled according to this option |
| data_save_mode | Enum | no | APPEND_DATA | Before the synchronization task is started, the data files in the target path are handled according to this option |
| encoding | string | no | "UTF-8" | Only used when file_format_type is json, text, csv, or xml. |

path [string]

The path where the data files are stored. Variable replacement is supported, for example: path=/test/${database_name}/${schema_name}/${table_name}.

hadoop_s3_properties [map]

If you need to set additional S3A options, you can add them here, for example:

hadoop_s3_properties {
   "fs.s3a.buffer.dir" = "/data/st_test/s3a"
   "fs.s3a.fast.upload.buffer" = "disk"
}

custom_filename [boolean]

Whether you need to customize the file name.

file_name_expression [string]

Only used when custom_filename is true.

file_name_expression describes the expression used to build the file names created under path. You can add the variables ${now} or ${uuid} to file_name_expression, for example test_${uuid}_${now}; ${now} represents the current time, and its format can be defined by specifying the option filename_time_format.

Please note that if is_enable_transaction is true, ${transactionId}_ is automatically prepended to the file name.

filename_time_format [string]

Only used when custom_filename is true.

When the format in the file_name_expression parameter is xxxx-${now}, filename_time_format specifies the time format of the path. The default value is yyyy.MM.dd. The commonly used time format symbols are listed below:

| Symbol | Description |
|--------|-------------|
| y | Year |
| M | Month |
| d | Day of month |
| H | Hour in day (0-23) |
| m | Minute in hour |
| s | Second in minute |
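For illustration, the fragment below (values are assumptions borrowed from the examples later on this page, not a required setup) combines these options so that each file name carries a uuid and the current date:

  S3File {
    # connection options (bucket, path, fs.s3a.*) omitted for brevity
    file_format_type = "text"
    custom_filename = true
    file_name_expression = "test_${uuid}_${now}"
    filename_time_format = "yyyy.MM.dd"
  }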

file_format_type [string]

We support the following file types:

text csv parquet orc json excel xml binary

Please note that the final file name will end with the suffix of the file_format_type; the suffix of the text file type is txt.

field_delimiter [string]

The separator between columns in a row of data. Only needed by the text file format.

row_delimiter [string]

The separator between rows in a file. Only needed by the text file format.

have_partition [boolean]

Whether you need to process partitions.

partition_by [array]

Only used when have_partition is true.

Partitions the data based on the selected fields.

partition_dir_expression [string]

Only used when have_partition is true.

If partition_by is specified, we will generate the corresponding partition directory based on the partition information, and the final file will be placed in that partition directory.

The default partition_dir_expression is ${k0}=${v0}/${k1}=${v1}/.../${kn}=${vn}/, where k0 is the first partition field and v0 is the value of the first partition field.

is_partition_field_write_in_file [boolean]

Only used when have_partition is true.

If is_partition_field_write_in_file is true, the partition field and its value will be written into the data file.

For example, if you want to write a Hive data file, its value should be false.

sink_columns [array]

Which columns need to be written to the file; the default is all of the columns obtained from the Transform or Source. The order of the fields determines the order in which the file is actually written.

is_enable_transaction [boolean]

If is_enable_transaction is true, we will ensure that data is not lost or duplicated when it is written to the target directory.

Please note that if is_enable_transaction is true, ${transactionId}_ is automatically prepended to the file name.

Only true is supported at the moment.

batch_size [int]

The maximum number of rows written to a single file. For the Nexus Engine, the number of lines in a file is determined jointly by batch_size and checkpoint.interval. If the value of checkpoint.interval is large enough, the sink writer keeps writing rows into a file until it holds more than batch_size rows. If checkpoint.interval is small, the sink writer creates a new file whenever a new checkpoint is triggered.
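As a sketch of that interaction (the interval value and its placement in the env block are assumptions, not taken from the examples on this page), a job could bound files by both row count and checkpoint frequency:

env {
  parallelism = 1
  job.mode = "BATCH"
  # assumption: checkpoint.interval is given in milliseconds
  checkpoint.interval = 30000
}

sink {
  S3File {
    # connection options (bucket, path, fs.s3a.*) omitted for brevity
    file_format_type = "text"
    # roll to a new file after 1,000,000 rows or at the next checkpoint, whichever comes first
    batch_size = 1000000
  }
}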

compress_codec [string]

The compression codec of the files; the supported codecs per file type are listed below:

  • txt: lzo none

  • json: lzo none

  • csv: lzo none

  • orc: lzo snappy lz4 zlib none

  • parquet: lzo snappy lz4 gzip brotli zstd none

Tips: the excel type does not support any compression format.
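For illustration (the codec choice is an assumption, not a recommendation), an orc sink could enable zlib compression like this:

  S3File {
    # connection options (bucket, path, fs.s3a.*) omitted for brevity
    file_format_type = "orc"
    compress_codec = "zlib"
  }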

common options

Sink plugin common parameters; please refer to Sink Common Options for details.

max_rows_in_memory [int]

Only used when the file format is excel. The maximum number of data items that can be cached in memory.

sheet_name [string]

Only used when the file format is excel. The name of the workbook sheet to write.
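For illustration (the sheet name and row limit below are assumptions), an excel sink could be configured as:

  S3File {
    # connection options (bucket, path, fs.s3a.*) omitted for brevity
    file_format_type = "excel"
    sheet_name = "events"
    max_rows_in_memory = 2000
  }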

xml_root_tag [string]

Only used when the file format is xml. Specifies the tag name of the root element within the XML file.

xml_row_tag [string]

Only used when the file format is xml. Specifies the tag name of the data rows within the XML file.

xml_use_attr_format [boolean]

Only used when the file format is xml. Specifies whether to process data using the tag attribute format.
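For illustration (the tag names reuse the defaults from the options table above; treat this as a sketch rather than a required setup), an xml sink could look like:

  S3File {
    # connection options (bucket, path, fs.s3a.*) omitted for brevity
    file_format_type = "xml"
    xml_root_tag = "RECORDS"
    xml_row_tag = "RECORD"
    # process data using the tag attribute format
    xml_use_attr_format = true
  }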

parquet_avro_write_timestamp_as_int96 [boolean]

Support writing Parquet INT96 from a timestamp. Only valid for parquet files.

parquet_avro_write_fixed_as_int96 [array]

Support writing Parquet INT96 from a 12-byte field. Only valid for parquet files.

schema_save_mode [Enum]

Before the synchronization task is started, the target path is handled according to this option:

  • RECREATE_SCHEMA: the path is created when it does not exist; if it already exists, it is deleted and recreated.

  • CREATE_SCHEMA_WHEN_NOT_EXIST: the path is created when it does not exist; if it already exists, the existing path is used.

  • ERROR_WHEN_SCHEMA_NOT_EXIST: an error is reported when the path does not exist.

data_save_mode [Enum]

Before the synchronization task is started, the data files in the target path are handled according to this option:

  • DROP_DATA: use the path, but delete the data files in it.

  • APPEND_DATA: use the path, and add new files to it for writing data.

  • ERROR_WHEN_DATA_EXISTS: an error is reported when data files already exist in the path.

encoding [string]

Only used when file_format_type is json, text, csv, or xml. The encoding of the file to write. This parameter is parsed by Charset.forName(encoding).
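For illustration (the charset below is an assumption; any name accepted by Charset.forName works), a json sink could override the default encoding:

  S3File {
    # connection options (bucket, path, fs.s3a.*) omitted for brevity
    file_format_type = "json"
    encoding = "ISO-8859-1"
  }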

Example

Simple:

This example defines a Nexus synchronization task that automatically generates data through FakeSource and sends it to the S3File sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The target S3 directory will contain a single file with all of the data written into it. Before running this job, you need to create the S3 path /nexus/text.

# Defining the runtime environment
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  # This is an example source plugin, **only for testing and demonstrating the source plugin feature**
  FakeSource {
    parallelism = 1
    result_table_name = "fake"
    row.num = 16
    schema = {
      fields {
        c_map = "map<string, array<int>>"
        c_array = "array<int>"
        name = string
        c_boolean = boolean
        age = tinyint
        c_smallint = smallint
        c_int = int
        c_bigint = bigint
        c_float = float
        c_double = double
        c_decimal = "decimal(16, 1)"
        c_null = "null"
        c_bytes = bytes
        c_date = date
        c_timestamp = timestamp
      }
    }
  }
  # If you would like to get more information about how to configure Nexus and see full list of source plugins,
  # please go to source page
}

transform {
  # If you would like to get more information about how to configure Nexus and see full list of transform plugins,
  # please go to transform page
}

sink {
    S3File {
      bucket = "s3a://nexus-test"
      tmp_path = "/tmp/nexus"
      path="/nexus/text"
      fs.s3a.endpoint="s3.cn-north-1.amazonaws.com.cn"
      fs.s3a.aws.credentials.provider="com.amazonaws.auth.InstanceProfileCredentialsProvider"
      file_format_type = "text"
      field_delimiter = "\t"
      row_delimiter = "\n"
      have_partition = true
      partition_by = ["age"]
      partition_dir_expression = "${k0}=${v0}"
      is_partition_field_write_in_file = true
      custom_filename = true
      file_name_expression = "${transactionId}_${now}"
      filename_time_format = "yyyy.MM.dd"
      sink_columns = ["name","age"]
      is_enable_transaction=true
      hadoop_s3_properties {
        "fs.s3a.buffer.dir" = "/data/st_test/s3a"
        "fs.s3a.fast.upload.buffer" = "disk"
      }
  }
  # If you would like to get more information about how to configure Nexus and see full list of sink plugins,
  # please go to sink page
}

For the text file format with have_partition, custom_filename, sink_columns, and com.amazonaws.auth.InstanceProfileCredentialsProvider:


  S3File {
    bucket = "s3a://nexus-test"
    tmp_path = "/tmp/nexus"
    path="/nexus/text"
    fs.s3a.endpoint="s3.cn-north-1.amazonaws.com.cn"
    fs.s3a.aws.credentials.provider="com.amazonaws.auth.InstanceProfileCredentialsProvider"
    file_format_type = "text"
    field_delimiter = "\t"
    row_delimiter = "\n"
    have_partition = true
    partition_by = ["age"]
    partition_dir_expression = "${k0}=${v0}"
    is_partition_field_write_in_file = true
    custom_filename = true
    file_name_expression = "${transactionId}_${now}"
    filename_time_format = "yyyy.MM.dd"
    sink_columns = ["name","age"]
    is_enable_transaction=true
    hadoop_s3_properties {
      "fs.s3a.buffer.dir" = "/data/st_test/s3a"
      "fs.s3a.fast.upload.buffer" = "disk"
    }
  }

For the parquet file format, a simple config with org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider:


  S3File {
    bucket = "s3a://nexus-test"
    tmp_path = "/tmp/nexus"
    path="/senexustunnel/parquet"
    fs.s3a.endpoint="s3.cn-north-1.amazonaws.com.cn"
    fs.s3a.aws.credentials.provider="org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
    access_key = "xxxxxxxxxxxxxxxxx"
    secret_key = "xxxxxxxxxxxxxxxxx"
    file_format_type = "parquet"
    hadoop_s3_properties {
      "fs.s3a.buffer.dir" = "/data/st_test/s3a"
      "fs.s3a.fast.upload.buffer" = "disk"
    }
  }

For the orc file format, a simple config with org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider:


  S3File {
    bucket = "s3a://nexus-test"
    tmp_path = "/tmp/nexus"
    path="/nexus/orc"
    fs.s3a.endpoint="s3.cn-north-1.amazonaws.com.cn"
    fs.s3a.aws.credentials.provider="org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
    access_key = "xxxxxxxxxxxxxxxxx"
    secret_key = "xxxxxxxxxxxxxxxxx"
    file_format_type = "orc"
    schema_save_mode = "CREATE_SCHEMA_WHEN_NOT_EXIST"
    data_save_mode="APPEND_DATA"
  }

Multi-table writing and saveMode

env {
  "job.name"="nexus_job"
  "job.mode"=STREAMING
}
source {
  MySQL-CDC {
      database-names=[
          "wls_t1"
      ]
      table-names=[
          "wls_t1.mysqlcdc_to_s3_t3",
          "wls_t1.mysqlcdc_to_s3_t4",
          "wls_t1.mysqlcdc_to_s3_t5",
          "wls_t1.mysqlcdc_to_s3_t1",
          "wls_t1.mysqlcdc_to_s3_t2"
      ]
      password="xxxxxx"
      username="xxxxxxxxxxxxx"
      base-url="jdbc:mysql://localhost:3306/qa_source"
  }
}

transform {
}

sink {
  S3File {
    bucket = "s3a://nexus-test"
    tmp_path = "/tmp/nexus/${table_name}"
    path="/test/${table_name}"
    fs.s3a.endpoint="s3.cn-north-1.amazonaws.com.cn"
    fs.s3a.aws.credentials.provider="org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"
    access_key = "xxxxxxxxxxxxxxxxx"
    secret_key = "xxxxxxxxxxxxxxxxx"
    file_format_type = "orc"
    schema_save_mode = "CREATE_SCHEMA_WHEN_NOT_EXIST"
    data_save_mode="APPEND_DATA"
  }
}
