Snowflake

JDBC Snowflake Sink Connector

Key Features

- Batch mode and streaming mode
- CDC (change data capture) events
- Exactly-once semantics (see transaction_timeout_sec)

Description

Writes data through JDBC. Supports batch mode and streaming mode, as well as concurrent writing.
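
The write mode is selected in the env block of the job config. A minimal sketch (checkpoint.interval is the flush interval referenced under batch_size below; the millisecond unit is an assumption):

env {
    parallelism = 2
    job.mode = "STREAMING"        # or "BATCH"; both modes are supported
    checkpoint.interval = 10000   # assumed milliseconds; triggers the periodic flush
}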

Supported DataSource list

| Datasource | Supported Versions | Driver | Url |
|------------|--------------------|--------|-----|
| snowflake | Different dependency versions have different driver classes. | net.snowflake.client.jdbc.SnowflakeDriver | jdbc:snowflake://<account_name>.snowflakecomputing.com |
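
Connection parameters can also be appended to the URL. A sketch with placeholder account, warehouse, database, and schema names (warehouse, db, and schema are standard Snowflake JDBC connection parameters):

url = "jdbc:snowflake://myaccount.snowflakecomputing.com/?warehouse=MY_WH&db=TEST&schema=PUBLIC"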

Data Type Mapping

| Snowflake Data Type | Nexus Data Type |
|---------------------|-----------------|
| BOOLEAN | BOOLEAN |
| TINYINT, SMALLINT, BYTEINT | SHORT_TYPE |
| INT, INTEGER | INT |
| BIGINT | LONG |
| DECIMAL, NUMERIC, NUMBER | DECIMAL(x,y) |
| DECIMAL(x,y) (when the designated column's precision exceeds 38) | DECIMAL(38,18) |
| REAL, FLOAT4 | FLOAT |
| DOUBLE, DOUBLE PRECISION, FLOAT8, FLOAT | DOUBLE |
| CHAR, CHARACTER, VARCHAR, STRING, TEXT, VARIANT, OBJECT | STRING |
| DATE | DATE |
| TIME | TIME |
| DATETIME, TIMESTAMP, TIMESTAMP_LTZ, TIMESTAMP_NTZ, TIMESTAMP_TZ | TIMESTAMP |
| BINARY, VARBINARY, GEOGRAPHY, GEOMETRY | BYTES |
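
As a quick illustration of the mapping, a sketch of the Nexus-side schema for a few hypothetical Snowflake columns (the Snowflake DDL is shown in comments):

# Hypothetical Snowflake table:
#   CREATE TABLE test_table (id NUMBER(10,0), score FLOAT8, tag VARCHAR, created TIMESTAMP_NTZ)
schema = {
    fields {
        id = "decimal(10,0)"    # NUMBER -> DECIMAL(x,y)
        score = "double"        # FLOAT8 -> DOUBLE
        tag = "string"          # VARCHAR -> STRING
        created = "timestamp"   # TIMESTAMP_NTZ -> TIMESTAMP
    }
}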

Options

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| url | String | Yes | - | The URL of the JDBC connection, e.g. jdbc:snowflake://<account_name>.snowflakecomputing.com |
| driver | String | Yes | - | The JDBC class name used to connect to the remote data source; for Snowflake the value is net.snowflake.client.jdbc.SnowflakeDriver. |
| user | String | No | - | Connection instance user name |
| password | String | No | - | Connection instance password |
| query | String | No | - | Use this SQL to write upstream input data to the database, e.g. INSERT INTO ... |
| database | String | No | - | Use this database together with table to auto-generate the SQL that writes upstream input data to the database. This option is mutually exclusive with query and takes precedence over it. |
| table | String | No | - | Use database together with this table name to auto-generate the SQL that writes upstream input data to the database. This option is mutually exclusive with query and takes precedence over it. |
| primary_keys | Array | No | - | Used to support insert, delete, and update operations when the SQL is auto-generated. |
| support_upsert_by_query_primary_key_exist | Boolean | No | false | Process update events (INSERT, UPDATE_AFTER) with plain INSERT and UPDATE statements, chosen by first querying whether the primary key exists. Only used when the database does not support upsert syntax. Note: this method has low performance. |
| connection_check_timeout_sec | Int | No | 30 | The time in seconds to wait for the database operation used to validate the connection to complete. |
| max_retries | Int | No | 0 | The number of retries for failed batch submissions (executeBatch). |
| batch_size | Int | No | 1000 | For batch writing: when the number of buffered records reaches batch_size or the time reaches checkpoint.interval, the data will be flushed into the database. |
| max_commit_attempts | Int | No | 3 | The number of retries for transaction commit failures. |
| transaction_timeout_sec | Int | No | -1 | The timeout after a transaction is opened; the default is -1 (never time out). Note that setting a timeout may affect exactly-once semantics. |
| auto_commit | Boolean | No | true | Automatic transaction commit is enabled by default. |
| properties | Map | No | - | Additional connection configuration parameters. When properties and the URL contain the same parameter, the priority is determined by the driver's implementation; in MySQL, for example, properties take precedence over the URL. |
| common-options | - | No | - | Sink plugin common parameters; please refer to Sink Common Options for details. |
| enable_upsert | Boolean | No | true | Enable upsert by checking whether primary_keys exist. If the task's data has no duplicate keys, setting this parameter to false can speed up data import. |

If partition_column is not set, the job will run with a single concurrency; if partition_column is set, it will be executed in parallel according to the concurrency of the tasks.
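
A sketch combining several of these options; warehouse, db, and schema are standard Snowflake JDBC connection properties, and the values shown are placeholders:

sink {
    jdbc {
        url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
        driver = "net.snowflake.client.jdbc.SnowflakeDriver"
        user = "root"
        password = "123456"
        query = "insert into test_table(name,age) values(?,?)"
        batch_size = 500    # flush every 500 buffered records
        max_retries = 3     # retry failed executeBatch submissions up to 3 times
        properties {
            warehouse = "MY_WH"
            db = "TEST"
            schema = "PUBLIC"
        }
    }
}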

Task Example

simple:

This example defines a Nexus synchronization task that automatically generates data through FakeSource and sends it to the JDBC sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table, test_table, will also contain 16 rows of data. Before running this job, you need to create the database test and the table test_table in your Snowflake database.

# Defining the runtime environment
env {
    parallelism = 1
    job.mode = "BATCH"
}
source {
    # This is an example source plugin **only for testing and demonstrating the source plugin feature**
    FakeSource {
        parallelism = 1
        result_table_name = "fake"
        row.num = 16
        schema = {
            fields {
                name = "string"
                age = "int"
            }
        }
    }
    # If you would like to get more information about how to configure Nexus and see full list of source plugins,
    # please go to source page
}
transform {
    # If you would like to get more information about how to configure Nexus and see full list of transform plugins,
    # please go to transform page
}
sink {
    jdbc {
        url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
        driver = "net.snowflake.client.jdbc.SnowflakeDriver"
        user = "root"
        password = "123456"
        query = "insert into test_table(name,age) values(?,?)"
    }
    # If you would like to get more information about how to configure Nexus and see full list of sink plugins,
    # please go to sink page
}

CDC (Change Data Capture) Event

CDC change data is also supported. In this case, you need to configure database, table, and primary_keys.

sink {
    jdbc {
        url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
        driver = "net.snowflake.client.jdbc.SnowflakeDriver"
        user = "root"
        password = "123456"
        generate_sink_sql = true

        # You need to configure both database and table
        database = test
        table = sink_table
        primary_keys = ["id","name"]
    }
}
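
If the upstream data is known to contain no duplicate keys, disabling upsert (see enable_upsert above) can speed up the import. A sketch of the same sink with the key-existence check turned off:

sink {
    jdbc {
        url = "jdbc:snowflake://<account_name>.snowflakecomputing.com"
        driver = "net.snowflake.client.jdbc.SnowflakeDriver"
        user = "root"
        password = "123456"
        generate_sink_sql = true
        database = test
        table = sink_table
        primary_keys = ["id","name"]
        enable_upsert = false   # skip the primary-key existence check for duplicate-free data
    }
}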
