PostgreSQL
JDBC PostgreSQL Sink Connector
Write data through JDBC. Supports Batch mode and Streaming mode, supports concurrent writing, and supports exactly-once semantics (guaranteed by XA transactions).

XA transactions are used to ensure exactly-once, so exactly-once is only supported for databases that support XA transactions. You can set is_exactly_once=true to enable it.
Supported DataSource Info

| Datasource | Driver | Url | Note |
|------------|--------|-----|------|
| PostgreSQL | org.postgresql.Driver | jdbc:postgresql://localhost:5432/test | Different dependency versions have different driver classes. |
| PostgreSQL | org.postgresql.Driver | jdbc:postgresql://localhost:5432/test | Needed if you want to manipulate the GEOMETRY type in PostgreSQL. |
Data Type Mapping

| PostgreSQL Data Type | Nexus Data Type |
|----------------------|-----------------|
| BOOL | BOOLEAN |
| _BOOL | ARRAY<BOOLEAN> |
| BYTEA | BYTES |
| _BYTEA | ARRAY<TINYINT> |
| INT2, SMALLSERIAL, INT4, SERIAL | INT |
| _INT2, _INT4 | ARRAY<INT> |
| INT8, BIGSERIAL | BIGINT |
| _INT8 | ARRAY<BIGINT> |
| FLOAT4 | FLOAT |
| _FLOAT4 | ARRAY<FLOAT> |
| FLOAT8 | DOUBLE |
| _FLOAT8 | ARRAY<DOUBLE> |
| NUMERIC (specified column size > 0) | DECIMAL(column size, digits to the right of the decimal point) |
| NUMERIC (specified column size < 0) | DECIMAL(38, 18) |
| BPCHAR, CHARACTER, VARCHAR, TEXT, GEOMETRY, GEOGRAPHY, JSON, JSONB, UUID | STRING |
| _BPCHAR, _CHARACTER, _VARCHAR, _TEXT | ARRAY<STRING> |
| TIMESTAMP | TIMESTAMP |
| TIME | TIME |
| DATE | DATE |
| Other data types | Not supported yet |
Sink Options

| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| url | String | Yes | - | The URL of the JDBC connection, e.g. jdbc:postgresql://localhost:5432/test. If you want to insert json or jsonb types, add the stringtype=unspecified option to the JDBC URL. |
| driver | String | Yes | - | The JDBC class name used to connect to the remote data source; for PostgreSQL the value is org.postgresql.Driver. |
| user | String | No | - | Connection instance user name. |
| password | String | No | - | Connection instance password. |
| query | String | No | - | Use this SQL to write upstream input data to the database, e.g. INSERT .... query has the higher priority. |
| database | String | No | - | Use this database and table-name to auto-generate SQL and write upstream input data to the database. This option is mutually exclusive with query and has a higher priority. |
| table | String | No | - | Use database and this table-name to auto-generate SQL and write upstream input data to the database. This option is mutually exclusive with query and has a higher priority. The table parameter can be set to the name of a table that does not yet exist; it will be used as the name of the created table, and it supports variables (${table_name}, ${schema_name}). Replacement rules: ${schema_name} is replaced with the schema name passed to the target side, and ${table_name} is replaced with the table name passed to the target side. |
| primary_keys | Array | No | - | This option is used to support insert, delete, and update operations when SQL is automatically generated. |
| support_upsert_by_query_primary_key_exist | Boolean | No | false | Choose between INSERT SQL and UPDATE SQL to process update events (INSERT, UPDATE_AFTER) based on whether a query by primary key finds an existing row. This configuration is only used when the database does not support upsert syntax. Note: this method has low performance. |
| connection_check_timeout_sec | Int | No | 30 | The time in seconds to wait for the database operation used to validate the connection to complete. |
| max_retries | Int | No | 0 | The number of retries for a failed batch submission (executeBatch). |
| batch_size | Int | No | 1000 | For batch writing, when the number of buffered records reaches batch_size or the time reaches checkpoint.interval, the data will be flushed into the database. |
| is_exactly_once | Boolean | No | false | Whether to enable exactly-once semantics, which uses XA transactions. If enabled, you need to set xa_data_source_class_name. |
| generate_sink_sql | Boolean | No | false | Generate SQL statements based on the database table you want to write to. |
| xa_data_source_class_name | String | No | - | The XA data source class name of the database driver; for PostgreSQL it is org.postgresql.xa.PGXADataSource. Please refer to the appendix for other data sources. |
| max_commit_attempts | Int | No | 3 | The number of retries for transaction commit failures. |
| transaction_timeout_sec | Int | No | -1 | The timeout after the transaction is opened; the default is -1 (never time out). Note that setting a timeout may affect exactly-once semantics. |
| auto_commit | Boolean | No | true | Automatic transaction commit is enabled by default. |
| field_ide | String | No | - | Identifies whether the field needs to be converted when synchronizing from the source to the sink. ORIGINAL indicates no conversion is needed; UPPERCASE indicates conversion to uppercase; LOWERCASE indicates conversion to lowercase. |
| properties | Map | No | - | Additional connection configuration parameters. When properties and the URL have the same parameters, the priority is determined by the specific implementation of the driver. For example, in MySQL, properties take precedence over the URL. |
| common-options | | No | - | Sink plugin common parameters, please refer to Sink Common Options for details. |
| schema_save_mode | Enum | No | CREATE_SCHEMA_WHEN_NOT_EXIST | Before the synchronization task starts, choose how to handle the existing table structure on the target side. |
| data_save_mode | Enum | No | APPEND_DATA | Before the synchronization task starts, choose how to handle data that already exists on the target side. |
| custom_sql | String | No | - | When data_save_mode is set to CUSTOM_PROCESSING, you should fill in the custom_sql parameter with an executable SQL statement, which will be executed before the synchronization task starts. |
| enable_upsert | Boolean | No | true | Enable upsert based on whether the primary_keys exist. If the task data has no duplicate keys, setting this parameter to false can speed up data import. |
table [string]

Use database and this table-name to auto-generate SQL and write upstream input data to the database. This option is mutually exclusive with query and has a higher priority.

The table parameter can be set to the name of a table that does not yet exist; it will be used as the name of the table that gets created, and it supports variables (${table_name}, ${schema_name}). Replacement rules: ${schema_name} is replaced with the schema name passed to the target side, and ${table_name} is replaced with the table name passed to the target side. For example (see the sketch after this list):

- ${schema_name}.${table_name}_test
- dbo.tt_${table_name}_sink
- public.sink_table
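For instance, the following sink sketch would write an upstream table public.source_table into a target table public.source_table_test. This is only an illustration: the env/source/sink job-config layout, the Jdbc plugin name, and the connection values are assumptions, not taken from this page; the table expression is the point here.

```hocon
sink {
  Jdbc {
    url = "jdbc:postgresql://localhost:5432/test"
    driver = "org.postgresql.Driver"
    user = "postgres"
    password = "123456"
    generate_sink_sql = true
    database = "test"
    # "${schema_name}" and "${table_name}" are replaced with the upstream schema and table names,
    # so upstream public.source_table is written to public.source_table_test
    table = "${schema_name}.${table_name}_test"
  }
}
```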
schema_save_mode [Enum]

Before the synchronization task starts, choose how to handle the existing table structure on the target side. Option introduction:

- RECREATE_SCHEMA: the table is created when it does not exist, and dropped and recreated when it already exists.
- CREATE_SCHEMA_WHEN_NOT_EXIST: the table is created when it does not exist and skipped when it already exists.
- ERROR_WHEN_SCHEMA_NOT_EXIST: an error is reported when the table does not exist.
data_save_mode [Enum]

Before the synchronization task starts, choose how to handle data that already exists on the target side. Option introduction:

- DROP_DATA: preserve the database structure and delete the data.
- APPEND_DATA: preserve the database structure and preserve the data.
- CUSTOM_PROCESSING: user-defined processing.
- ERROR_WHEN_DATA_EXISTS: an error is reported when data already exists.
custom_sql [String]

When data_save_mode is set to CUSTOM_PROCESSING, you should fill in the custom_sql parameter with an executable SQL statement, which will be executed before the synchronization task starts. A sketch combining the save-mode options follows.
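The connection values below are placeholders, and the custom_sql statement is a hypothetical cleanup query; it runs once before the task starts.

```hocon
sink {
  Jdbc {
    url = "jdbc:postgresql://localhost:5432/test"
    driver = "org.postgresql.Driver"
    user = "postgres"
    password = "123456"
    generate_sink_sql = true
    database = "test"
    table = "public.sink_table"
    # create the table if it is missing, keep it if it already exists
    schema_save_mode = "CREATE_SCHEMA_WHEN_NOT_EXIST"
    # run custom_sql once before the synchronization task starts
    data_save_mode = "CUSTOM_PROCESSING"
    custom_sql = "delete from public.sink_table where age < 0"
  }
}
```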
If partition_column is not set, it will run in single concurrency; if partition_column is set, it will be executed in parallel according to the concurrency of the task.
Task Example

Simple: This example defines a Nexus synchronization task that automatically generates data through FakeSource and sends it to the JDBC sink. FakeSource generates a total of 16 rows of data (row.num=16), with each row having two fields, name (string type) and age (int type). The final target table test_table will also contain 16 rows of data. Before running this job, you need to create the database test and the table test_table in your PostgreSQL instance.
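A job config for this example might look like the sketch below. The FakeSource schema and row.num come from the description above; the env/source/sink layout, plugin names, credentials, and insert-statement column order are placeholder assumptions.

```hocon
env {
  parallelism = 1
  job.mode = "BATCH"
}

source {
  FakeSource {
    # generate 16 rows with the two fields described above
    row.num = 16
    schema = {
      fields {
        name = "string"
        age = "int"
      }
    }
  }
}

sink {
  Jdbc {
    url = "jdbc:postgresql://localhost:5432/test"
    driver = "org.postgresql.Driver"
    user = "postgres"
    password = "123456"
    # write each upstream row into the pre-created table test_table
    query = "insert into test_table(name, age) values(?, ?)"
  }
}
```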
Generate Sink SQL: This example does not require writing complex SQL statements; you can configure the database name and table name, and the insert statements will be generated for you.
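A sketch of the sink block for this case (same placeholder connection values as the previous sketch):

```hocon
sink {
  Jdbc {
    url = "jdbc:postgresql://localhost:5432/test"
    driver = "org.postgresql.Driver"
    user = "postgres"
    password = "123456"
    # build the INSERT statement from the target table instead of writing query by hand
    generate_sink_sql = true
    database = "test"
    table = "public.test_table"
  }
}
```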
Exactly-once: For scenarios that require accurate writes, we guarantee exactly-once semantics.
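A sketch enabling exactly-once through XA transactions (placeholder connection values; is_exactly_once requires xa_data_source_class_name):

```hocon
sink {
  Jdbc {
    url = "jdbc:postgresql://localhost:5432/test"
    driver = "org.postgresql.Driver"
    user = "postgres"
    password = "123456"
    query = "insert into test_table(name, age) values(?, ?)"
    # exactly-once is guaranteed by XA transactions
    is_exactly_once = true
    xa_data_source_class_name = "org.postgresql.xa.PGXADataSource"
  }
}
```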
CDC (Change Data Capture): CDC change data is also supported. In this case, you need to configure database, table, and primary_keys.
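A sketch for writing CDC change events (placeholder connection values; the primary-key column names are illustrative):

```hocon
sink {
  Jdbc {
    url = "jdbc:postgresql://localhost:5432/test"
    driver = "org.postgresql.Driver"
    user = "postgres"
    password = "123456"
    generate_sink_sql = true
    database = "test"
    table = "public.sink_table"
    # primary_keys lets the connector generate INSERT/UPDATE/DELETE statements for change events
    primary_keys = ["id", "name"]
  }
}
```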