Source Connector Features
Source connectors have some common core features, and each source connector supports them to varying degrees.
exactly-once
If each piece of data in the data source is sent downstream by the source exactly once, the source connector supports exactly-once.
In Nexus, the Split being read and its offset (the position of the data within the split at that moment, such as a line number or byte offset) can be saved as a StateSnapshot when checkpointing. If the task restarts, the last StateSnapshot is restored, the Split and offset that were being read are located, and the source continues sending data downstream from that position.
For example, File and Kafka.
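The mechanism can be pictured with a minimal, self-contained sketch. The class and method names below are hypothetical, not the actual Nexus API: a file reader keeps a (split, offset) pair, exposes it as a state snapshot at checkpoint time, and resumes from a restored snapshot after a restart instead of re-reading from the beginning.

```java
import java.io.IOException;
import java.io.Serializable;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical illustration: a file reader that snapshots (split, offset) so a
// restarted task can resume without re-emitting lines it already sent.
public class ResumableFileReader {

    // The snapshot is just "which split" plus "how far into it we got".
    public record StateSnapshot(String splitPath, long nextLine) implements Serializable {}

    private final Path split;   // one split == one file in this sketch
    private long nextLine;      // offset: next line number to emit

    public ResumableFileReader(Path split, StateSnapshot restored) {
        this.split = split;
        // On restart, continue from the restored offset instead of line 0.
        this.nextLine = (restored != null && restored.splitPath().equals(split.toString()))
                ? restored.nextLine() : 0L;
    }

    // Called on every checkpoint; the engine persists the returned snapshot.
    public StateSnapshot snapshotState() {
        return new StateSnapshot(split.toString(), nextLine);
    }

    // Emits lines starting at the current offset; each line leaves the source once
    // as long as snapshot and restore are coordinated by the engine's checkpointing.
    public void pollNext(java.util.function.Consumer<String> output) throws IOException {
        List<String> lines = Files.readAllLines(split);
        for (long i = nextLine; i < lines.size(); i++) {
            output.accept(lines.get((int) i));
            nextLine = i + 1;
        }
    }
}
```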
column projection
The connector supports reading only the specified columns from the data source. (Note that reading all columns first and then filtering out the unneeded ones through the schema is not real column projection.)
For example, JDBCSource can use SQL to define which columns to read. KafkaSource reads all the content from a topic and then uses the schema to filter out unneeded columns; this is not column projection.
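The difference can be illustrated with a small JDBC sketch (the table and class names are made up for illustration and are not part of Nexus): the first method pushes the column list into the SQL query, so only the requested columns leave the database; the second reads every column with SELECT * and drops the unneeded ones afterwards, which is not real column projection.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of real vs. fake column projection.
public class ProjectionExample {

    // Real column projection: only the requested columns are read from the source.
    static List<Object[]> readProjected(Connection conn, List<String> columns) throws SQLException {
        String sql = "SELECT " + String.join(", ", columns) + " FROM orders";
        try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            List<Object[]> rows = new ArrayList<>();
            while (rs.next()) {
                Object[] row = new Object[columns.size()];
                for (int i = 0; i < columns.size(); i++) row[i] = rs.getObject(i + 1);
                rows.add(row);
            }
            return rows;
        }
    }

    // Not column projection: SELECT * pulls every column over the wire, then the
    // schema filter throws the unwanted ones away after they were already read.
    static List<Object[]> readAllThenFilter(Connection conn, List<String> columns) throws SQLException {
        String sql = "SELECT * FROM orders";
        try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            List<Object[]> rows = new ArrayList<>();
            while (rs.next()) {
                Object[] row = new Object[columns.size()];
                for (int i = 0; i < columns.size(); i++) row[i] = rs.getObject(columns.get(i));
                rows.add(row);
            }
            return rows;
        }
    }
}
```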
batch
Batch job mode: the data read is bounded, and the job stops after all data has been read.
stream
Streaming job mode: the data read is unbounded, and the job never stops.
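A minimal sketch of the distinction between the two modes, using hypothetical names rather than the Nexus API: a bounded reader eventually reports that it has no more data and the job stops, while an unbounded reader keeps waiting for new data.

```java
// Hypothetical illustration of boundedness: in batch mode the loop ends after the
// last record; in stream mode it keeps polling and the job never stops on its own.
public class BoundednessExample {

    enum Boundedness { BOUNDED, UNBOUNDED }

    interface SourceReader {
        // Returns false once a BOUNDED source has emitted everything.
        boolean hasNext();
        String next();
    }

    static void runJob(SourceReader reader, Boundedness mode) throws InterruptedException {
        while (true) {
            if (reader.hasNext()) {
                System.out.println(reader.next());
            } else if (mode == Boundedness.BOUNDED) {
                break;                 // batch job: all data read, stop the job
            } else {
                Thread.sleep(100);     // streaming job: wait for new data, never stop
            }
        }
    }
}
```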
parallelism
A parallel source connector supports configuring parallelism; each degree of parallelism creates a task to read the data. In a parallel source connector, the source is divided into multiple splits, and the enumerator then allocates the splits to the SourceReaders for processing.
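One simple way to picture the enumerator's role is a round-robin assignment of splits to reader tasks. The sketch below uses hypothetical names and is not the actual Nexus enumerator; it only shows how a list of splits could be spread across a configured parallelism.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: assign split i to reader (i % parallelism); each reader
// task then reads only the splits it owns.
public class RoundRobinEnumerator {

    record Split(String id) {}

    static Map<Integer, List<Split>> assign(List<Split> splits, int parallelism) {
        Map<Integer, List<Split>> assignment = new TreeMap<>();
        for (int i = 0; i < splits.size(); i++) {
            assignment.computeIfAbsent(i % parallelism, k -> new ArrayList<>()).add(splits.get(i));
        }
        return assignment;
    }

    public static void main(String[] args) {
        List<Split> splits = List.of(new Split("s0"), new Split("s1"),
                new Split("s2"), new Split("s3"), new Split("s4"));
        // With parallelism = 2: reader 0 gets [s0, s2, s4], reader 1 gets [s1, s3].
        System.out.println(assign(splits, 2));
    }
}
```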
support user-defined split
Users can configure the split rule.
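As an illustration of what a user-defined split rule might look like (the rule and its parameters are assumptions for this sketch, not actual Nexus configuration keys), a numeric key range can be cut into a configured number of splits:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a user-configured rule that splits a numeric key range
// into splitNum ranges, each of which becomes one split to read.
public class RangeSplitRule {

    record Split(String column, long lowerInclusive, long upperExclusive) {}

    static List<Split> split(String column, long min, long max, int splitNum) {
        List<Split> splits = new ArrayList<>();
        long step = Math.max(1, (max - min + splitNum - 1) / splitNum); // ceiling division
        for (long lo = min; lo < max; lo += step) {
            splits.add(new Split(column, lo, Math.min(lo + step, max)));
        }
        return splits;
    }

    public static void main(String[] args) {
        // e.g. split id values in [0, 1000) into 4 splits of 250 ids each
        split("id", 0, 1000, 4).forEach(System.out::println);
    }
}
```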
support multiple table read
Supports reading multiple tables in one Nexus job.
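One common way to support this, shown here as an assumption rather than the documented Nexus design, is to tag every emitted record with the identifier of the table it came from, so a single job can route rows from several tables downstream.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: each record carries its source table identifier so that
// one job can read several tables and downstream sinks can route rows correctly.
public class MultiTableRecord {

    record Record(String tableId, Map<String, Object> fields) {}

    public static void main(String[] args) {
        List<Record> batch = List.of(
                new Record("db1.orders", Map.of("id", 1, "amount", 9.5)),
                new Record("db1.users", Map.of("id", 7, "name", "alice")));
        // One job, two tables: group or route records by table identifier.
        batch.forEach(r -> System.out.println(r.tableId() + " -> " + r.fields()));
    }
}
```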