RabbitMQ Sink

Step 4 - Request and Header Validation

Exchange Name

The Exchange Name identifies the exchange, which determines how messages sent to it are handled and routed.

If an exchange with the same name already exists in the RabbitMQ server, it will be used instead of creating a new one.

Default Value: -
Possible Data Type: STRING

Exchange Type

The Exchange Type defines the category of the exchange. The available options include Direct, Fanout, Topic, and Headers.

Exchange Type Descriptions
  • Direct Exchange: Messages are routed to queues based on the message's routing key. It's a straightforward and precise method where the message goes to the queues whose binding key exactly matches the routing key of the message. Ideal for scenarios where direct and selective routing of messages is needed.

  • Fanout Exchange: Routes messages to all of the queues bound to it, without considering any routing keys. It's like a broadcast mechanism that efficiently disseminates messages to multiple destinations simultaneously. This type is useful in situations where the same message needs to be delivered to multiple queues.

  • Topic Exchange: Routes messages to one or many queues based on matching between a message routing key and the pattern that the queues are bound with. This type supports routing based on multiple criteria and wildcards, offering a flexible and powerful routing mechanism. It's well-suited for complex routing scenarios where messages are categorized into multiple criteria.

  • Headers Exchange: Routes messages based on the message's header attributes, ignoring the routing key. Unlike Direct and Topic exchanges, it can use multiple header attributes as routing criteria. This type is particularly useful when routing decisions depend on a rich set of message attributes, offering a high degree of routing flexibility.

Default Value: Direct
Possible Data Type: STRING
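
Outside the Cortex UI, the same exchange types can be declared with any standard AMQP client. The following is a minimal sketch using the Python pika library; the connection URL and exchange names are hypothetical and only illustrate how the Exchange Type value maps onto an AMQP exchange declaration.

    import pika

    # Open a channel to a local broker (the URL is an assumption for this sketch).
    connection = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@localhost:5672/"))
    channel = connection.channel()

    # One exchange per type; Cortex's Exchange Type option corresponds to exchange_type here.
    channel.exchange_declare(exchange="orders.direct", exchange_type="direct")
    channel.exchange_declare(exchange="orders.fanout", exchange_type="fanout")
    channel.exchange_declare(exchange="orders.topic", exchange_type="topic")
    channel.exchange_declare(exchange="orders.headers", exchange_type="headers")

    connection.close()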

Routing Key

The Routing Key is like an address that the exchange uses to decide how to route messages to queues. It is essential to provide a routing key when the exchange type is set to Direct or Topic.

Default Value: -
Possible Data Type: STRING
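
To make the matching behaviour concrete, here is a hypothetical pika sketch for a Direct exchange: the published message only reaches queues whose binding key exactly matches the routing key.

    import pika

    channel = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@localhost:5672/")).channel()

    channel.exchange_declare(exchange="orders.direct", exchange_type="direct")
    channel.queue_declare(queue="invoices")
    channel.queue_bind(queue="invoices", exchange="orders.direct", routing_key="invoice.created")

    # Only queues bound with routing key "invoice.created" receive this message.
    channel.basic_publish(
        exchange="orders.direct",
        routing_key="invoice.created",
        body=b'{"id": 42}',
    )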

Durable Exchange

When Durable Exchange is enabled by setting it to ON, the declared exchange will persist even after the broker is restarted.

Default Value: OFF
Possible Data Type: -

Auto Delete Exchange

The Auto-Delete Exchange option, when enabled, causes the exchange to be automatically removed when it is no longer in use.

Default Value: OFF
Possible Data Type: -
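
The Durable Exchange and Auto Delete Exchange toggles correspond to the standard AMQP exchange flags. In the hypothetical pika sketches above, they map onto the durable and auto_delete arguments of exchange_declare:

    channel.exchange_declare(
        exchange="orders.direct",
        exchange_type="direct",
        durable=True,        # Durable Exchange = ON: the exchange survives broker restarts
        auto_delete=False,   # Auto Delete Exchange = OFF: keep the exchange when no queues remain bound
    )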

Step 5 - Exchange, Queue Durability and Deletion

Content Type

Specifies the MIME content type of the message. Some example MIME content types are:

  • text/plain: Used for plain text messages. This is a common content type for simple text-based messages.

  • application/json: Indicates that the message body is a JSON formatted string. Widely used in systems that exchange data in JSON format.

  • application/xml: Used when the message contains XML data.

  • text/html: Indicates that the message body is HTML. This might be used in systems that need to transport HTML content.

  • application/octet-stream: A generic content type for representing binary data or data that does not match any other more specific MIME type.

MIME (Multipurpose Internet Mail Extensions) Content Type

In the context of RabbitMQ, the "content type" field is used to specify the MIME (Multipurpose Internet Mail Extensions) content type of the message. This field is part of the message properties and is important for indicating the format or the type of data contained in the message. Here's a breakdown of its significance:

  • MIME Content Type: MIME is a standard that indicates the nature and format of a document, file, or assortment of bytes. It's widely used in email to send different types of content (like text, images, video) and is also relevant in messaging systems like RabbitMQ.

  • Purpose in RabbitMQ: By specifying the MIME content type in RabbitMQ messages, producers inform consumers about the format of the message body. This helps consumers to correctly interpret and process the data. For example, a MIME type of text/plain indicates that the message body is plain text, while application/json indicates a JSON formatted message.

  • Importance for Consumers: Knowing the content type is crucial for consumers, especially in systems where messages could be in various formats. It ensures that the consumer application can parse and use the message data correctly. For instance, if a consumer receives a message with MIME type application/xml, it knows to process it as an XML document.

  • Implementation: In RabbitMQ, when publishing a message, you can set the content type property in the message properties. This is typically done in the code where the message is being published to the queue.

Default Value: -
Possible Data Type: STRING
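
For reference, this is how a publisher would typically set the content type outside the Cortex UI, again sketched with pika (the exchange, routing key, and payload are illustrative):

    import json
    import pika

    channel = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@localhost:5672/")).channel()
    channel.exchange_declare(exchange="orders.direct", exchange_type="direct")

    # content_type tells consumers how to interpret the message body (JSON in this case).
    channel.basic_publish(
        exchange="orders.direct",
        routing_key="invoice.created",
        body=json.dumps({"id": 42}).encode("utf-8"),
        properties=pika.BasicProperties(content_type="application/json"),
    )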

Content Encoding

Specifies the MIME content encoding of the message. Some example MIME content encodings are:

  • UTF-8: A common character encoding for text data. Used when the text in the message is encoded in UTF-8, which is a standard for Unicode text.

  • base64: Used for binary data that has been encoded into a base64 format. Base64 encoding is often used to ensure safe transmission of binary content over systems that might only support text.

  • gzip: Indicates that the message content has been compressed using the gzip compression method. Useful for reducing the size of the message payload, especially for larger data sets.

  • UTF-16: Another Unicode character encoding; used when the producing system represents text in UTF-16 rather than UTF-8.

MIME (Multipurpose Internet Mail Extensions) Content Encoding

In RabbitMQ, the "content encoding" field, specifies the MIME content encoding of the message. This is an essential aspect of message properties, and here's a detailed explanation in the context of RabbitMQ:

  • MIME Content Encoding: MIME content encoding describes how the message body is encoded or transformed (for example, compressed or base64-encoded). It's not about the type of content (like JSON or XML) but about how that content is represented during transmission.

  • Purpose in RabbitMQ: In RabbitMQ, the content encoding field informs the consumers about how the message content is encoded. For instance, a message body might be encoded using base64 encoding, a common encoding for binary data to ensure safe transport through systems that might not handle binary data well.

  • Importance for Message Processing: This field is crucial for consumers to understand how to decode or interpret the received message correctly. If a message is encoded with base64 encoding, the consumer needs to decode it from base64 to get the original message content.

  • Use Cases: Content encoding becomes particularly important when dealing with binary data (like images, files, etc.) or when ensuring compatibility across different systems that might interpret binary data differently. By encoding the data, RabbitMQ ensures that the message content remains intact and unaltered during transit.

  • Implementation: When publishing a message in RabbitMQ, you can set the content encoding property along with the content type. This is usually done in the code where the message is being formatted and sent to the queue.

Default Value: -
Possible Data Type: STRING
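
A similar hypothetical sketch for content encoding, assuming a binary payload that is base64-encoded before publishing:

    import base64
    import pika

    channel = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@localhost:5672/")).channel()
    channel.exchange_declare(exchange="orders.direct", exchange_type="direct")

    raw = b"\x00\x01\x02 binary payload"
    channel.basic_publish(
        exchange="orders.direct",
        routing_key="invoice.created",
        body=base64.b64encode(raw),
        properties=pika.BasicProperties(
            content_type="application/octet-stream",
            content_encoding="base64",   # consumers must base64-decode the body
        ),
    )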

Message Priority

Sets a value from 0 to 9 (highest) to indicate the priority of the message.

Default Value: 0
Possible Data Type: STRING

Correlation ID

Associates the current message with a previous request; commonly used to match replies with their corresponding requests.

Default Value: -
Possible Data Type: STRING

Reply to Queue

Specifies the callback queue to which replies should be sent, typically an anonymous exclusive queue in request/reply patterns.

Default Value: -
Possible Data Type: STRING

Message Expiration

The expiration time after which the message is deleted. The value of the expiration field describes the TTL (Time To Live) period in milliseconds.

Default Value: -
Possible Data Type: STRING
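
Message Priority, Correlation ID, Reply to Queue, and Message Expiration are all standard AMQP message properties. A hypothetical pika sketch of a request that sets all four (the values are placeholders):

    import pika

    channel = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@localhost:5672/")).channel()
    channel.exchange_declare(exchange="orders.direct", exchange_type="direct")

    channel.basic_publish(
        exchange="orders.direct",
        routing_key="invoice.created",
        body=b'{"id": 42}',
        properties=pika.BasicProperties(
            priority=5,                          # 0-9, higher means more urgent
            correlation_id="req-1234",           # lets the consumer match the reply to this request
            reply_to="amq.rabbitmq.reply-to",    # queue where the reply should be published
            expiration="60000",                  # TTL in milliseconds, passed as a string
        ),
    )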


Step 6 - Secure Communication

Enable TLS Encryption

TLS Encryption indicates whether an encrypted communication channel should be established. Unless a Runner-level Vault is activated for the Node, a set of randomly generated JVM secrets (truststore and keystore passwords) will be used.

Default Value: ON
Possible Data Type: -

Certificates

You can use certificates that have already been uploaded on the Certificates page, or upload a new one below.

Default Value: -
Possible Data Type: -

TLS Version

TLS Version specifies the version of TLS/SSL to be used for the secure communication.

Default Value: SSL
Possible Data Type: -
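
For context, a TLS-enabled connection to RabbitMQ outside the platform typically looks like the following pika sketch; the host and certificate file names are hypothetical, and 5671 is RabbitMQ's conventional TLS port.

    import ssl
    import pika

    # Build an SSL context from the CA and client certificate/key files (paths are assumptions).
    context = ssl.create_default_context(cafile="ca_certificate.pem")
    context.load_cert_chain("client_certificate.pem", "client_key.pem")
    context.minimum_version = ssl.TLSVersion.TLSv1_2   # pin the minimum TLS version

    params = pika.ConnectionParameters(
        host="rabbitmq.example.com",
        port=5671,
        ssl_options=pika.SSLOptions(context),
    )
    connection = pika.BlockingConnection(params)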


Step 7 - Message Handling and Performance

Connection URI

The Connection URI is the address used to establish a connection with an AMQP server.

Examples: amqp://guest:guest, amqp://guest:guest@localhost:2354

Default Value: -
Possible Data Type: STRING

Heartbeat Interval

The Heartbeat Interval specifies the duration in seconds after which the RabbitMQ server and client libraries should assume the peer TCP connection to be unresponsive or down.

Default Value: 60
Possible Data Type: INTEGER
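
Both the Connection URI and the Heartbeat Interval map onto the standard AMQP URI. For example, with pika the heartbeat can be supplied as a query parameter (the credentials and host below are hypothetical):

    import pika

    # %2F is the URL-encoded default vhost "/"; heartbeat is given in seconds.
    params = pika.URLParameters("amqp://guest:guest@localhost:5672/%2F?heartbeat=60")
    connection = pika.BlockingConnection(params)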

Delivery Mode

Determines the persistence of the message: Non-persistent or Persistent.

RabbitMQ Delivery Modes

Delivery Mode refers to a property of a message that determines how the message is stored and delivered by the broker. Specifically, it indicates whether the message is persistent or non-persistent.

  1. Non-Persistent:

    • Description: When a message is marked as non-persistent, it is stored only in memory.

    • Behavior: If the RabbitMQ server restarts or if there's a failure, non-persistent messages in the queue will be lost.

    • Use Case: This mode is suitable for messages that are not essential or can be easily recreated. It offers higher performance compared to persistent messages due to reduced disk I/O.

  2. Persistent:

    • Description: Persistent messages are stored both in memory and on disk.

    • Behavior: If the RabbitMQ server restarts, these messages are not lost and will be available once the server is back online.

    • Use Case: This mode is used for critical messages that must not be lost. It is important for ensuring message durability but comes with a performance cost due to disk I/O.

Considerations

  • Performance vs Durability: Choosing between non-persistent and persistent delivery modes is often a trade-off between performance and durability. Persistent messages ensure reliability at the cost of lower throughput and higher latency due to disk access.

  • Queue Configuration: While the delivery mode is set per message, the queue's durability is also a factor. A durable queue will store persistent messages across server restarts, while a non-durable queue will not.

  • Transactional or Confirmed Publishing: For critical data, combining persistent messages with transactional or confirmed publishing can further ensure message reliability.

Default Value: 1
Possible Data Type: -
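
In AMQP terms, the Delivery Mode option corresponds to the delivery_mode message property (1 = non-persistent, 2 = persistent). A hypothetical pika sketch that also declares the queue as durable so that persistent messages survive a broker restart:

    import pika

    channel = pika.BlockingConnection(pika.URLParameters("amqp://guest:guest@localhost:5672/")).channel()

    channel.queue_declare(queue="invoices", durable=True)   # a durable queue keeps persistent messages across restarts
    channel.basic_publish(
        exchange="",                  # default exchange routes directly to the queue named by routing_key
        routing_key="invoices",
        body=b'{"id": 42}',
        properties=pika.BasicProperties(delivery_mode=2),   # 2 = persistent, 1 = non-persistent
    )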

Step 8 - Preview

In the Preview Step, you're provided with a concise summary of all the changes you've made to the RabbitMQ Sink Node. This step is where you review and confirm that your configuration is as intended before completing the node setup.

  • Viewing Configurations: The Preview Step presents a consolidated view of your node setup.

  • Saving and Exiting: Use the Complete button to save your changes, exit the node, and return to the Canvas.

  • Revisions: Use the Back button to return to any step and modify the node setup.

The Preview Step offers a user-friendly summary to manage and finalize node settings in Cortex.

