Class BaseProducerConfigBuilder<T extends BaseProducerConfigBuilder<T> & org.swisspush.kobuka.client.base.ClientBuilderFunctions<T>>
- All Implemented Interfaces:
ProducerConfigFields<T>
- Direct Known Subclasses:
ProducerConfigBuilder
-
Constructor Summary
- BaseProducerConfigBuilder()
-
Method Summary
Each producer configuration key has a corresponding fluent setter, named by camel-casing the key (for example bootstrapServers(String value) and bootstrapServers(List<String> value) for bootstrap.servers, bufferMemory(Long value) for buffer.memory, deliveryTimeoutMs(Integer value) for delivery.timeout.ms). Setters exist for acks, batch.size, bootstrap.servers, buffer.memory, client.dns.lookup, client.id, compression.type, connections.max.idle.ms, delivery.timeout.ms, enable.idempotence, interceptor.classes, key.serializer, linger.ms, max.block.ms, max.in.flight.requests.per.connection, max.request.size, metadata.max.age.ms, metadata.max.idle.ms, metric.reporters, metrics.num.samples, metrics.recording.level, metrics.sample.window.ms, the partitioner.* keys, receive.buffer.bytes, reconnect.backoff.ms, reconnect.backoff.max.ms, request.timeout.ms, retries, retry.backoff.ms, the sasl.* keys, security.protocol, security.providers, send.buffer.bytes, the socket.connection.setup.timeout.* keys, the ssl.* keys, transactional.id, transaction.timeout.ms and value.serializer. List-valued keys accept either a String or a List<String>; password-valued keys (such as sasl.jaas.config and the ssl key/truststore passwords) accept either a String or an org.apache.kafka.common.config.types.Password.
In addition, the builder defines:
- void copyFrom(BaseCommonClientConfigBuilder<?> parent)
- void copyFrom(BaseProducerConfigBuilder<?> parent)
- build()
- <R> R transform(Function<BaseProducerConfigBuilder<?>, R> fn)
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.swisspush.kobuka.client.base.ProducerConfigFields:
self
-
Constructor Details
-
BaseProducerConfigBuilder
public BaseProducerConfigBuilder()
-
Method Details
-
copyFrom
-
copyFrom
-
build
-
transform
-
build
-
property
-
asSupplier
-
bootstrapServers
Description copied from interface: ProducerConfigFields (bootstrap.servers)
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers.
This list should be in the form host1:port1,host2:port2,...
Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
Default: ""
Valid Values: non-null string
Importance: high
- Specified by:
bootstrapServers in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
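Whichever overload is used, the setter ultimately populates the standard Kafka bootstrap.servers key in host1:port1,host2:port2,... form. As a dependency-free sketch (the broker hostnames below are hypothetical, and this uses plain java.util.Properties rather than this builder):

```java
import java.util.List;
import java.util.Properties;

public class BootstrapSketch {
    public static void main(String[] args) {
        // The List<String> overload corresponds to joining the entries with
        // commas into the documented host1:port1,host2:port2,... format.
        // "kafka-1"/"kafka-2" are made-up example hosts.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers",
                String.join(",", List.of("kafka-1:9092", "kafka-2:9092")));
        System.out.println(props.getProperty("bootstrap.servers"));
        // prints kafka-1:9092,kafka-2:9092
    }
}
```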
-
bootstrapServers
Description copied from interface: ProducerConfigFields (bootstrap.servers)
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.
The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers.
This list should be in the form host1:port1,host2:port2,...
Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
Default: ""
Valid Values: non-null string
Importance: high
- Specified by:
bootstrapServers in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
clientDnsLookup
Description copied from interface: ProducerConfigFields (client.dns.lookup)
Controls how the client uses DNS lookups.
If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established.
After a disconnection, the next IP is used.
Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however).
If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names.
After the bootstrap phase, this behaves the same as use_all_dns_ips.
Default: use_all_dns_ips
Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
Importance: medium
- Specified by:
clientDnsLookup in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
bufferMemory
Description copied from interface: ProducerConfigFields (buffer.memory)
The total bytes of memory the producer can use to buffer records waiting to be sent to the server.
If records are sent faster than they can be delivered to the server, the producer will block for max.block.ms, after which it will throw an exception.
This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering.
Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests.
Default: 33554432
Valid Values: [0,...]
Importance: high
- Specified by:
bufferMemory in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
retries
Description copied from interface: ProducerConfigFields (retries)
Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.
Note that this retry is no different than if the client resent the record upon receiving the error.
Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first, before successful acknowledgement.
Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior.
Enabling idempotence requires this config value to be greater than 0.
If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.
Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to 1 will potentially change the ordering of records, because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.
Default: 2147483647
Valid Values: [0,...,2147483647]
Importance: high
- Specified by:
retries in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
acks
Description copied from interface: ProducerConfigFields (acks)
The number of acknowledgments the producer requires the leader to have received before considering a request complete.
This controls the durability of records that are sent.
The following settings are allowed:
- acks=0: The producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1.
- acks=1: The leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case, should the leader fail immediately after acknowledging the record but before the followers have replicated it, the record will be lost.
- acks=all: The leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee and is equivalent to the acks=-1 setting.
Note that enabling idempotence requires this config value to be 'all'.
If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.
Default: all
Valid Values: [all, -1, 0, 1]
Importance: low
- Specified by:
acks in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
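The documented value set can be sketched as a small validation helper. This is not the library's own validation logic, just an illustration of the allowed values and the fact that -1 is an alias for all:

```java
import java.util.Set;

public class AcksCheck {
    // Allowed values per the documentation above; "-1" and "all" are equivalent.
    static final Set<String> VALID_ACKS = Set.of("all", "-1", "0", "1");

    static String normalize(String acks) {
        if (!VALID_ACKS.contains(acks)) {
            throw new IllegalArgumentException("invalid acks: " + acks);
        }
        // Canonicalize the "-1" alias to "all".
        return acks.equals("-1") ? "all" : acks;
    }

    public static void main(String[] args) {
        System.out.println(normalize("-1")); // prints all
    }
}
```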
-
compressionType
Description copied from interface: ProducerConfigFields (compression.type)
The compression type for all data generated by the producer.
The default is none (i.e. no compression).
Valid values are none, gzip, snappy, lz4, or zstd.
Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression).
Default: none
Valid Values: [none, gzip, snappy, lz4, zstd]
Importance: high
- Specified by:
compressionType in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
batchSize
Description copied from interface: ProducerConfigFields (batch.size)
The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition.
This helps performance on both the client and the server.
This configuration controls the default batch size in bytes.
No attempt will be made to batch records larger than this size.
Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent.
A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely).
A very large batch size may use memory a bit more wastefully, as we will always allocate a buffer of the specified batch size in anticipation of additional records.
Note: This setting gives the upper bound of the batch size to be sent.
If we have fewer than this many bytes accumulated for this partition, we will 'linger' for the linger.ms time waiting for more records to show up.
This linger.ms setting defaults to 0, which means we'll immediately send out a record even if the accumulated batch size is under this batch.size setting.
Default: 16384
Valid Values: [0,...]
Importance: medium
- Specified by:
batchSize in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
partitionerAdaptivePartitioningEnable
Description copied from interface: ProducerConfigFields (partitioner.adaptive.partitioning.enable)
When set to 'true', the producer will try to adapt to broker performance and produce more messages to partitions hosted on faster brokers.
If 'false', the producer will try to distribute messages uniformly.
Note: this setting has no effect if a custom partitioner is used.
Default: true
Valid Values:
Importance: low
- Specified by:
partitionerAdaptivePartitioningEnable in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
partitionerAvailabilityTimeoutMs
Description copied from interface: ProducerConfigFields (partitioner.availability.timeout.ms)
If a broker cannot process produce requests from a partition for partitioner.availability.timeout.ms time, the partitioner treats that partition as not available.
If the value is 0, this logic is disabled.
Note: this setting has no effect if a custom partitioner is used or partitioner.adaptive.partitioning.enable is set to 'false'.
Default: 0
Valid Values: [0,...]
Importance: low
- Specified by:
partitionerAvailabilityTimeoutMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
partitionerIgnoreKeys
Description copied from interface: ProducerConfigFields (partitioner.ignore.keys)
When set to 'true', the producer won't use record keys to choose a partition.
If 'false', the producer chooses a partition based on a hash of the key when a key is present.
Note: this setting has no effect if a custom partitioner is used.
Default: false
Valid Values:
Importance: medium
- Specified by:
partitionerIgnoreKeys in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
lingerMs
Description copied from interface: ProducerConfigFields (linger.ms)
The producer groups together any records that arrive in between request transmissions into a single batched request.
Normally this occurs only under load when records arrive faster than they can be sent out.
However, in some circumstances the client may want to reduce the number of requests even under moderate load.
This setting accomplishes this by adding a small amount of artificial delay; that is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together.
This can be thought of as analogous to Nagle's algorithm in TCP.
This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting; however, if we have fewer than this many bytes accumulated for this partition, we will 'linger' for the specified time waiting for more records to show up.
This setting defaults to 0 (i.e. no delay).
Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.
Default: 0
Valid Values: [0,...]
Importance: medium
- Specified by:
lingerMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
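The interplay between batch.size and linger.ms described above reduces to a simple send/wait decision: a full batch is sent immediately, a partial batch waits up to linger.ms. A minimal sketch of that decision rule (illustrative only, not the client's actual accumulator code):

```java
public class BatchFlushSketch {
    // A batch is sent when either batch.size bytes have accumulated for a
    // partition, or the linger.ms deadline has passed for a partial batch.
    static boolean shouldFlush(int accumulatedBytes, long waitedMs,
                               int batchSize, long lingerMs) {
        return accumulatedBytes >= batchSize || waitedMs >= lingerMs;
    }

    public static void main(String[] args) {
        // With the defaults (batch.size=16384, linger.ms=0) any record is
        // eligible to be sent immediately, even in a tiny batch.
        System.out.println(shouldFlush(100, 0, 16384, 0));  // prints true
        // With linger.ms=5, a partial batch keeps waiting for more records.
        System.out.println(shouldFlush(100, 2, 16384, 5));  // prints false
    }
}
```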
-
deliveryTimeoutMs
Description copied from interface: ProducerConfigFields (delivery.timeout.ms)
An upper bound on the time to report success or failure after a call to send() returns.
This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures.
The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline.
The value of this config should be greater than or equal to the sum of request.timeout.ms and linger.ms.
Default: 120000 (2 minutes)
Valid Values: [0,...]
Importance: medium
- Specified by:
deliveryTimeoutMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
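The stated constraint (delivery.timeout.ms >= request.timeout.ms + linger.ms) is easy to check up front when assembling a configuration. A minimal sketch of that check, assuming nothing beyond the documented inequality:

```java
public class DeliveryTimeoutCheck {
    // Documented constraint:
    //   delivery.timeout.ms >= request.timeout.ms + linger.ms
    static void validate(long deliveryTimeoutMs, long requestTimeoutMs, long lingerMs) {
        if (deliveryTimeoutMs < requestTimeoutMs + lingerMs) {
            throw new IllegalArgumentException(
                "delivery.timeout.ms must be >= request.timeout.ms + linger.ms");
        }
    }

    public static void main(String[] args) {
        // Defaults: 120000 >= 30000 + 0, so this passes.
        validate(120_000, 30_000, 0);
        System.out.println("defaults ok");
    }
}
```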
-
clientId
Description copied from interface: ProducerConfigFields (client.id)
An id string to pass to the server when making requests.
The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
Default: ""
Valid Values:
Importance: medium
- Specified by:
clientId in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sendBufferBytes
Description copied from interface: ProducerConfigFields (send.buffer.bytes)
The size of the TCP send buffer (SO_SNDBUF) to use when sending data.
If the value is -1, the OS default will be used.
Default: 131072 (128 kibibytes)
Valid Values: [-1,...]
Importance: medium
- Specified by:
sendBufferBytes in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
receiveBufferBytes
Description copied from interface: ProducerConfigFields (receive.buffer.bytes)
The size of the TCP receive buffer (SO_RCVBUF) to use when reading data.
If the value is -1, the OS default will be used.
Default: 32768 (32 kibibytes)
Valid Values: [-1,...]
Importance: medium
- Specified by:
receiveBufferBytes in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
maxRequestSize
Description copied from interface: ProducerConfigFields (max.request.size)
The maximum size of a request in bytes.
This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.
This is also effectively a cap on the maximum uncompressed record batch size.
Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.
Default: 1048576
Valid Values: [0,...]
Importance: medium
- Specified by:
maxRequestSize in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
reconnectBackoffMs
Description copied from interface: ProducerConfigFields (reconnect.backoff.ms)
The base amount of time to wait before attempting to reconnect to a given host.
This avoids repeatedly connecting to a host in a tight loop.
This backoff applies to all connection attempts by the client to a broker.
Default: 50
Valid Values: [0,...]
Importance: low
- Specified by:
reconnectBackoffMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
reconnectBackoffMaxMs
Description copied from interface: ProducerConfigFields (reconnect.backoff.max.ms)
The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect.
If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum.
After calculating the backoff increase, 20% random jitter is added to avoid connection storms.
Default: 1000 (1 second)
Valid Values: [0,...]
Importance: low
- Specified by:
reconnectBackoffMaxMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
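The behavior described above (exponential growth per consecutive failure, capped at reconnect.backoff.max.ms, with roughly 20% random jitter) can be sketched as follows. The exact formula inside the Kafka client may differ; this is only an illustration of the documented shape:

```java
import java.util.concurrent.ThreadLocalRandom;

public class ReconnectBackoffSketch {
    // Sketch: base * 2^failures, capped at maxMs, then +/-20% random jitter.
    static long backoffMs(int failures, long baseMs, long maxMs) {
        double exp = baseMs * Math.pow(2, failures);
        double capped = Math.min(exp, maxMs);
        double jitter = 0.8 + 0.4 * ThreadLocalRandom.current().nextDouble();
        return (long) (capped * jitter);
    }

    public static void main(String[] args) {
        // With the defaults (base 50 ms, max 1000 ms), after many consecutive
        // failures the backoff settles within +/-20% of the 1000 ms cap.
        long b = backoffMs(10, 50, 1000);
        System.out.println(b >= 800 && b <= 1200); // prints true
    }
}
```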
-
retryBackoffMs
Description copied from interface: ProducerConfigFields (retry.backoff.ms)
The amount of time to wait before attempting to retry a failed request to a given topic partition.
This avoids repeatedly sending requests in a tight loop under some failure scenarios.
Default: 100
Valid Values: [0,...]
Importance: low
- Specified by:
retryBackoffMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
maxBlockMs
Description copied from interface: ProducerConfigFields (max.block.ms)
The configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), commitTransaction() and abortTransaction() methods will block.
For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout).
For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable.
The transaction-related methods always block, but may timeout if the transaction coordinator could not be discovered or did not respond within the timeout.
Default: 60000 (1 minute)
Valid Values: [0,...]
Importance: medium
- Specified by:
maxBlockMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
requestTimeoutMs
Description copied from interface: ProducerConfigFields (request.timeout.ms)
The configuration controls the maximum amount of time the client will wait for the response of a request.
If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted.
This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries.
Default: 30000 (30 seconds)
Valid Values: [0,...]
Importance: medium
- Specified by:
requestTimeoutMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
metadataMaxAgeMs
Description copied from interface: ProducerConfigFields (metadata.max.age.ms)
The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes, to proactively discover any new brokers or partitions.
Default: 300000 (5 minutes)
Valid Values: [0,...]
Importance: low
- Specified by:
metadataMaxAgeMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
metadataMaxIdleMs
Description copied from interface: ProducerConfigFields (metadata.max.idle.ms)
Controls how long the producer will cache metadata for a topic that's idle.
If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the next access to it will force a metadata fetch request.
Default: 300000 (5 minutes)
Valid Values: [5000,...]
Importance: low
- Specified by:
metadataMaxIdleMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
metricsSampleWindowMs
Description copied from interface: ProducerConfigFields (metrics.sample.window.ms)
The window of time a metrics sample is computed over.
Default: 30000 (30 seconds)
Valid Values: [0,...]
Importance: low
- Specified by:
metricsSampleWindowMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
metricsNumSamples
Description copied from interface: ProducerConfigFields (metrics.num.samples)
The number of samples maintained to compute metrics.
Default: 2
Valid Values: [1,...]
Importance: low
- Specified by:
metricsNumSamples in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
metricsRecordingLevel
Description copied from interface: ProducerConfigFields (metrics.recording.level)
The highest recording level for metrics.
Default: INFO
Valid Values: [INFO, DEBUG, TRACE]
Importance: low
- Specified by:
metricsRecordingLevel in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
metricReporters
Description copied from interface: ProducerConfigFields (metric.reporters)
A list of classes to use as metrics reporters.
Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation.
The JmxReporter is always included to register JMX statistics.
Default: ""
Valid Values: non-null string
Importance: low
- Specified by:
metricReporters in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
metricReporters
Description copied from interface: ProducerConfigFields (metric.reporters)
A list of classes to use as metrics reporters.
Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation.
The JmxReporter is always included to register JMX statistics.
Default: ""
Valid Values: non-null string
Importance: low
- Specified by:
metricReporters in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
maxInFlightRequestsPerConnection
Description copied from interface: ProducerConfigFields (max.in.flight.requests.per.connection)
The maximum number of unacknowledged requests the client will send on a single connection before blocking.
Note that if this configuration is set to be greater than 1 and enable.idempotence is set to false, there is a risk of message reordering after a failed send due to retries (i.e., if retries are enabled); if retries are disabled or if enable.idempotence is set to true, ordering will be preserved.
Additionally, enabling idempotence requires the value of this configuration to be less than or equal to 5.
If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.
Default: 5
Valid Values: [1,...]
Importance: low
- Specified by:
maxInFlightRequestsPerConnection in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
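Taken together with the acks and retries sections above, the documented prerequisites for enable.idempotence=true are: acks=all (or -1), retries greater than 0, and max.in.flight.requests.per.connection at most 5. A small sketch of that combined check (illustrative, not the client's own validation):

```java
public class IdempotencePrereqs {
    // Documented prerequisites for enable.idempotence=true:
    //   acks=all (or its -1 alias), retries > 0, max.in.flight <= 5.
    static boolean idempotenceAllowed(String acks, int retries, int maxInFlight) {
        boolean acksAll = acks.equals("all") || acks.equals("-1");
        return acksAll && retries > 0 && maxInFlight <= 5;
    }

    public static void main(String[] args) {
        // The defaults (acks=all, retries=2147483647, max.in.flight=5) satisfy
        // all three prerequisites.
        System.out.println(idempotenceAllowed("all", Integer.MAX_VALUE, 5)); // prints true
        // acks=1 conflicts with idempotence.
        System.out.println(idempotenceAllowed("1", 3, 5)); // prints false
    }
}
```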
-
keySerializer
Description copied from interface: ProducerConfigFields (key.serializer)
Serializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface.
Default:
Valid Values:
Importance: high
- Specified by:
keySerializer in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
valueSerializer
Description copied from interface: ProducerConfigFields (value.serializer)
Serializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface.
Default:
Valid Values:
Importance: high
- Specified by:
valueSerializer in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
socketConnectionSetupTimeoutMs
Description copied from interface: ProducerConfigFields (socket.connection.setup.timeout.ms)
The amount of time the client will wait for the socket connection to be established.
If the connection is not built before the timeout elapses, clients will close the socket channel.
Default: 10000 (10 seconds)
Valid Values:
Importance: medium
- Specified by:
socketConnectionSetupTimeoutMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
socketConnectionSetupTimeoutMaxMs
Description copied from interface: ProducerConfigFields (socket.connection.setup.timeout.max.ms)
The maximum amount of time the client will wait for the socket connection to be established.
The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum.
To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout, resulting in a random range between 20% below and 20% above the computed value.
Default: 30000 (30 seconds)
Valid Values:
Importance: medium
- Specified by:
socketConnectionSetupTimeoutMaxMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
connectionsMaxIdleMs
Description copied from interface: ProducerConfigFields (connections.max.idle.ms)
Close idle connections after the number of milliseconds specified by this config.
Default: 540000 (9 minutes)
Valid Values:
Importance: medium
- Specified by:
connectionsMaxIdleMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
partitionerClass
Description copied from interface: ProducerConfigFields (partitioner.class)
A class to use to determine which partition to send records to when producing.
Available options are:
- If not set, the default partitioning logic is used. This strategy tries to stick to a partition until batch.size bytes are produced to the partition. It works with the strategy:
  - If no partition is specified but a key is present, choose a partition based on a hash of the key.
  - If no partition or key is present, choose the sticky partition that changes when batch.size bytes are produced to the partition.
- org.apache.kafka.clients.producer.RoundRobinPartitioner: Each record in a series of consecutive records is sent to a different partition (whether or not a 'key' is provided), until we run out of partitions and start over again. Note: there is a known issue that causes uneven distribution when a new batch is created; see KAFKA-9965 for more detail.
Implementing the org.apache.kafka.clients.producer.Partitioner interface allows you to plug in a custom partitioner.
Default: null
Valid Values:
Importance: medium
- Specified by:
partitionerClass in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
interceptorClasses
Description copied from interface: ProducerConfigFields (interceptor.classes)
A list of classes to use as interceptors.
Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster.
By default, there are no interceptors.
Default: ""
Valid Values: non-null string
Importance: low
- Specified by:
interceptorClasses in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
interceptorClasses
Description copied from interface:ProducerConfigFieldsinterceptor.classes
A list of classes to use as interceptors.
Implementing theorg.apache.kafka.clients.producer.ProducerInterceptorinterface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster.
By default, there are no interceptors.Default: ""
Valid Values: non-null string
Importance: low
- Specified by:
interceptorClassesin interfaceProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
securityProtocol
Description copied from interface: ProducerConfigFields
security.protocol
Protocol used to communicate with brokers.
Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
Default: PLAINTEXT
Valid Values: [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL]
Importance: medium
- Specified by:
securityProtocol in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
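As a sketch of how the security protocol setting is supplied to a client, here is a plain `java.util.Properties` helper using Kafka's documented config key; the broker address passed in by the caller is a placeholder, and a real producer would be built from these properties via the builder or `KafkaProducer` constructor.

```java
import java.util.Properties;

class SecurityProtocolExample {
    // Sketch: base properties for an encrypted, SASL-authenticated client.
    // SASL_SSL is one of the four valid values listed above.
    static Properties saslSslProps(String bootstrapServers) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", bootstrapServers); // placeholder address
        props.setProperty("security.protocol", "SASL_SSL");       // TLS + SASL auth
        return props;
    }
}
```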
-
securityProviders
Description copied from interface: ProducerConfigFields
security.providers
A list of configurable creator classes, each returning a provider implementing security algorithms.
These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.
Default: null
Valid Values:
Importance: low
- Specified by:
securityProviders in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslProtocol
Description copied from interface: ProducerConfigFields
ssl.protocol
The SSL protocol used to generate the SSLContext.
The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise.
This value should be fine for most use cases.
Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'.
'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.
With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'.
If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.
Default: TLSv1.3
Valid Values:
Importance: medium
- Specified by:
sslProtocol in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslProvider
Description copied from interface: ProducerConfigFields
ssl.provider
The name of the security provider used for SSL connections.
Default value is the default security provider of the JVM.
Default: null
Valid Values:
Importance: medium
- Specified by:
sslProvider in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslCipherSuites(String value)
Description copied from interface: ProducerConfigFields
ssl.cipher.suites
A list of cipher suites.
This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.
By default all the available cipher suites are supported.
Default: null
Valid Values:
Importance: low
- Specified by:
sslCipherSuites in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslCipherSuites(List<String> value)
Description copied from interface: ProducerConfigFields
ssl.cipher.suites
A list of cipher suites.
This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.
By default all the available cipher suites are supported.
Default: null
Valid Values:
Importance: low
- Specified by:
sslCipherSuites in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslEnabledProtocols(String value)
Description copied from interface: ProducerConfigFields
ssl.enabled.protocols
The list of protocols enabled for SSL connections.
The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise.
With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2).
This default should be fine for most cases.
Also see the config documentation for `ssl.protocol`.
Default: TLSv1.2,TLSv1.3
Valid Values:
Importance: medium
- Specified by:
sslEnabledProtocols in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslEnabledProtocols(List<String> value)
Description copied from interface: ProducerConfigFields
ssl.enabled.protocols
The list of protocols enabled for SSL connections.
The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise.
With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2).
This default should be fine for most cases.
Also see the config documentation for `ssl.protocol`.
Default: TLSv1.2,TLSv1.3
Valid Values:
Importance: medium
- Specified by:
sslEnabledProtocols in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
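The interplay between `ssl.protocol` and `ssl.enabled.protocols` described above can be illustrated with a small helper that pins a client to TLSv1.3 only (a sketch using plain `java.util.Properties`; with both keys set, the downgrade to TLSv1.2 described earlier cannot happen, and connections to TLSv1.2-only servers would fail instead):

```java
import java.util.Properties;

class TlsPinningExample {
    // Sketch: restrict the client to TLSv1.3, overriding both defaults above.
    static Properties tls13Only(Properties props) {
        props.setProperty("ssl.protocol", "TLSv1.3");          // SSLContext protocol
        props.setProperty("ssl.enabled.protocols", "TLSv1.3"); // no TLSv1.2 fallback
        return props;
    }
}
```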
-
sslKeystoreType
Description copied from interface: ProducerConfigFields
ssl.keystore.type
The file format of the key store file.
This is optional for the client.
The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].
Default: JKS
Valid Values:
Importance: medium
- Specified by:
sslKeystoreType in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslKeystoreLocation
Description copied from interface: ProducerConfigFields
ssl.keystore.location
The location of the key store file.
This is optional for the client and can be used for two-way client authentication.
Default: null
Valid Values:
Importance: high
- Specified by:
sslKeystoreLocation in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslKeystorePassword(String value)
Description copied from interface: ProducerConfigFields
ssl.keystore.password
The store password for the key store file.
This is optional for the client and only needed if 'ssl.keystore.location' is configured.
Key store password is not supported for PEM format.
Default: null
Valid Values:
Importance: high
- Specified by:
sslKeystorePassword in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslKeystorePassword(org.apache.kafka.common.config.types.Password value)
Description copied from interface: ProducerConfigFields
ssl.keystore.password
The store password for the key store file.
This is optional for the client and only needed if 'ssl.keystore.location' is configured.
Key store password is not supported for PEM format.
Default: null
Valid Values:
Importance: high
- Specified by:
sslKeystorePassword in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslKeyPassword(String value)
Description copied from interface: ProducerConfigFields
ssl.key.password
The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.
Default: null
Valid Values:
Importance: high
- Specified by:
sslKeyPassword in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslKeyPassword(org.apache.kafka.common.config.types.Password value)
Description copied from interface: ProducerConfigFields
ssl.key.password
The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.
Default: null
Valid Values:
Importance: high
- Specified by:
sslKeyPassword in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslKeystoreKey(String value)
Description copied from interface: ProducerConfigFields
ssl.keystore.key
Private key in the format specified by 'ssl.keystore.type'.
Default SSL engine factory supports only PEM format with PKCS#8 keys.
If the key is encrypted, the key password must be specified using 'ssl.key.password'.
Default: null
Valid Values:
Importance: high
- Specified by:
sslKeystoreKey in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslKeystoreKey(org.apache.kafka.common.config.types.Password value)
Description copied from interface: ProducerConfigFields
ssl.keystore.key
Private key in the format specified by 'ssl.keystore.type'.
Default SSL engine factory supports only PEM format with PKCS#8 keys.
If the key is encrypted, the key password must be specified using 'ssl.key.password'.
Default: null
Valid Values:
Importance: high
- Specified by:
sslKeystoreKey in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslKeystoreCertificateChain(String value)
Description copied from interface: ProducerConfigFields
ssl.keystore.certificate.chain
Certificate chain in the format specified by 'ssl.keystore.type'.
Default SSL engine factory supports only PEM format with a list of X.509 certificates.
Default: null
Valid Values:
Importance: high
- Specified by:
sslKeystoreCertificateChain in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslKeystoreCertificateChain(org.apache.kafka.common.config.types.Password value)
Description copied from interface: ProducerConfigFields
ssl.keystore.certificate.chain
Certificate chain in the format specified by 'ssl.keystore.type'.
Default SSL engine factory supports only PEM format with a list of X.509 certificates.
Default: null
Valid Values:
Importance: high
- Specified by:
sslKeystoreCertificateChain in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
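The keystore fields above combine, for PEM material, roughly as in the following sketch (the file path and password are hypothetical placeholders; per the field descriptions, the key/chain variants require the default SSL engine factory and PEM format):

```java
import java.util.Properties;

class PemKeystoreExample {
    // Sketch: client keystore configured from PEM files, per the fields above.
    static Properties pemKeystore(Properties props) {
        props.setProperty("ssl.keystore.type", "PEM");
        props.setProperty("ssl.keystore.location", "/etc/kafka/client.pem"); // placeholder path
        // Alternatively, embed the material directly instead of a file location:
        // props.setProperty("ssl.keystore.key", "-----BEGIN PRIVATE KEY----- ...");
        // props.setProperty("ssl.keystore.certificate.chain", "-----BEGIN CERTIFICATE----- ...");
        props.setProperty("ssl.key.password", "changeit"); // only if the PEM key is encrypted
        return props;
    }
}
```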
-
sslTruststoreCertificates(String value)
Description copied from interface: ProducerConfigFields
ssl.truststore.certificates
Trusted certificates in the format specified by 'ssl.truststore.type'.
Default SSL engine factory supports only PEM format with X.509 certificates.
Default: null
Valid Values:
Importance: high
- Specified by:
sslTruststoreCertificates in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslTruststoreCertificates(org.apache.kafka.common.config.types.Password value)
Description copied from interface: ProducerConfigFields
ssl.truststore.certificates
Trusted certificates in the format specified by 'ssl.truststore.type'.
Default SSL engine factory supports only PEM format with X.509 certificates.
Default: null
Valid Values:
Importance: high
- Specified by:
sslTruststoreCertificates in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslTruststoreType
Description copied from interface: ProducerConfigFields
ssl.truststore.type
The file format of the trust store file.
The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].
Default: JKS
Valid Values:
Importance: medium
- Specified by:
sslTruststoreType in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslTruststoreLocation
Description copied from interface: ProducerConfigFields
ssl.truststore.location
The location of the trust store file.
Default: null
Valid Values:
Importance: high
- Specified by:
sslTruststoreLocation in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslTruststorePassword(String value)
Description copied from interface: ProducerConfigFields
ssl.truststore.password
The password for the trust store file.
If a password is not set, the configured trust store file will still be used, but integrity checking is disabled.
Trust store password is not supported for PEM format.
Default: null
Valid Values:
Importance: high
- Specified by:
sslTruststorePassword in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslTruststorePassword(org.apache.kafka.common.config.types.Password value)
Description copied from interface: ProducerConfigFields
ssl.truststore.password
The password for the trust store file.
If a password is not set, the configured trust store file will still be used, but integrity checking is disabled.
Trust store password is not supported for PEM format.
Default: null
Valid Values:
Importance: high
- Specified by:
sslTruststorePassword in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslKeymanagerAlgorithm
Description copied from interface: ProducerConfigFields
ssl.keymanager.algorithm
The algorithm used by key manager factory for SSL connections.
Default value is the key manager factory algorithm configured for the Java Virtual Machine.
Default: SunX509
Valid Values:
Importance: low
- Specified by:
sslKeymanagerAlgorithm in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslTrustmanagerAlgorithm
Description copied from interface: ProducerConfigFields
ssl.trustmanager.algorithm
The algorithm used by trust manager factory for SSL connections.
Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
Default: PKIX
Valid Values:
Importance: low
- Specified by:
sslTrustmanagerAlgorithm in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslEndpointIdentificationAlgorithm
Description copied from interface: ProducerConfigFields
ssl.endpoint.identification.algorithm
The endpoint identification algorithm to validate server hostname using server certificate.
Default: https
Valid Values:
Importance: low
- Specified by:
sslEndpointIdentificationAlgorithm in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslSecureRandomImplementation
Description copied from interface: ProducerConfigFields
ssl.secure.random.implementation
The SecureRandom PRNG implementation to use for SSL cryptography operations.
Default: null
Valid Values:
Importance: low
- Specified by:
sslSecureRandomImplementation in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
sslEngineFactoryClass
Description copied from interface: ProducerConfigFields
ssl.engine.factory.class
The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects.
Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.
Default: null
Valid Values:
Importance: low
- Specified by:
sslEngineFactoryClass in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslKerberosServiceName
Description copied from interface: ProducerConfigFields
sasl.kerberos.service.name
The Kerberos principal name that Kafka runs as.
This can be defined either in Kafka's JAAS config or in Kafka's config.
Default: null
Valid Values:
Importance: medium
- Specified by:
saslKerberosServiceName in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslKerberosKinitCmd
Description copied from interface: ProducerConfigFields
sasl.kerberos.kinit.cmd
Kerberos kinit command path.
Default: /usr/bin/kinit
Valid Values:
Importance: low
- Specified by:
saslKerberosKinitCmd in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslKerberosTicketRenewWindowFactor
Description copied from interface: ProducerConfigFields
sasl.kerberos.ticket.renew.window.factor
Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.
Default: 0.8
Valid Values:
Importance: low
- Specified by:
saslKerberosTicketRenewWindowFactor in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslKerberosTicketRenewJitter
Description copied from interface: ProducerConfigFields
sasl.kerberos.ticket.renew.jitter
Percentage of random jitter added to the renewal time.
Default: 0.05
Valid Values:
Importance: low
- Specified by:
saslKerberosTicketRenewJitter in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslKerberosMinTimeBeforeRelogin
Description copied from interface: ProducerConfigFields
sasl.kerberos.min.time.before.relogin
Login thread sleep time between refresh attempts.
Default: 60000
Valid Values:
Importance: low
- Specified by:
saslKerberosMinTimeBeforeRelogin in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslLoginRefreshWindowFactor
Description copied from interface: ProducerConfigFields
sasl.login.refresh.window.factor
Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential.
Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified.
Currently applies only to OAUTHBEARER.
Default: 0.8
Valid Values: [0.5,...,1.0]
Importance: low
- Specified by:
saslLoginRefreshWindowFactor in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslLoginRefreshWindowJitter
Description copied from interface: ProducerConfigFields
sasl.login.refresh.window.jitter
The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time.
Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified.
Currently applies only to OAUTHBEARER.
Default: 0.05
Valid Values: [0.0,...,0.25]
Importance: low
- Specified by:
saslLoginRefreshWindowJitter in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslLoginRefreshMinPeriodSeconds
Description copied from interface: ProducerConfigFields
sasl.login.refresh.min.period.seconds
The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds.
Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified.
This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential.
Currently applies only to OAUTHBEARER.
Default: 60
Valid Values: [0,...,900]
Importance: low
- Specified by:
saslLoginRefreshMinPeriodSeconds in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslLoginRefreshBufferSeconds
Description copied from interface: ProducerConfigFields
sasl.login.refresh.buffer.seconds
The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds.
If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible.
Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified.
This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential.
Currently applies only to OAUTHBEARER.
Default: 300
Valid Values: [0,...,3600]
Importance: low
- Specified by:
saslLoginRefreshBufferSeconds in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
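To make the interaction of the refresh settings above concrete, here is a simplified worked calculation for a hypothetical one-hour OAUTHBEARER credential using the documented defaults. This is a sketch only: jitter is ignored, and the rule that the minimum period and buffer are both dropped when their sum exceeds the remaining lifetime is not modeled.

```java
class RefreshWindowExample {
    // Simplified sketch of when a login refresh would be attempted for a
    // credential of the given lifetime, using the defaults documented above.
    // All values are in seconds; jitter is ignored.
    static long refreshAtSeconds(long lifetimeSeconds) {
        double windowFactor = 0.8; // sasl.login.refresh.window.factor default
        long minPeriod = 60;       // sasl.login.refresh.min.period.seconds default
        long buffer = 300;         // sasl.login.refresh.buffer.seconds default
        long target = (long) (lifetimeSeconds * windowFactor); // 80% of lifetime
        // Wait at least the minimum period before refreshing...
        target = Math.max(target, minPeriod);
        // ...but keep at least `buffer` seconds before expiry.
        return Math.min(target, lifetimeSeconds - buffer);
    }
}
```

For a 3600-second credential this gives a refresh attempt at 2880 seconds (80% of the lifetime), well clear of the 300-second buffer before expiry.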
-
saslMechanism
Description copied from interface: ProducerConfigFields
sasl.mechanism
SASL mechanism used for client connections.
This may be any mechanism for which a security provider is available.
GSSAPI is the default mechanism.
Default: GSSAPI
Valid Values:
Importance: medium
- Specified by:
saslMechanism in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslJaasConfig(String value)
Description copied from interface: ProducerConfigFields
sasl.jaas.config
JAAS login context parameters for SASL connections in the format used by JAAS configuration files.
The JAAS configuration file format is described in the Java JAAS documentation.
The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;.
For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case.
For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;
Default: null
Valid Values:
Importance: medium
- Specified by:
saslJaasConfig in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslJaasConfig(org.apache.kafka.common.config.types.Password value)
Description copied from interface: ProducerConfigFields
sasl.jaas.config
JAAS login context parameters for SASL connections in the format used by JAAS configuration files.
The JAAS configuration file format is described in the Java JAAS documentation.
The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;.
For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case.
For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;
Default: null
Valid Values:
Importance: medium
- Specified by:
saslJaasConfig in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
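A sketch of assembling a value in the loginModuleClass controlFlag (optionName=optionValue)*; format described above, for a hypothetical SCRAM user (the username and password are placeholders; a real application should avoid embedding secrets in source and note that this sketch does no escaping of quotes in the inputs):

```java
class JaasConfigExample {
    // Sketch: build a sasl.jaas.config value in the documented format:
    //   loginModuleClass controlFlag (optionName=optionValue)*;
    static String scramJaas(String username, String password) {
        return String.format(
            "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"%s\" password=\"%s\";",
            username, password);
    }
}
```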
-
saslClientCallbackHandlerClass
Description copied from interface: ProducerConfigFields
sasl.client.callback.handler.class
The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.
Default: null
Valid Values:
Importance: medium
- Specified by:
saslClientCallbackHandlerClass in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslLoginCallbackHandlerClass
Description copied from interface: ProducerConfigFields
sasl.login.callback.handler.class
The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface.
For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case.
For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler
Default: null
Valid Values:
Importance: medium
- Specified by:
saslLoginCallbackHandlerClass in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslLoginClass
Description copied from interface: ProducerConfigFields
sasl.login.class
The fully qualified name of a class that implements the Login interface.
For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case.
For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin
Default: null
Valid Values:
Importance: medium
- Specified by:
saslLoginClass in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslLoginConnectTimeoutMs
Description copied from interface: ProducerConfigFields
sasl.login.connect.timeout.ms
The (optional) value in milliseconds for the external authentication provider connection timeout.
Currently applies only to OAUTHBEARER.
Default: null
Valid Values:
Importance: low
- Specified by:
saslLoginConnectTimeoutMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslLoginReadTimeoutMs
Description copied from interface: ProducerConfigFields
sasl.login.read.timeout.ms
The (optional) value in milliseconds for the external authentication provider read timeout.
Currently applies only to OAUTHBEARER.
Default: null
Valid Values:
Importance: low
- Specified by:
saslLoginReadTimeoutMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslLoginRetryBackoffMaxMs
Description copied from interface: ProducerConfigFields
sasl.login.retry.backoff.max.ms
The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider.
Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting.
Currently applies only to OAUTHBEARER.
Default: 10000 (10 seconds)
Valid Values:
Importance: low
- Specified by:
saslLoginRetryBackoffMaxMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslLoginRetryBackoffMs
Description copied from interface: ProducerConfigFields
sasl.login.retry.backoff.ms
The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider.
Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting.
Currently applies only to OAUTHBEARER.
Default: 100
Valid Values:
Importance: low
- Specified by:
saslLoginRetryBackoffMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerScopeClaimName
Description copied from interface: ProducerConfigFields
sasl.oauthbearer.scope.claim.name
The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
Default: scope
Valid Values:
Importance: low
- Specified by:
saslOauthbearerScopeClaimName in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerSubClaimName
Description copied from interface: ProducerConfigFields
sasl.oauthbearer.sub.claim.name
The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.
Default: sub
Valid Values:
Importance: low
- Specified by:
saslOauthbearerSubClaimName in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerTokenEndpointUrl
Description copied from interface: ProducerConfigFields
sasl.oauthbearer.token.endpoint.url
The URL for the OAuth/OIDC identity provider.
If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config.
If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.
Default: null
Valid Values:
Importance: medium
- Specified by:
saslOauthbearerTokenEndpointUrl in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerJwksEndpointUrl
Description copied from interface: ProducerConfigFields
sasl.oauthbearer.jwks.endpoint.url
The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved.
The URL can be HTTP(S)-based or file-based.
If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup.
All then-current keys will be cached on the broker for incoming requests.
If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand.
However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received.
If the URL is file-based, the broker will load the JWKS file from a configured location on startup.
In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.
Default: null
Valid Values:
Importance: medium
- Specified by:
saslOauthbearerJwksEndpointUrl in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerJwksEndpointRefreshMs
Description copied from interface: ProducerConfigFields
sasl.oauthbearer.jwks.endpoint.refresh.ms
The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.
Default: 3600000 (1 hour)
Valid Values:
Importance: low
- Specified by:
saslOauthbearerJwksEndpointRefreshMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerJwksEndpointRetryBackoffMaxMs
Description copied from interface: ProducerConfigFields
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms
The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider.
JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
Default: 10000 (10 seconds)
Valid Values:
Importance: low
- Specified by:
saslOauthbearerJwksEndpointRetryBackoffMaxMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerJwksEndpointRetryBackoffMs
Description copied from interface: ProducerConfigFields
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms
The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider.
JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.
Default: 100
Valid Values:
Importance: low
- Specified by:
saslOauthbearerJwksEndpointRetryBackoffMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerClockSkewSeconds
Description copied from interface: ProducerConfigFields
sasl.oauthbearer.clock.skew.seconds
The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.
Default: 30
Valid Values:
Importance: low
- Specified by:
saslOauthbearerClockSkewSeconds in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerExpectedAudience(String value)
Description copied from interface: ProducerConfigFields
sasl.oauthbearer.expected.audience
The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences.
The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match.
If there is no match, the broker will reject the JWT and authentication will fail.
Default: null
Valid Values:
Importance: low
- Specified by:
saslOauthbearerExpectedAudience in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerExpectedAudience(List<String> value)
Description copied from interface: ProducerConfigFields
sasl.oauthbearer.expected.audience
The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences.
The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match.
If there is no match, the broker will reject the JWT and authentication will fail.
Default: null
Valid Values:
Importance: low
- Specified by:
saslOauthbearerExpectedAudience in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
saslOauthbearerExpectedIssuer
Description copied from interface: ProducerConfigFields (sasl.oauthbearer.expected.issuer)
The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer.
The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim.
If there is no match, the broker will reject the JWT and authentication will fail.
Default: null
Valid Values:
Importance: low
- Specified by:
saslOauthbearerExpectedIssuer in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
enableIdempotence
Description copied from interface: ProducerConfigFields (enable.idempotence)
When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream.
If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream.
Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5 (with message ordering preserved for any allowable value), retries to be greater than 0, and acks to be 'all'.
Idempotence is enabled by default if no conflicting configurations are set.
If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.
If idempotence is explicitly enabled and conflicting configurations are set, a ConfigException is thrown.
Default: true
Valid Values:
Importance: low
- Specified by:
enableIdempotence in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
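A minimal sketch of a producer configuration that satisfies the idempotence constraints listed above. It uses the raw Kafka config keys in a plain java.util.Properties map rather than this builder, so it runs without any extra dependency; the helper name and the bootstrap address are illustrative.

```java
import java.util.Properties;

public class IdempotentProducerConfig {

    // Builds a config that honors the documented constraints for
    // enable.idempotence: acks must be 'all', retries must be > 0,
    // and max.in.flight.requests.per.connection must be <= 5.
    static Properties idempotentConfig(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("enable.idempotence", "true");
        props.put("acks", "all");                                  // required: 'all'
        props.put("retries", Integer.toString(Integer.MAX_VALUE)); // must be > 0
        props.put("max.in.flight.requests.per.connection", "5");   // must be <= 5
        return props;
    }

    public static void main(String[] args) {
        Properties p = idempotentConfig("localhost:9092"); // address is an example
        System.out.println(p.getProperty("acks"));
    }
}
```

Setting any of these three keys to a conflicting value while explicitly enabling idempotence would, per the text above, cause a ConfigException at producer construction time.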
transactionTimeoutMs
Description copied from interface: ProducerConfigFields (transaction.timeout.ms)
The maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction.
If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with an InvalidTxnTimeoutException error.
Default: 60000 (1 minute)
Valid Values:
Importance: low
- Specified by:
transactionTimeoutMs in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
transactionalId
Description copied from interface: ProducerConfigFields (transactional.id)
The TransactionalId to use for transactional delivery.
This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions.
If no TransactionalId is provided, then the producer is limited to idempotent delivery.
If a TransactionalId is configured, enable.idempotence is implied.
By default the TransactionId is not configured, which means transactions cannot be used.
Note that, by default, transactions require a cluster of at least three brokers, which is the recommended setting for production; for development you can change this by adjusting the broker setting transaction.state.log.replication.factor.
Default: null
Valid Values: non-empty string
Importance: low
- Specified by:
transactionalId in interface ProducerConfigFields<T extends org.swisspush.kobuka.client.base.AbstractProducerConfigBuilder<T>>
-
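The transactional.id and transaction.timeout.ms settings described above can be sketched together. As with the previous example, this uses raw Kafka config keys in a Properties map instead of the builder so it stays dependency-free; the helper name, transactional id, and bootstrap address are illustrative.

```java
import java.util.Properties;

public class TransactionalProducerConfig {

    // Builds a config for transactional delivery. Per the docs above,
    // transactional.id must be a non-empty string and setting it
    // implies enable.idempotence=true.
    static Properties transactionalConfig(String bootstrapServers, String txId) {
        if (txId == null || txId.isEmpty()) {
            throw new IllegalArgumentException("transactional.id must be a non-empty string");
        }
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("transactional.id", txId);
        props.put("transaction.timeout.ms", "60000"); // documented default (1 minute)
        return props;
    }

    public static void main(String[] args) {
        // Names and address are examples only.
        Properties p = transactionalConfig("localhost:9092", "orders-tx-1");
        System.out.println(p.getProperty("transactional.id"));
    }
}
```

Two producer sessions using the same transactional.id fence each other: the coordinator guarantees that transactions from the earlier session are completed before the new session starts, which is the cross-session reliability the text describes.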