
Data Sources Configuration

This article describes the properties used to configure data sources in Data Processing Engine (DPE). To modify the default configuration in on-premises deployments, add or edit properties in the dpe/etc/application.properties file.

JDBC data source plugin default settings

These properties define default JDBC connection configuration for all data sources. You can also configure each driver separately, in which case driver-specific properties override the values provided here for that particular driver.

Each property follows the pattern plugin.jdbcdatasource.ataccama.one.connections.{propertyName}.
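
For example, to enable pooling and adjust pool limits for all data sources, the application.properties file could contain entries such as the following (the values are illustrative, not recommendations):

plugin.jdbcdatasource.ataccama.one.connections.pooling-enabled = true
plugin.jdbcdatasource.ataccama.one.connections.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.connections.idle-timeout = 300000
plugin.jdbcdatasource.ataccama.one.row-count-timeout = 119m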

Property Data type Description

plugin.jdbcdatasource.ataccama.one.connections.pooling-enabled

Boolean

If set to true, connection pooling is applied. If the property is not set, the default value is false.

If you use data source throttling by specifying the DPM ataccama.one.dpm.resource-allocation.connections.<name>. properties, this property needs to be set to false for all connected DPEs.

plugin.jdbcdatasource.ataccama.one.connections.connection-timeout

Number

Defines how long the pool manager (or, if connection pooling is not enabled, the data source) lets the client’s connection request wait before a timeout exception is thrown.

This typically happens when all available connections are already in use and no new connections can be added due to other limits. Expressed in milliseconds.

The default value is 20000 when connection pooling is not enabled or 30000 when connection pooling is applied.

plugin.jdbcdatasource.ataccama.one.connections.idle-timeout

Number

Specifies for how long a connection can remain idle in the connection pool before it is closed. Applicable if connection pooling is enabled. Expressed in milliseconds.

Default value: 300000.

plugin.jdbcdatasource.ataccama.one.connections.max-lifetime

Number

Determines for how long a connection can remain in the pool. The connections that are currently in use are not closed.

Expressed in milliseconds. Applicable if connection pooling is enabled. The period should be several seconds shorter than the corresponding limit set on the database side.

Default value: 900000.

plugin.jdbcdatasource.ataccama.one.connections.minimum-idle

Number

The minimum number of idle connections in the pool. Applicable if connection pooling is enabled.

If the property is not set, the value corresponds to the value of the property maximum-pool-size.

Default value: 1.

plugin.jdbcdatasource.ataccama.one.connections.maximum-pool-size

Number

The maximum number of connections in the connection pool. This includes both active and idle connections.

Applicable if connection pooling is enabled. When the maximum number of connections is reached, further connection requests are blocked.

If the property is not set, the number of allowed connections is 10.

Default value: 5.

plugin.jdbcdatasource.ataccama.one.row-count-timeout

String

Specifies the maximum waiting time for the row count operation during preprocessing.

Default value: 119m.

JDBC data source driver settings

The following properties need to be set when configuring a JDBC data source plugin. You can use these properties as a template for adding custom data sources.

The property names follow this pattern: plugin.jdbcdatasource.ataccama.one.driver.{driverId}.{propertyName}. The identifier of the driver (driverId) needs to be unique and should match the identifier of the database, for example, postgresql.
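
As an illustration, a minimal registration for a driver with the identifier mariadb might combine the properties described below. The connection pattern and JAR file name are assumptions for this sketch, not a tested configuration:

plugin.jdbcdatasource.ataccama.one.driver.mariadb.name = MariaDB
plugin.jdbcdatasource.ataccama.one.driver.mariadb.connection-pattern = jdbc:mariadb://<hostname>:<port>/<database>
plugin.jdbcdatasource.ataccama.one.driver.mariadb.driver-class-path = mariadb-java-client-*.jar
plugin.jdbcdatasource.ataccama.one.driver.mariadb.driver-class = org.mariadb.jdbc.Driver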

It is also possible to add custom JDBC properties. To do this, use the pattern plugin.jdbcdatasource.ataccama.one.driver.{driverId}.properties.{propertyName} and replace all placeholder values.

For example, if you are working with an Oracle database with the default configuration and want to define how many rows are prefetched, the following property should be provided:

plugin.jdbcdatasource.ataccama.one.driver.oracle.properties.oracle.jdbc.defaultRowPrefetch

Or, if you need to log in as a system user in an Oracle database, the following property should be added:

plugin.jdbcdatasource.ataccama.one.driver.oracle.properties.oracle.jdbc.internal_logon=sysdba

Examples of configuration with default values are provided in the following sections for a number of data sources. The H2 data source should only be used for testing purposes.

The connection pooling properties listed here are driver-specific counterparts of the connection pooling properties defined for all drivers. The <driverId>.driver-class property must be added if multiple drivers are found on the driver’s classpath (<driverId>.driver-class-path).

Property Data type Description

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.name

String

The name of the data source that is displayed in ONE Web Application.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.connection-pattern

String

Refers to the pattern of a valid JDBC connection string. This serves as a template for users and should use placeholders to indicate what users need to modify.

For example, for PostgreSQL, the value is jdbc:postgresql://<hostname>:<port>/<database>.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.schema-regex

String

A regular expression whose first capture group extracts the schema name from the connection string.

This is especially useful for data sources where all schemas are listed regardless of the database specified in the connection string, such as Teradata (jdbc:teradata://.*database=([a-zA-Z0-9_]*).*) or Cassandra (jdbc:cassandra://.*?/([a-zA-Z0-9_]*).*). The database name is then case-sensitive.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.driver-class-path

String

The classpath of the driver, for example, postgresql-*.jar.

To select multiple files, use a semicolon (;) as a separator.
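
For example, a driver split across a main JAR and a separate license JAR (the driver identifier and file names are hypothetical) could be configured as:

plugin.jdbcdatasource.ataccama.one.driver.db2.driver-class-path = db2jcc4-*.jar;db2jcc_license-*.jar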

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.additional-classpath-files

String

A path for additional files that are loaded to the classpath of this driver. Suitable for adding additional libraries or license files.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.driver-class

String

The driver’s class name. Must be added in the following cases:

  • If there are multiple drivers found in the driver’s classpath (<driverId>.driver-class-path).

  • If you are adding a non-default JDBC driver. In that case, you need to omit the provider-class property as well.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.provider-class

String

The customized implementation class of the driver’s provider. If the property is not provided, the default provider is used. Omitted if configuring a custom JDBC driver.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.pooling-enabled

Boolean

If set to true, connection pooling is applied.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.connection-timeout

Number

Defines how long the pool manager (or, if connection pooling is not enabled, the data source) lets the client’s connection request wait before a timeout exception is thrown.

This typically happens when all available connections are already in use and no new connections can be added due to other limits. Expressed in milliseconds.

The default value is 20000 when connection pooling is not enabled or 30000 when connection pooling is applied.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.idle-timeout

Number

Specifies for how long a connection can remain idle in the connection pool before it is closed. Applicable if connection pooling is enabled. Expressed in milliseconds.

Default value: 300000.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.max-lifetime

Number

Determines for how long a connection can remain in the pool. The connections that are currently in use are not closed.

Expressed in milliseconds. Applicable if connection pooling is enabled. The period should be several seconds shorter than the corresponding limit set on the database side.

Default value: 900000.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.minimum-idle

Number

The minimum number of idle connections in the pool. Applicable if connection pooling is enabled.

If the property is not set, the value corresponds to the value of the property maximum-pool-size.

Default value: 1.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.maximum-pool-size

Number

The maximum number of connections in the connection pool. This includes both active and idle connections.

Applicable if connection pooling is enabled. When the maximum number of connections is reached, further connection requests are blocked.

If the property is not set, the number of allowed connections is 10.

Default value: 5.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.full-select-query-pattern

String

A query pattern for displaying all records, for example, SELECT {columns} FROM {table}.

Allowed placeholders: {columns}, {table}.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.preview-query-pattern

String

A query pattern for displaying a preview of records, for example, SELECT {columns} FROM {table} WHERE ROWNUM <= {previewLimit}. Allowed placeholders: {columns}, {table}, {previewLimit}.

If the property is not set, the pattern SELECT {columns} FROM {table} is used instead.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.dsl-query-preview-query-pattern

String

A DSL query pattern for loading data source preview. The property is mainly used for optimization purposes.

By default, the pattern is as follows: SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}. Allowed placeholders: {dslQuery}, {previewLimit}.

For exact configuration details, refer to the corresponding section for your data source type on this page.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.dsl-query-import-metadata-query-pattern

String

A DSL query pattern for importing metadata. The property is mainly used for optimization purposes.

By default, the pattern is as follows: SELECT * FROM ({dslQuery}) dslQuery LIMIT 0 or SELECT * FROM ({dslQuery}) dslQuery WHERE 1=0 (for example, for Teradata). Allowed placeholders: {dslQuery}.

For exact configuration details, refer to the corresponding section for your data source type on this page.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.row-count-query-pattern

String

A query pattern for counting the number of rows in a catalog item, for example, SELECT NUM_ROWS FROM ALL_TABLES WHERE TABLE_NAME = {table}. Allowed placeholder: {table}.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.row-number-column

String

A row number column expression used when processing relational databases in a Spark environment.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.sampling-query-pattern

String

A query pattern for retrieving a sample of records from a catalog item, for example, SELECT {columns} FROM {table} SAMPLE ({percentageLimit}).

Allowed placeholders: {table}, {columns}, {limit}, {percentageLimit}.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.query-quotation-mark

String

The punctuation mark used to delimit identifiers in queries, typically a backtick (`) or a quotation mark ("), which needs to be escaped.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.disallowed-indexes-table-types

String

Restricts which table types can be queried in the data source.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.bulk-import-table-count-threshold

Number

If set, metadata is loaded in bulk for all tables available in a schema or database provided that the number of tables selected for import exceeds the threshold specified.

Otherwise, the bulk import strategy is applied only when users attempt to load the whole schema (or, in data sources that do not use schemas, the whole database). In other words, if the property is omitted or not supported for the given data source, tables are processed and queried one by one.

The property is particularly useful when working with slow databases. For example, if querying metadata for a single table takes two minutes and for the full schema five minutes, setting the threshold to 5 should significantly reduce waiting time for users when importing more than five tables.

In the current version, the property can be used for the following data sources: MSSQL, Oracle, PostgreSQL, Amazon Aurora PostgreSQL.
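
Following the scenario described above, the threshold could be set as follows (the driver identifier is shown for illustration only):

plugin.jdbcdatasource.ataccama.one.driver.postgresql.bulk-import-table-count-threshold = 5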

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.disabled

Boolean

Disables the JDBC driver. To do so, set the property to true. For example:

plugin.jdbcdatasource.ataccama.one.driver.mysql.disabled=true
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.disabled=true
plugin.jdbcdatasource.ataccama.one.driver.redshift.disabled=true
plugin.jdbcdatasource.ataccama.one.driver.bq.disabled=true

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.location-query

String

Customizes the query to determine the location root.

The default value is data source dependent.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.schema-exclude-pattern

String

If set, entities whose name matches this pattern are excluded from the job result.

This property is not used for the following data sources, which do not support schemas: MySQL, Amazon Aurora MySQL, MariaDB. Specifying any value here for these data sources results in an empty job result.

Make sure that correct regular expression syntax is used (see Pattern (Java Platform SE 7)). Otherwise, DPE fails to run properly.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.schema-include-pattern

String

If set, only entities whose name matches this pattern are included in the job result.

This property is not used for the following data sources, which do not support schemas: MySQL, Amazon Aurora MySQL, MariaDB. Specifying any value here for these data sources results in an empty job result.

Make sure that correct regular expression syntax is used (see Pattern (Java Platform SE 7)). Otherwise, DPE fails to run properly.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.table-exclude-pattern

String

If set, entities whose name matches this pattern are excluded from the job result.

Make sure that correct regular expression syntax is used (see Pattern (Java Platform SE 7)). Otherwise, DPE fails to run properly.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.table-include-pattern

String

If set, only entities whose name matches this pattern are included in the job result.

Make sure that correct regular expression syntax is used (see Pattern (Java Platform SE 7)). Otherwise, DPE fails to run properly.
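
For instance, to import only schemas whose names start with sales_ while skipping audit tables, you could combine the patterns as follows (the driver identifier and regular expressions are illustrative):

plugin.jdbcdatasource.ataccama.one.driver.postgresql.schema-include-pattern = sales_.*
plugin.jdbcdatasource.ataccama.one.driver.postgresql.table-exclude-pattern = .*_audit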

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.processing-properties.provide-file-driver-property-as-url

Boolean

If set to true, the driver properties are specified as URL parameters. Used for JDBC drivers that do not accept file paths. Applies starting from version 13.9.1.

Default value: false.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.parent-resource-last

JDBC data export

The following properties are used for enabling data export on all data sources. For more information about the data export feature, see Data Export.
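
Because exporting is disabled by default, enabling it for a particular driver starts with a single property; the remaining data-export properties then control how the DDL statements are generated:

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.disabled = false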

Property Data type Description

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.disabled

Boolean

If set to true, exporting data is not possible on the data source.

Default value: true. Set to false to allow data export.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.create-table-pattern

String

Used for creating a new table.

The default value is CREATE TABLE {table} ({columns}), where {table} is replaced by the full table name, and {columns} is replaced by create-table-column-pattern for each requested column, separated by fragment-separator.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.create-table-column-pattern

String

Used for creating a new column.

The default value is {column} {type}, where {column} is replaced by the column name, and {type} is replaced by the database type specification, which contains other properties from this list.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.create-table-primary-key-strategy

String

Used for creating a new column as the primary key. To apply this, specify create-table-primary-key-strategy=first_column.

Default value: none.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.fragment-separator

String

Specifies the separator for columns. Example value: ,\n.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.alter-table-pattern

String

Used for editing existing tables.

The default value is ALTER TABLE {table} {commands}, where {table} is replaced by the full table name, and {commands} is replaced by add-column-pattern, alter-column-pattern, drop-column-pattern, or make-column-nullable-pattern, based on the required action.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.alter-multiple-columns

Boolean

(Optional) Some databases support multiple column changes in a single alter table command.

Default value: false.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.add-column-pattern

String

Used for adding a new column to an existing table.

The default value is ADD {column} {type}, where {column} is replaced by the column name, and {type} is replaced by the type specification.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.incompatible-column-type-strategy

String

This property determines what happens when an existing column type is incompatible with the requested type. The default strategy, drop_create, first drops the column and then creates a column of the same name with the correct type.

Default value: drop_create.

Some databases don’t allow adding a column whose name was used previously, even after the column was deleted. In this case, you can specify incompatible-column-type-strategy=fail, which causes the provisioning to fail at runtime.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.allow-boolean-as-number

Boolean

(Optional) Some databases do not support the Boolean data type natively. In that case, you can set allow-boolean-as-number=true, which causes an integer type in the source column to be considered compatible.

Default value: false.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.alter-column-pattern

String

Used for editing a column in an existing table.

The default value is ALTER COLUMN {column} {type}, where {column} is replaced by the column name, and {type} is replaced by the type specification.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.drop-column-pattern

String

Used for deleting columns.

Default value: DROP COLUMN {column}.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.not-null-superfluous-column-strategy

String

In cases when there is a redundant existing column which is also marked as NOT NULL, you can decide which command to use to make it nullable.

Default value: make_nullable.

Since some databases don’t allow making existing NOT NULL columns nullable, you can specify not-null-superfluous-column-strategy=drop, in which case drop-column-pattern is used and make-column-nullable-pattern is ignored (and can be omitted). You can also use not-null-superfluous-column-strategy=fail, in which case the provisioning fails at runtime.
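
For example, on a database that cannot relax NOT NULL constraints, the strategy could be configured as follows (the driver identifier is illustrative):

plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.not-null-superfluous-column-strategy = drop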

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.make-column-nullable-pattern

String

Used to make a column nullable.

Default value: ALTER COLUMN {column} {type} NULL.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.delete-table-pattern

String

Used for deleting the contents of a table.

Default value: DELETE FROM {table}.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.type-mappings.<sourceType>

String

Specifies the mapping of an export type of an attribute for the create-table or alter-table properties. Replace <sourceType> with one of the following, depending on the data type in which you want to export the column:

  • BOOLEAN

  • DATE

  • DATETIME

  • FLOAT

  • INTEGER

  • LONG

The value of the property is the target type or specifier.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.type-mappings.STRING

String

Specifies the mapping of an attribute size for the create-table or alter-table properties when the actual size of the column is not known but the database requires it. In that case, you can specify a default value using the {columnSize=some_constant} syntax.

Example value: varchar({columnSize = 128}).

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.type-mappings.STRING.lob

String

Specifies the mapping of an attribute size for create-table or alter-table properties if the actual size of the column is not known but needs to be specified in a particular way depending on the condition.

You can specify the condition using the property data-export.type-mappings.STRING.lob.when.

Example value: varchar(max).

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.type-mappings.STRING.lob.when

String

The column size specifier condition. It defines when the type-mappings.STRING.lob mapping is applied.

Example value: columnSize > 2000.

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.use-implicit-transaction-for-ddl

Boolean

If set to true, implicit transactions for data definition language (DDL) operations can be used on databases that do not support explicit transactions.

Default value: false.

For Azure Synapse, the default value of this property is set to true due to its unique transaction handling approach.

When it comes to type mappings, you can also use different target types for the same source type based on some conditions. For any conditional mapping, specify two more properties with the following general syntax, where subType is any descriptive name that is used to bind these properties together:

plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.type-mappings.<Type>.<subType> = target type
plugin.jdbcdatasource.ataccama.one.driver.<driverId>.data-export.type-mappings.<Type>.<subType>.when = condition expression

For example, suppose we want to use the VARCHAR2 type for STRING columns of size <= 2000 and the CLOB type for larger sizes. In this case, the default mapping is as follows:

data-export.type-mappings.STRING = VARCHAR2({columnSize})

While the conditional mapping is as follows:

data-export.type-mappings.STRING.lob = CLOB
data-export.type-mappings.STRING.lob.when = columnSize > 2000

Shared settings

The following property is common for all configured JDBC drivers.

Property Data type Description

plugin.jdbcdatasource.ataccama.one.data-source.allowed-table-types

String

Determines which table types are shown in data sources. Applies to all available data sources.

If no value is provided or the property is removed or commented out, all table types are displayed (not recommended).

When the log level for DPE is set to DEBUG, logs contain the current configuration (allowed array) as well as all available table types (types) for each data source.

For example, for S4HANA, these values would be:

S4HANA table types
Apr 29 11:28:02 app start.sh[507040]: 2021-04-29 11:28:02,994 DEBUG [a372f0][ImpersonatedUserIdentity(id=b312e35e-cecb-483a-a31d-6b7f0e57b18e, roles=[MMM_admin, admin, MMM_user, default, DPP_admin, CS_admin], serviceIdentity=ServiceIdentity(module=dpm, id=dpm-prod, roles=[IMPERSONATION]))][grpc-default-executor-1] c.a.d.p.d.j.s.DefaultDatabaseDataSourceExplorer:102 - eventId=loadTableTypes types=[CALC VIEW, GLOBAL TEMPORARY, HIERARCHY VIEW, JOIN VIEW, NO LOGGING TEMPORARY, OLAP VIEW, SYNONYM, SYSTEM TABLE, TABLE, USER DEFINED, VIEW], allowed=[TABLE, VIEW, CALC VIEW]

For PostgreSQL, the default settings include the following table types:

PostgreSQL table types
Apr 29 11:27:43 app start.sh[507040]: 2021-04-29 11:27:43,019 DEBUG [e6c813][ImpersonatedUserIdentity(id=b312e35e-cecb-483a-a31d-6b7f0e57b18e, ... types=[FOREIGN TABLE, INDEX, MATERIALIZED VIEW, PARTITIONED TABLE, SEQUENCE, SYSTEM INDEX, SYSTEM TABLE, SYSTEM TOAST INDEX, SYSTEM TOAST TABLE, SYSTEM VIEW, TABLE, TEMPORARY INDEX, TEMPORARY SEQUENCE, TEMPORARY TABLE, TEMPORARY VIEW, TYPE, VIEW], allowed=[TABLE, SYSTEM VIEW, VIEW]

Default value: TABLE, VIEW.
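
For example, to additionally expose materialized views (using the PostgreSQL type name shown in the log above), you could set:

plugin.jdbcdatasource.ataccama.one.data-source.allowed-table-types = TABLE,VIEW,MATERIALIZED VIEW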

Oracle configuration

The following properties are used only for Oracle databases.

Property Data type Description

plugin.jdbcdatasource.ataccama.one.driver.oracle.disallowed-indexes-schema-names

String

Restricts which schemas can be queried in the data source.

plugin.jdbcdatasource.ataccama.one.driver.oracle.skip-oracle-maintained-tables

Boolean

If set to true, tables with the ORACLE_MAINTAINED='Y' flag are excluded from the job.

Default value: true.

Oracle JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.oracle.name = Oracle
plugin.jdbcdatasource.ataccama.one.driver.oracle.connection-pattern = jdbc:oracle:thin:@<hostname>:<port>:<sid>
plugin.jdbcdatasource.ataccama.one.driver.oracle.driver-class-path = ojdbc*.jar
plugin.jdbcdatasource.ataccama.one.driver.oracle.additional-classpath-files = oracleLibs/*;
plugin.jdbcdatasource.ataccama.one.driver.oracle.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.oracle.OracleDataSourceClientProvider
#plugin.jdbcdatasource.ataccama.one.driver.oracle.pooling-enabled = true
#plugin.jdbcdatasource.ataccama.one.driver.oracle.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.oracle.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.oracle.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.oracle.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.oracle.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.oracle.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.oracle.preview-query-pattern = SELECT {columns} FROM {table} WHERE ROWNUM <= {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.oracle.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) WHERE ROWNUM <= {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.oracle.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) WHERE ROWNUM = 0
plugin.jdbcdatasource.ataccama.one.driver.oracle.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.oracle.sampling-query-pattern = SELECT {columns} FROM {table} SAMPLE ({percentageLimit})
plugin.jdbcdatasource.ataccama.one.driver.oracle.disallowed-indexes-table-types = SYNONYM,VIEW
plugin.jdbcdatasource.ataccama.one.driver.oracle.disallowed-indexes-schema-names = SYS
plugin.jdbcdatasource.ataccama.one.driver.oracle.bulk-import-table-count-threshold = 5
plugin.jdbcdatasource.ataccama.one.driver.oracle.location-query = select nvl(nullif(service_name, 'SYS$USERS'), nvl(instance_name, sid))\
 from (SELECT distinct sys_context('userenv', 'service_name') service_name, sys_context('userenv', 'instance_name') instance_name, sys_context('USERENV', 'SID') sid FROM DUAL)
# Example of custom driver property (prefixed by plugin.jdbcdatasource.ataccama.one.driver.oracle.properties.):
#plugin.jdbcdatasource.ataccama.one.driver.oracle.properties.oracle.jdbc.defaultRowPrefetch = 1000
plugin.jdbcdatasource.ataccama.one.driver.oracle.row-number-column = ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) as ____rno
#plugin.jdbcdatasource.ataccama.one.driver.oracle.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.oracle.skip-oracle-maintained-tables = true
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.add-column-pattern = ADD {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.alter-column-pattern = MODIFY {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.drop-column-pattern = DROP COLUMN {column} CASCADE CONSTRAINTS
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.make-column-nullable-pattern = MODIFY {column} NULL
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.allow-boolean-as-number = true
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.DATE = DATE
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.INTEGER = NUMBER(10)
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.BOOLEAN = NUMBER(1)
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.LONG = NUMBER(20)
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.DATETIME = TIMESTAMP
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.FLOAT = NUMBER({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.FLOAT.real = BINARY_FLOAT
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.FLOAT.real.when = jdbcColumnTypeNumber == 100
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.FLOAT.double = BINARY_DOUBLE
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.FLOAT.double.when = jdbcColumnTypeNumber == 101
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.STRING = VARCHAR2({columnSize = 128} CHAR)
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.STRING.lob = CLOB
plugin.jdbcdatasource.ataccama.one.driver.oracle.data-export.type-mappings.STRING.lob.when = columnSize > 2000

PostgreSQL configuration

PostgreSQL JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.postgresql.name = PostgreSQL
plugin.jdbcdatasource.ataccama.one.driver.postgresql.supports-analytical-queries = true
plugin.jdbcdatasource.ataccama.one.driver.postgresql.connection-pattern = jdbc:postgresql://<hostname>:<port>/<database>
plugin.jdbcdatasource.ataccama.one.driver.postgresql.driver-class-path = postgresql-*.jar
plugin.jdbcdatasource.ataccama.one.driver.postgresql.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.postgresql.PostgreSQLDataSourceClientProvider
#plugin.jdbcdatasource.ataccama.one.driver.postgresql.pooling-enabled = true
#plugin.jdbcdatasource.ataccama.one.driver.postgresql.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.postgresql.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.postgresql.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.postgresql.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.postgresql.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.postgresql.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.postgresql.preview-query-pattern = SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.postgresql.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.postgresql.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.postgresql.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.postgresql.sampling-query-pattern = SELECT {columns} FROM {table} WHERE RANDOM() < {percentageLimit} limit {limit}
#plugin.jdbcdatasource.ataccama.one.driver.postgresql.sampling-query-pattern = SELECT {columns} FROM {table} TABLESAMPLE SYSTEM ({percentageLimit}) -- for Postgresql version >= 9.5
plugin.jdbcdatasource.ataccama.one.driver.postgresql.query-quotation-mark = \"
plugin.jdbcdatasource.ataccama.one.driver.postgresql.disallowed-indexes-table-types = SYNONYM
plugin.jdbcdatasource.ataccama.one.driver.postgresql.bulk-import-table-count-threshold = 30
# This driver supports custom properties, which can be set through:
#plugin.jdbcdatasource.ataccama.one.driver.postgresql.properties.NAME_OF_THE_CUSTOM_PROPERTY = CUSTOM_PROPERTY_VALUE
#plugin.jdbcdatasource.ataccama.one.driver.postgresql.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.alter-multiple-columns = true
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.add-column-pattern = ADD COLUMN {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.alter-column-pattern = ALTER COLUMN {column} TYPE {type}
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.drop-column-pattern = DROP COLUMN {column} CASCADE
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.make-column-nullable-pattern = ALTER COLUMN {column} DROP NOT NULL
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.DATE = date
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.INTEGER = integer
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.BOOLEAN = boolean
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.LONG = bigint
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.DATETIME = timestamp
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.FLOAT = numeric({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.FLOAT.real = real
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.FLOAT.real.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT"
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.FLOAT.double = double precision
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.STRING = varchar({columnSize = 128})
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.STRING.lob = text
plugin.jdbcdatasource.ataccama.one.driver.postgresql.data-export.type-mappings.STRING.lob.when = columnSize > 2000
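
The commented pooling properties near the top of the block can be uncommented to override the plugin-wide `plugin.jdbcdatasource.ataccama.one.connections.*` defaults for this driver only. A hypothetical sketch (the pool size and timeout values below are illustrative, not product defaults):

```properties
# Hypothetical example: enable connection pooling for the PostgreSQL driver only.
# Driver-specific values take precedence over the plugin-wide connection defaults.
plugin.jdbcdatasource.ataccama.one.driver.postgresql.pooling-enabled = true
plugin.jdbcdatasource.ataccama.one.driver.postgresql.maximum-pool-size = 10
plugin.jdbcdatasource.ataccama.one.driver.postgresql.idle-timeout = 600000
```

Remember that pooling must stay disabled on all connected DPEs if data source throttling is configured through the DPM `ataccama.one.dpm.resource-allocation.connections.<name>` properties.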

Amazon Aurora PostgreSQL configuration

Amazon Aurora PostgreSQL JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.name = Amazon Aurora PostgreSQL
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.supports-analytical-queries = true
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.connection-pattern = jdbc:postgresql://<hostname>:<port>/<database>
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.driver-class-path = postgresql-*.jar
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.postgresql.PostgreSQLDataSourceClientProvider
#plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.pooling-enabled = true
#plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.preview-query-pattern = SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.sampling-query-pattern = SELECT {columns} FROM {table} WHERE RANDOM() < {percentageLimit} LIMIT {limit}
#plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.sampling-query-pattern = SELECT {columns} FROM {table} TABLESAMPLE SYSTEM ({percentageLimit}) -- for PostgreSQL version >= 9.5
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.query-quotation-mark = \"
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.disallowed-indexes-table-types = SYNONYM
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.bulk-import-table-count-threshold = 30
# This driver supports custom properties, which can be set through:
#plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.properties.NAME_OF_THE_CUSTOM_PROPERTY = CUSTOM_PROPERTY_VALUE
#plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.alter-multiple-columns = true
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.add-column-pattern = ADD COLUMN {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.alter-column-pattern = ALTER COLUMN {column} TYPE {type}
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.drop-column-pattern = DROP COLUMN {column} CASCADE
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.make-column-nullable-pattern = ALTER COLUMN {column} DROP NOT NULL
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.DATE = date
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.INTEGER = integer
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.BOOLEAN = boolean
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.LONG = bigint
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.DATETIME = timestamp
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.FLOAT = numeric({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.FLOAT.real = real
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.FLOAT.real.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT"
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.FLOAT.double = double precision
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.STRING = varchar({columnSize = 128})
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.STRING.lob = text
plugin.jdbcdatasource.ataccama.one.driver.aurora-postgresql.data-export.type-mappings.STRING.lob.when = columnSize > 2000

MySQL configuration

MySQL JDBC properties
# Disabled by default, change to disabled = false to enable driver configuration
plugin.jdbcdatasource.ataccama.one.driver.mysql.disabled = true
plugin.jdbcdatasource.ataccama.one.driver.mysql.name = MySQL
plugin.jdbcdatasource.ataccama.one.driver.mysql.connection-pattern = jdbc:mysql://<hostname>:<port>/<database>
plugin.jdbcdatasource.ataccama.one.driver.mysql.driver-class-path = mysql-connector-j-8*.jar
plugin.jdbcdatasource.ataccama.one.driver.mysql.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.mysql.MySQLDataSourceClientProvider
#plugin.jdbcdatasource.ataccama.one.driver.mysql.pooling-enabled = true
#plugin.jdbcdatasource.ataccama.one.driver.mysql.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.mysql.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.mysql.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.mysql.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.mysql.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.mysql.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.mysql.preview-query-pattern = SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.mysql.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.mysql.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.mysql.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.mysql.sampling-query-pattern = SELECT {columns} FROM {table} WHERE RAND() < {percentageLimit};
plugin.jdbcdatasource.ataccama.one.driver.mysql.query-quotation-mark = `
plugin.jdbcdatasource.ataccama.one.driver.mysql.disallowed-indexes-table-types = SYNONYM
plugin.jdbcdatasource.ataccama.one.driver.mysql.properties.characterEncoding = utf-8
#plugin.jdbcdatasource.ataccama.one.driver.mysql.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.add-column-pattern = ADD {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.alter-column-pattern = MODIFY {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.drop-column-pattern = DROP COLUMN {column} CASCADE
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.make-column-nullable-pattern = MODIFY {column} {type} NULL
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.DATE = DATE
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.INTEGER = INT
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.BOOLEAN = BOOL
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.LONG = BIGINT
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.DATETIME = DATETIME
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.FLOAT = DECIMAL({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.FLOAT.real = FLOAT
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.FLOAT.real.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT"
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.FLOAT.double = DOUBLE
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.STRING = VARCHAR({columnSize = 128})
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.STRING.mediumtext.when = columnSize > 2000 && columnSize <= 16777215
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.STRING.mediumtext = MEDIUMTEXT
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.STRING.longtext.when = columnSize > 16777215
plugin.jdbcdatasource.ataccama.one.driver.mysql.data-export.type-mappings.STRING.longtext = LONGTEXT
plugin.jdbcdatasource.ataccama.one.driver.mysql.processing-properties.provide-file-driver-property-as-url = true
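
As the `properties.characterEncoding` line above shows, each `plugin.jdbcdatasource.ataccama.one.driver.mysql.properties.<name>` entry is passed to the driver as a JDBC connection property. A hypothetical example forwarding two standard MySQL Connector/J settings (the specific values are illustrative):

```properties
# Hypothetical example: forward additional Connector/J connection properties.
plugin.jdbcdatasource.ataccama.one.driver.mysql.properties.connectTimeout = 10000
plugin.jdbcdatasource.ataccama.one.driver.mysql.properties.sslMode = REQUIRED
```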

Amazon Aurora MySQL configuration

Amazon Aurora MySQL JDBC properties
# Disabled by default, change to disabled = false to enable driver configuration
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.disabled = true
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.name = Amazon Aurora MySQL
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.connection-pattern = jdbc:mysql://<hostname>:<port>/<database>
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.driver-class-path = mysql-connector-j-8*.jar
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.mysql.MySQLDataSourceClientProvider
#plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.pooling-enabled = true
#plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.preview-query-pattern = SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.sampling-query-pattern = SELECT {columns} FROM {table} WHERE RAND() < {percentageLimit};
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.query-quotation-mark = `
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.disallowed-indexes-table-types = SYNONYM
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.properties.characterEncoding = utf-8
#plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.add-column-pattern = ADD {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.alter-column-pattern = MODIFY {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.drop-column-pattern = DROP COLUMN {column} CASCADE
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.make-column-nullable-pattern = MODIFY {column} {type} NULL
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.DATE = DATE
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.INTEGER = INT
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.BOOLEAN = BOOL
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.LONG = BIGINT
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.DATETIME = DATETIME
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.FLOAT = DECIMAL({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.FLOAT.real = FLOAT
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.FLOAT.real.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT"
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.FLOAT.double = DOUBLE
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.STRING = VARCHAR({columnSize = 128})
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.STRING.mediumtext.when = columnSize > 2000 && columnSize <= 16777215
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.STRING.mediumtext = MEDIUMTEXT
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.STRING.longtext.when = columnSize > 16777215
plugin.jdbcdatasource.ataccama.one.driver.aurora-mysql.data-export.type-mappings.STRING.longtext = LONGTEXT

MS SQL configuration

MS SQL JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.mssql.name = MSSQL Server
plugin.jdbcdatasource.ataccama.one.driver.mssql.connection-pattern = jdbc:sqlserver://<hostname>:<port>;databaseName=<database>
plugin.jdbcdatasource.ataccama.one.driver.mssql.driver-class-path = mssql-jdbc*.jar
plugin.jdbcdatasource.ataccama.one.driver.mssql.additional-classpath-files = mssqlLibs/*;
plugin.jdbcdatasource.ataccama.one.driver.mssql.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.mssql.MSSQLDataSourceClientProvider
#plugin.jdbcdatasource.ataccama.one.driver.mssql.pooling-enabled = true
#plugin.jdbcdatasource.ataccama.one.driver.mssql.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.mssql.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.mssql.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.mssql.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.mssql.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.mssql.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.mssql.preview-query-pattern = SELECT TOP {previewLimit} {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.mssql.dsl-query-preview-query-pattern = SELECT TOP {previewLimit} * FROM ({dslQuery}) AS dslQuery
plugin.jdbcdatasource.ataccama.one.driver.mssql.dsl-query-import-metadata-query-pattern = SELECT TOP 0 * FROM ({dslQuery}) AS dslQuery
plugin.jdbcdatasource.ataccama.one.driver.mssql.row-count-query-pattern = SELECT COUNT_BIG(*) FROM {table}
#plugin.jdbcdatasource.ataccama.one.driver.mssql.sampling-query-pattern = SELECT {columns} FROM {table} TABLESAMPLE ({percentageLimit} PERCENT)
plugin.jdbcdatasource.ataccama.one.driver.mssql.sampling-query-pattern = IF (SELECT TABLE_TYPE FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '{simpleTable}' AND TABLE_SCHEMA = '{schema}') = 'BASE TABLE' \
 EXEC('SELECT TOP {limit} {columns} FROM {table} TABLESAMPLE (CASE WHEN {percentageLimit}*110 > 100 THEN 100 ELSE {percentageLimit}*110 END PERCENT)') \
 ELSE SELECT TOP {limit} {columns} FROM (select t1.*, RAND(CHECKSUM(NEWID())) as ____rnd from {table} as t1) as t1 where ____rnd < {percentageLimit} * 1.1
plugin.jdbcdatasource.ataccama.one.driver.mssql.disallowed-indexes-table-types = SYNONYM
plugin.jdbcdatasource.ataccama.one.driver.mssql.bulk-import-table-count-threshold = 30
plugin.jdbcdatasource.ataccama.one.driver.mssql.row-number-column = ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) as ____rno
#plugin.jdbcdatasource.ataccama.one.driver.mssql.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.add-column-pattern = ADD {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.alter-column-pattern = ALTER COLUMN {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.drop-column-pattern = DROP COLUMN {column}
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.make-column-nullable-pattern = ALTER COLUMN {column} {type} NULL
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.DATE = date
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.INTEGER = integer
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.BOOLEAN = bit
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.LONG = bigint
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.DATETIME = datetime
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.FLOAT = numeric({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.FLOAT.real = real
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.FLOAT.real.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT"
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.FLOAT.double = float(53)
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.STRING = varchar({columnSize = 128})
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.STRING.lob = varchar(max)
plugin.jdbcdatasource.ataccama.one.driver.mssql.data-export.type-mappings.STRING.lob.when = columnSize > 2000
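
The default MS SQL sampling query first checks `INFORMATION_SCHEMA.TABLES`: base tables are sampled with `TABLESAMPLE` (oversampling the requested percentage by 10% and capping it at 100 PERCENT), while views fall back to a `RAND(CHECKSUM(NEWID()))` row filter, because `TABLESAMPLE` works only on base tables. If all profiled objects are known to be base tables, the simpler commented pattern can be used instead (a hypothetical simplification, not a recommended default):

```properties
# Hypothetical simplification: plain TABLESAMPLE; valid only for base tables, not views.
plugin.jdbcdatasource.ataccama.one.driver.mssql.sampling-query-pattern = SELECT {columns} FROM {table} TABLESAMPLE ({percentageLimit} PERCENT)
```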

Azure Synapse Analytics configuration

Azure Synapse Analytics JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.name = Azure Synapse Analytics
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.connection-pattern = jdbc:sqlserver://<hostname>:<port>;databaseName=<database>
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.driver-class-path = mssql-jdbc*.jar
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.additional-classpath-files = mssqlLibs/*;
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.mssql.MSSQLDataSourceClientProvider
#plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.pooling-enabled = true
#plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.preview-query-pattern = SELECT TOP {previewLimit} {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.dsl-query-preview-query-pattern = SELECT TOP {previewLimit} * FROM ({dslQuery}) AS dslQuery
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.dsl-query-import-metadata-query-pattern = SELECT TOP 0 * FROM ({dslQuery}) AS dslQuery
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.row-count-query-pattern = SELECT COUNT_BIG(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.sampling-query-pattern = SELECT TOP {limit} {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.disallowed-indexes-table-types = SYNONYM
#plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.schema-exclude-pattern =
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.schema-include-pattern =
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.table-exclude-pattern =
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.table-include-pattern =
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.add-column-pattern = ADD {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.alter-column-pattern = ALTER COLUMN {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.drop-column-pattern = DROP COLUMN {column}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.make-column-nullable-pattern = ALTER COLUMN {column} {type} NULL
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.use-implicit-transaction-for-ddl = true
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.DATE = date
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.INTEGER = integer
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.BOOLEAN = bit
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.LONG = bigint
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.DATETIME = datetime
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.FLOAT = numeric({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.FLOAT.real = real
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.FLOAT.real.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT"
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.FLOAT.double = float(53)
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.data-export.type-mappings.STRING = varchar({columnSize = 128})
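
The empty `schema-*-pattern` and `table-*-pattern` properties above restrict which objects are listed during metadata import. A hypothetical example, assuming the values are patterns matched against schema and table names (the names used here are illustrative):

```properties
# Hypothetical example: list only the reporting schema and skip staging tables.
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.schema-include-pattern = reporting.*
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.table-exclude-pattern = stg_.*
```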

SAP HANA configuration

SAP HANA JDBC properties
# Disabled by default, change to disabled = false to enable driver configuration
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.disabled = true
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.name = SAP HANA
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.connection-pattern = jdbc:sap://<hostname>:<port>/?databaseName=<database>
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.driver-class-path = ngdbc*.jar
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.saphana.SaphanaDataSourceClientProvider
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.driver-class = com.sap.db.jdbc.Driver
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.pooling-enabled = true
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.connection-timeout = 20000
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.idle-timeout = 300000
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.max-lifetime = 900000
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.minimum-idle = 1
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.preview-query-pattern = SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.sampling-query-pattern = SELECT {columns} FROM {table} TABLESAMPLE BERNOULLI ({percentageLimit}*100)
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.disallowed-indexes-table-types = SYNONYM,VIEW
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.location-query = select value from "SYS"."M_SYSTEM_OVERVIEW" where section = 'System' and name = 'Instance ID'
#plugin.jdbcdatasource.ataccama.one.driver.sap-hana.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.add-column-pattern = ADD ({column} {type})
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.alter-column-pattern = ALTER ({column} {type})
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.drop-column-pattern = DROP ({column})
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.make-column-nullable-pattern = ALTER ({column} {type} NULL)
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.DATE = DATE
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.INTEGER = INTEGER
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.BOOLEAN = BOOLEAN
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.LONG = BIGINT
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.DATETIME = DATETIME
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.FLOAT = DECIMAL({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.FLOAT.real = REAL
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.FLOAT.real.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT"
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.FLOAT.double = DOUBLE
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.STRING = VARCHAR({columnSize = 128})
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.STRING.lob = CLOB
plugin.jdbcdatasource.ataccama.one.driver.sap-hana.data-export.type-mappings.STRING.lob.when = columnSize > 5000
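The conditional type mappings above combine a base mapping (for example, `FLOAT = DECIMAL(...)`) with named sub-mappings guarded by `when` expressions: a sub-mapping whose condition matches the source column takes precedence over the base mapping. A minimal Python sketch of that resolution logic for the SAP HANA FLOAT mappings, assuming this precedence order (the helper names and evaluation details are illustrative, not DPE internals):

```python
# Illustrative resolution of the SAP HANA FLOAT type mappings above.
# Assumption: a sub-mapping applies when its `when` predicate matches the
# source column's JDBC type; otherwise the base mapping is used.

FLOAT_BASE = "DECIMAL(32,8)"
FLOAT_SUBMAPPINGS = [
    # (target type, predicate over jdbcColumnType)
    ("REAL", lambda jdbc: jdbc in ("REAL", "FLOAT")),
    ("DOUBLE", lambda jdbc: jdbc == "DOUBLE"),
]

def map_float_type(jdbc_column_type: str) -> str:
    """Return the export type for a FLOAT column of the given JDBC type."""
    for target, when in FLOAT_SUBMAPPINGS:
        if when(jdbc_column_type):
            return target
    return FLOAT_BASE
```

For example, a source column reported as `DOUBLE` would export as `DOUBLE`, while an unmatched numeric type falls back to `DECIMAL(32,8)`.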

MariaDB configuration

MariaDB JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.mariadb.name = MariaDB
plugin.jdbcdatasource.ataccama.one.driver.mariadb.connection-pattern = jdbc:mariadb://<hostname>:<port>/<database>
plugin.jdbcdatasource.ataccama.one.driver.mariadb.driver-class-path = mariadb*.jar
plugin.jdbcdatasource.ataccama.one.driver.mariadb.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.mariadb.MariaDBDataSourceClientProvider
#plugin.jdbcdatasource.ataccama.one.driver.mariadb.pooling-enabled = true
#plugin.jdbcdatasource.ataccama.one.driver.mariadb.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.mariadb.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.mariadb.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.mariadb.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.mariadb.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.mariadb.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.mariadb.preview-query-pattern = SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.mariadb.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.mariadb.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.mariadb.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.mariadb.sampling-query-pattern = SELECT {columns} FROM {table} LIMIT {limit}
plugin.jdbcdatasource.ataccama.one.driver.mariadb.query-quotation-mark =`
plugin.jdbcdatasource.ataccama.one.driver.mariadb.disallowed-indexes-table-types = SYNONYM
#plugin.jdbcdatasource.ataccama.one.driver.mariadb.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.add-column-pattern = ADD {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.alter-column-pattern = MODIFY {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.drop-column-pattern = DROP COLUMN {column} CASCADE
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.make-column-nullable-pattern = MODIFY {column} {type} NULL
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.DATE = DATE
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.INTEGER = INT
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.BOOLEAN = BOOL
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.LONG = BIGINT
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.DATETIME = DATETIME
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.FLOAT = DECIMAL({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.FLOAT.real = FLOAT
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.FLOAT.real.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT"
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.FLOAT.double = DOUBLE
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.STRING = VARCHAR({columnSize = 128})
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.STRING.mediumtext.when = columnSize > 2000 && columnSize <= 16777215
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.STRING.mediumtext = MEDIUMTEXT
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.STRING.longtext.when = columnSize > 16777215
plugin.jdbcdatasource.ataccama.one.driver.mariadb.data-export.type-mappings.STRING.longtext = LONGTEXT

Teradata configuration

Teradata JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.teradata.name = Teradata
plugin.jdbcdatasource.ataccama.one.driver.teradata.connection-pattern = jdbc:teradata://<hostname>/database=<database>,charset=UTF8
plugin.jdbcdatasource.ataccama.one.driver.teradata.driver-class-path = terajdbc4-*.jar
plugin.jdbcdatasource.ataccama.one.driver.teradata.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.teradata.TeradataDataSourceClientProvider
plugin.jdbcdatasource.ataccama.one.driver.teradata.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.teradata.preview-query-pattern = SELECT {columns} FROM {table} SAMPLE {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.teradata.dsl-query-preview-query-pattern = SELECT TOP {previewLimit} * FROM ({dslQuery}) dslQuery
plugin.jdbcdatasource.ataccama.one.driver.teradata.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery WHERE 1 = 0
plugin.jdbcdatasource.ataccama.one.driver.teradata.row-count-query-pattern = SELECT CAST(COUNT (*) AS BIGINT) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.teradata.sampling-query-pattern = SELECT {columns} FROM {table} SAMPLE {limit}
#plugin.jdbcdatasource.ataccama.one.driver.teradata.pooling-enabled = true
# Not supported by this driver:
#plugin.jdbcdatasource.ataccama.one.driver.teradata.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.teradata.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.teradata.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.teradata.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.teradata.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.teradata.disallowed-indexes-table-types = SYNONYM
# This driver supports custom properties, which can be set through:
#plugin.jdbcdatasource.ataccama.one.driver.teradata.properties.NAME_OF_THE_CUSTOM_PROPERTY = CUSTOM_PROPERTY_VALUE
#plugin.jdbcdatasource.ataccama.one.driver.teradata.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.teradata.schema-regex = jdbc:teradata://.*database=([a-zA-Z0-9_]*).*
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.add-column-pattern = ADD {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.alter-column-pattern = ADD {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.drop-column-pattern = DROP {column}
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.make-column-nullable-pattern = ADD {column} NULL
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.allow-boolean-as-number = true
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.DATE = DATE
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.INTEGER = INTEGER
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.BOOLEAN = DECIMAL(1)
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.LONG = BIGINT
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.DATETIME = TIMESTAMP
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.FLOAT = NUMERIC({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.FLOAT.double = DOUBLE PRECISION
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT" || jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.STRING = VARCHAR({columnSize = 128}) CHARACTER SET UNICODE CASESPECIFIC
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.STRING.longtext.when = columnSize > 2000
plugin.jdbcdatasource.ataccama.one.driver.teradata.data-export.type-mappings.STRING.longtext = CLOB
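The query patterns above are templates in which placeholders such as `{columns}`, `{table}`, and `{previewLimit}` are substituted at runtime. A hedged sketch of that substitution for the Teradata preview pattern (the `render` helper and sample identifiers are illustrative; only the pattern itself comes from the configuration):

```python
# Rendering the Teradata preview-query-pattern from the configuration above.
PREVIEW_PATTERN = "SELECT {columns} FROM {table} SAMPLE {previewLimit}"

def render(pattern: str, **values) -> str:
    """Substitute {placeholder} tokens in a query pattern with concrete values."""
    return pattern.format(**values)

sql = render(PREVIEW_PATTERN, columns="emp_id, emp_name",
             table="hr.employees", previewLimit=100)
```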

Amazon Redshift configuration

Amazon Redshift JDBC properties
# Disabled by default; to enable the driver, set disabled = false
plugin.jdbcdatasource.ataccama.one.driver.redshift.disabled = true
plugin.jdbcdatasource.ataccama.one.driver.redshift.name = Amazon Redshift
plugin.jdbcdatasource.ataccama.one.driver.redshift.connection-pattern = jdbc:redshift://<hostname>:<port>/<database>
plugin.jdbcdatasource.ataccama.one.driver.redshift.driver-class-path = redshift-*.jar
plugin.jdbcdatasource.ataccama.one.driver.redshift.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.redshift.RedshiftDataSourceClientProvider
#plugin.jdbcdatasource.ataccama.one.driver.redshift.pooling-enabled = true
#plugin.jdbcdatasource.ataccama.one.driver.redshift.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.redshift.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.redshift.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.redshift.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.redshift.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.redshift.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.redshift.preview-query-pattern = SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.redshift.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.redshift.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.redshift.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.redshift.sampling-query-pattern = SELECT {columns} FROM {table} WHERE RANDOM() < {percentageLimit} LIMIT {limit}
plugin.jdbcdatasource.ataccama.one.driver.redshift.query-quotation-mark = \"
plugin.jdbcdatasource.ataccama.one.driver.redshift.disallowed-indexes-table-types = SYNONYM
# This driver supports custom properties, which can be set through:
#plugin.jdbcdatasource.ataccama.one.driver.redshift.properties.NAME_OF_THE_CUSTOM_PROPERTY = CUSTOM_PROPERTY_VALUE
#plugin.jdbcdatasource.ataccama.one.driver.redshift.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.add-column-pattern = ADD COLUMN {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.alter-column-pattern = ALTER COLUMN {column} TYPE {type}
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.drop-column-pattern = DROP COLUMN {column} CASCADE
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.not-null-superfluous-column-strategy = drop
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.use-implicit-transaction-for-ddl = true
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.DATE = date
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.INTEGER = integer
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.BOOLEAN = boolean
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.LONG = bigint
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.DATETIME = timestamp
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.FLOAT = numeric({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.FLOAT.real = real
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.FLOAT.real.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT"
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.FLOAT.double = double precision
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.STRING = varchar({columnSize = 128})
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.STRING.LIMITED = VARCHAR(65535)
plugin.jdbcdatasource.ataccama.one.driver.redshift.data-export.type-mappings.STRING.LIMITED.when = columnSize > 65535

Snowflake configuration

Snowflake JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.snowflake.name=Snowflake
plugin.jdbcdatasource.ataccama.one.driver.snowflake.connection-pattern=jdbc:snowflake://<hostname>:<port>?db=<database>
plugin.jdbcdatasource.ataccama.one.driver.snowflake.provider-class = com.ataccama.dpe.plugin.snowflake.jdbc.SnowflakeDataSourceClientProvider
#plugin.jdbcdatasource.ataccama.one.driver.snowflake.pooling-enabled = true
#plugin.jdbcdatasource.ataccama.one.driver.snowflake.connection-timeout = 20000
#plugin.jdbcdatasource.ataccama.one.driver.snowflake.idle-timeout = 300000
#plugin.jdbcdatasource.ataccama.one.driver.snowflake.max-lifetime = 900000
#plugin.jdbcdatasource.ataccama.one.driver.snowflake.minimum-idle = 1
#plugin.jdbcdatasource.ataccama.one.driver.snowflake.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.snowflake.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.snowflake.preview-query-pattern = SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.snowflake.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.snowflake.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.snowflake.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.snowflake.sampling-query-pattern = select {columns} from {table} sample bernoulli({percentage100Limit})
plugin.jdbcdatasource.ataccama.one.driver.snowflake.query-quotation-mark =\"
plugin.jdbcdatasource.ataccama.one.driver.snowflake.disallowed-indexes-table-types=SYNONYM
# This driver supports custom properties, which can be set through:
#plugin.jdbcdatasource.ataccama.one.driver.snowflake.properties.NAME_OF_THE_CUSTOM_PROPERTY=CUSTOM_PROPERTY_VALUE
#plugin.jdbcdatasource.ataccama.one.driver.snowflake.allow-partial-listing=false
plugin.jdbcdatasource.ataccama.one.driver.snowflake.schema-exclude-pattern=^(INFORMATION_SCHEMA)|(information_schema)$
plugin.jdbcdatasource.ataccama.one.driver.snowflake.schema-include-pattern=
plugin.jdbcdatasource.ataccama.one.driver.snowflake.table-exclude-pattern=
plugin.jdbcdatasource.ataccama.one.driver.snowflake.table-include-pattern=
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.disabled=false
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.alter-multiple-columns=false
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.add-column-pattern = ADD COLUMN {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.alter-column-pattern = ALTER COLUMN {column} SET DATA TYPE {type}
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.drop-column-pattern = DROP COLUMN {column} CASCADE
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.make-column-nullable-pattern = ALTER COLUMN {column} DROP NOT NULL
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.delete-table-pattern = DELETE FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.type-mappings.DATE=DATE
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.type-mappings.INTEGER=NUMBER(10)
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.type-mappings.BOOLEAN=BOOLEAN
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.type-mappings.LONG=NUMBER(20)
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.type-mappings.DATETIME=DATETIME
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.type-mappings.FLOAT=NUMBER({columnSize=32},{fractionalDigits=8})
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.type-mappings.STRING=VARCHAR({columnSize=128})
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.type-mappings.STRING.LIMITED=VARCHAR(16777216)
plugin.jdbcdatasource.ataccama.one.driver.snowflake.data-export.type-mappings.STRING.LIMITED.when=columnSize > 16777216
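The `schema-exclude-pattern` above filters out Snowflake's `INFORMATION_SCHEMA` during metadata listing. Checked with Python's `re` module for illustration (DPE's exact matching semantics may differ):

```python
import re

# The Snowflake schema-exclude-pattern from the configuration above.
EXCLUDE_PATTERN = re.compile(r"^(INFORMATION_SCHEMA)|(information_schema)$")

def schema_is_listed(schema_name: str) -> bool:
    """A schema is listed unless the exclude pattern matches it."""
    return EXCLUDE_PATTERN.match(schema_name) is None
```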

Apache Cassandra configuration

Apache Cassandra requires a primary key on every table, so make sure the plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.create-table-primary-key-strategy property is set.
Apache Cassandra JDBC properties
# Disabled by default; to enable the driver, set disabled = false
plugin.jdbcdatasource.ataccama.one.driver.cassandra.disabled = true
plugin.jdbcdatasource.ataccama.one.driver.cassandra.name = Cassandra
plugin.jdbcdatasource.ataccama.one.driver.cassandra.connection-pattern = jdbc:cassandra://<hostname>:<port>
plugin.jdbcdatasource.ataccama.one.driver.cassandra.driver-class-path = CassandraJDBC42*.jar
plugin.jdbcdatasource.ataccama.one.driver.cassandra.driver-class = com.simba.cassandra.jdbc42.Driver
plugin.jdbcdatasource.ataccama.one.driver.cassandra.pooling-enabled = true
plugin.jdbcdatasource.ataccama.one.driver.cassandra.connection-timeout = 20000
plugin.jdbcdatasource.ataccama.one.driver.cassandra.idle-timeout = 300000
plugin.jdbcdatasource.ataccama.one.driver.cassandra.max-lifetime = 900000
plugin.jdbcdatasource.ataccama.one.driver.cassandra.minimum-idle = 1
plugin.jdbcdatasource.ataccama.one.driver.cassandra.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.cassandra.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.cassandra.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.cassandra.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.cassandra.preview-query-pattern = SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.cassandra.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.cassandra.sampling-query-pattern = SELECT {columns} FROM {table} LIMIT {limit};
plugin.jdbcdatasource.ataccama.one.driver.cassandra.query-quotation-mark =\"
plugin.jdbcdatasource.ataccama.one.driver.cassandra.schema-regex = jdbc:cassandra://.*?/([a-zA-Z0-9_]*).*
#plugin.jdbcdatasource.ataccama.one.driver.cassandra.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.create-table-primary-key-strategy = first_column
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.add-column-pattern = ADD {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.alter-column-pattern = ALTER {column} TYPE {type}
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.drop-column-pattern = DROP {column}
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.not-null-superfluous-column-strategy = drop
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.incompatible-column-type-strategy = fail
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.delete-table-pattern = TRUNCATE {table}
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.DATE = date
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.INTEGER = int
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.BOOLEAN = boolean
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.LONG = bigint
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.DATETIME = timestamp
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.FLOAT = decimal
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.FLOAT.float = float
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.FLOAT.float.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT"
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.FLOAT.double = double
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.FLOAT.double.when = jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.cassandra.data-export.type-mappings.STRING = text
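The `schema-regex` above extracts the keyspace from the JDBC URL via its capture group. A quick illustration with Python's `re` module (the sample URL includes a keyspace path segment for demonstration; the default `connection-pattern` in this file stops at `<hostname>:<port>`):

```python
import re

# The Cassandra schema-regex from the configuration above; group 1 captures
# the keyspace name that follows the first "/" after the host.
SCHEMA_REGEX = re.compile(r"jdbc:cassandra://.*?/([a-zA-Z0-9_]*).*")

match = SCHEMA_REGEX.match("jdbc:cassandra://db.example.com:9042/sales_ks")
keyspace = match.group(1) if match else None
```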

BigQuery configuration

BigQuery JDBC properties
# To disable this driver, set disabled = true
plugin.jdbcdatasource.ataccama.one.driver.bq.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.bq.name = Big Query
plugin.jdbcdatasource.ataccama.one.driver.bq.skip-database-level = false
plugin.jdbcdatasource.ataccama.one.driver.bq.connection-pattern = jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=<projectID>;
plugin.jdbcdatasource.ataccama.one.driver.bq.driver-class-path = GoogleBigQueryJDBC42.jar
plugin.jdbcdatasource.ataccama.one.driver.bq.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.bigquery.BigQueryDataSourceClientProvider
plugin.jdbcdatasource.ataccama.one.driver.bq.additional-classpath-files = bigQueryLibs/*.jar
plugin.jdbcdatasource.ataccama.one.driver.bq.driver-class = com.simba.googlebigquery.jdbc42.Driver
plugin.jdbcdatasource.ataccama.one.driver.bq.pooling-enabled = false
plugin.jdbcdatasource.ataccama.one.driver.bq.connection-timeout = 20000
plugin.jdbcdatasource.ataccama.one.driver.bq.idle-timeout = 300000
plugin.jdbcdatasource.ataccama.one.driver.bq.max-lifetime = 900000
plugin.jdbcdatasource.ataccama.one.driver.bq.minimum-idle = 1
plugin.jdbcdatasource.ataccama.one.driver.bq.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.bq.full-select-query-pattern = #standardSQL \n \
  SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.bq.preview-query-pattern =  #standardSQL \n \
  SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.bq.dsl-query-preview-query-pattern = #standardSQL \n \
  SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.bq.dsl-query-import-metadata-query-pattern = #standardSQL \n \
  SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
#plugin.jdbcdatasource.ataccama.one.driver.bq.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.bq.row-count-query-pattern =  #standardSQL \n \
  SELECT row_count FROM `{database}`.`{schema}`.__TABLES__ WHERE table_id = '{simpleTable}'
plugin.jdbcdatasource.ataccama.one.driver.bq.sampling-query-pattern =  #standardSQL \n \
  SELECT {columns} FROM {table} WHERE RAND() < {percentageLimit};
plugin.jdbcdatasource.ataccama.one.driver.bq.query-quotation-mark =`
# This driver supports custom properties, which can be set through:
#plugin.jdbcdatasource.ataccama.one.driver.bq.properties.NAME_OF_THE_CUSTOM_PROPERTY = CUSTOM_PROPERTY_VALUE
plugin.jdbcdatasource.ataccama.one.driver.bq.properties.Timeout = 7200
#plugin.jdbcdatasource.ataccama.one.driver.bq.properties.AllowLargeResults = 1
#plugin.jdbcdatasource.ataccama.one.driver.bq.properties.LargeResultDataset =_simba_jdbc
#plugin.jdbcdatasource.ataccama.one.driver.bq.properties.LargeResultTable = temp_table_{now}
#plugin.jdbcdatasource.ataccama.one.driver.bq.allow-partial-listing = false
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.create-table-pattern = CREATE TABLE {table} ({columns})
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.create-table-column-pattern = {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.fragment-separator = ,\n
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.alter-table-pattern = ALTER TABLE {table} {commands}
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.alter-multiple-columns = false
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.add-column-pattern = ADD COLUMN {column} {type}
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.alter-column-pattern = ALTER COLUMN {column} SET DATA TYPE {type}
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.drop-column-pattern = DROP COLUMN {column}
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.make-column-nullable-pattern = ALTER COLUMN {column} DROP NOT NULL
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.incompatible-column-type-strategy = fail
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.delete-table-pattern = truncate table {table}
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.type-mappings.DATE = DATE
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.type-mappings.INTEGER = INT
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.type-mappings.BOOLEAN = BOOL
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.type-mappings.LONG = INT64
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.type-mappings.DATETIME = DATETIME
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.type-mappings.FLOAT = BIGNUMERIC({columnSize = 32},{fractionalDigits = 8})
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.type-mappings.FLOAT.float64 = FLOAT64
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.type-mappings.FLOAT.float64.when = jdbcColumnType == "REAL" || jdbcColumnType == "FLOAT" || jdbcColumnType == "DOUBLE"
plugin.jdbcdatasource.ataccama.one.driver.bq.data-export.type-mappings.STRING = STRING({columnSize = 128})
plugin.jdbcdatasource.ataccama.one.driver.bq.parent-resource-last = true
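The BigQuery row-count pattern above wraps `{database}` and `{schema}` in the configured `query-quotation-mark` (a backtick) and queries the `__TABLES__` metadata view rather than counting rows directly. A sketch of the substitution, with illustrative identifiers (the `#standardSQL` prefix and line continuation are omitted for brevity):

```python
# Rendering the BigQuery row-count-query-pattern from the configuration above.
ROW_COUNT_PATTERN = (
    "SELECT row_count FROM `{database}`.`{schema}`.__TABLES__ "
    "WHERE table_id = '{simpleTable}'"
)

sql = ROW_COUNT_PATTERN.format(
    database="my-project",   # illustrative project ID
    schema="sales",          # illustrative dataset
    simpleTable="orders",    # illustrative table
)
```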

IBM Db2 configuration

IBM Db2 JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.db2.name=DB2
plugin.jdbcdatasource.ataccama.one.driver.db2.connection-pattern=jdbc:db2://<hostname>:<port>/<database>
plugin.jdbcdatasource.ataccama.one.driver.db2.driver-class-path=db2jcc4.jar
plugin.jdbcdatasource.ataccama.one.driver.db2.driver-class=com.ibm.db2.jcc.DB2Driver
plugin.jdbcdatasource.ataccama.one.driver.db2.full-select-query-pattern=SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.db2.preview-query-pattern=SELECT {columns} FROM {table} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.db2.dsl-query-preview-query-pattern=SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.db2.dsl-query-import-metadata-query-pattern=SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.db2.row-count-query-pattern=SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.db2.sampling-query-pattern = SELECT {columns} FROM {table} WHERE RANDOM() < {percentageLimit} LIMIT {limit}
# Depending on the Db2 server version, you might need to replace 'values current server'
# with 'select current_server from sysibm.sysdummy1' in the following property.
# Run both commands on your database server before modifying the configuration.
# The server name is used as the top-level location of imported catalog items.
plugin.jdbcdatasource.ataccama.one.driver.db2.location-query = values current server
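If the `select current_server from sysibm.sysdummy1` variant is the one that works on your Db2 server, the property would be set as follows (a sketch based on the comment above):

```properties
# Alternative for Db2 server versions where 'values current server' is not supported:
plugin.jdbcdatasource.ataccama.one.driver.db2.location-query = select current_server from sysibm.sysdummy1
```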

IBM Netezza configuration

IBM Netezza JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.netezza.name=Netezza
plugin.jdbcdatasource.ataccama.one.driver.netezza.connection-pattern=jdbc:netezza://<hostname>:<port>/<database>
plugin.jdbcdatasource.ataccama.one.driver.netezza.driver-class-path=nzjdbc.jar
plugin.jdbcdatasource.ataccama.one.driver.netezza.driver-class=org.netezza.Driver
plugin.jdbcdatasource.ataccama.one.driver.netezza.pooling-enabled=true
plugin.jdbcdatasource.ataccama.one.driver.netezza.connection-timeout=20000
plugin.jdbcdatasource.ataccama.one.driver.netezza.idle-timeout=300000
plugin.jdbcdatasource.ataccama.one.driver.netezza.max-lifetime=900000
plugin.jdbcdatasource.ataccama.one.driver.netezza.minimum-idle=1
plugin.jdbcdatasource.ataccama.one.driver.netezza.maximum-pool-size=5
plugin.jdbcdatasource.ataccama.one.driver.netezza.full-select-query-pattern=SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.netezza.preview-query-pattern=SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.netezza.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.netezza.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.netezza.row-count-query-pattern=SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.netezza.sampling-query-pattern=SELECT {columns} FROM {table} LIMIT {limit}
plugin.jdbcdatasource.ataccama.one.driver.netezza.query-quotation-mark=\"
plugin.jdbcdatasource.ataccama.one.driver.netezza.connection-test-query=select 1

Informix configuration

For Informix, you need to refer to tables by a simplified syntax with no reference to the catalog: {schema}.{simpleTable}, where:

  • {schema} - Schema name.

  • {simpleTable} - Name of the table.
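Using this syntax, a pattern such as the preview query pattern below resolves by placeholder substitution. For example, with a hypothetical schema sales, table customers, columns id and name, and a preview limit of 100, the expanded statement would be:

```sql
-- preview-query-pattern after placeholder substitution (all values hypothetical):
-- {columns} = id, name; {schema} = sales; {simpleTable} = customers; {previewLimit} = 100
SELECT id, name FROM sales.customers LIMIT 100
```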

Informix JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.informix.name = INFORMIX
plugin.jdbcdatasource.ataccama.one.driver.informix.connection-pattern = jdbc:informix-sqli://<hostname>:<port>/<dbname>:informixserver=<server_name>
plugin.jdbcdatasource.ataccama.one.driver.informix.driver-class-path = jdbc-4.10.8.1.jar
plugin.jdbcdatasource.ataccama.one.driver.informix.driver-class = com.informix.jdbc.IfxDriver
plugin.jdbcdatasource.ataccama.one.driver.informix.pooling-enabled = true
plugin.jdbcdatasource.ataccama.one.driver.informix.connection-timeout = 20000
plugin.jdbcdatasource.ataccama.one.driver.informix.idle-timeout = 300000
plugin.jdbcdatasource.ataccama.one.driver.informix.max-lifetime = 900000
plugin.jdbcdatasource.ataccama.one.driver.informix.minimum-idle = 1
plugin.jdbcdatasource.ataccama.one.driver.informix.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.informix.full-select-query-pattern = SELECT {columns} FROM {schema}.{simpleTable}
plugin.jdbcdatasource.ataccama.one.driver.informix.preview-query-pattern = SELECT {columns} FROM {schema}.{simpleTable} LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.informix.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.informix.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0
plugin.jdbcdatasource.ataccama.one.driver.informix.row-count-query-pattern = SELECT COUNT(*) FROM {schema}.{simpleTable}
plugin.jdbcdatasource.ataccama.one.driver.informix.sampling-query-pattern = SELECT {columns} FROM {schema}.{simpleTable} WHERE 0 < {percentageLimit} limit {limit}
plugin.jdbcdatasource.ataccama.one.driver.informix.disallowed-indexes-table-types = SYNONYM
plugin.jdbcdatasource.ataccama.one.driver.informix.disabled = false
plugin.jdbcdatasource.ataccama.one.driver.informix.query-quotation-mark =\u0020

Dremio configuration

Dremio JDBC properties
# Disabled by default, change to disabled = false to enable driver configuration
plugin.jdbcdatasource.ataccama.one.driver.dremio.disabled = true
plugin.jdbcdatasource.ataccama.one.driver.dremio.name = Dremio
plugin.jdbcdatasource.ataccama.one.driver.dremio.connection-pattern = jdbc:dremio:direct=<hostname>:<port>[;schema=<schema>]
plugin.jdbcdatasource.ataccama.one.driver.dremio.driver-class-path = dremio-jdbc-driver*.jar
plugin.jdbcdatasource.ataccama.one.driver.dremio.driver-class = com.dremio.jdbc.Driver
plugin.jdbcdatasource.ataccama.one.driver.dremio.pooling-enabled = false
plugin.jdbcdatasource.ataccama.one.driver.dremio.connection-timeout = 20000
plugin.jdbcdatasource.ataccama.one.driver.dremio.idle-timeout = 300000
plugin.jdbcdatasource.ataccama.one.driver.dremio.max-lifetime = 900000
plugin.jdbcdatasource.ataccama.one.driver.dremio.minimum-idle = 1
plugin.jdbcdatasource.ataccama.one.driver.dremio.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.dremio.profiling-sample-limit = 100000
plugin.jdbcdatasource.ataccama.one.driver.dremio.full-select-query-pattern = SELECT {columns} FROM "{schema}"."{simpleTable}"
plugin.jdbcdatasource.ataccama.one.driver.dremio.preview-query-pattern = SELECT {columns} FROM "{schema}"."{simpleTable}" LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.dremio.row-count-query-pattern = SELECT COUNT(*) FROM "{schema}"."{simpleTable}"
plugin.jdbcdatasource.ataccama.one.driver.dremio.sampling-query-pattern = SELECT {columns} FROM "{schema}"."{simpleTable}" WHERE RAND() < {percentageLimit};
plugin.jdbcdatasource.ataccama.one.driver.dremio.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.dremio.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0

Sybase configuration

Sybase JDBC properties
# Disabled by default, change to disabled = false to enable driver configuration
plugin.jdbcdatasource.ataccama.one.driver.sybase.disabled = true
plugin.jdbcdatasource.ataccama.one.driver.sybase.name = Sybase
plugin.jdbcdatasource.ataccama.one.driver.sybase.connection-pattern = jdbc:jtds:sybase://<hostname>:<port>;DatabaseName=<database>
plugin.jdbcdatasource.ataccama.one.driver.sybase.driver-class-path = jtds-*.jar
plugin.jdbcdatasource.ataccama.one.driver.sybase.driver-class = net.sourceforge.jtds.jdbc.Driver
plugin.jdbcdatasource.ataccama.one.driver.sybase.pooling-enabled = true
plugin.jdbcdatasource.ataccama.one.driver.sybase.connection-timeout = 20000
plugin.jdbcdatasource.ataccama.one.driver.sybase.idle-timeout = 300000
plugin.jdbcdatasource.ataccama.one.driver.sybase.max-lifetime = 900000
plugin.jdbcdatasource.ataccama.one.driver.sybase.minimum-idle = 1
plugin.jdbcdatasource.ataccama.one.driver.sybase.maximum-pool-size = 5
plugin.jdbcdatasource.ataccama.one.driver.sybase.connection-test-query = SELECT 1
plugin.jdbcdatasource.ataccama.one.driver.sybase.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.sybase.preview-query-pattern = SELECT TOP {previewLimit} {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.sybase.dsl-query-preview-query-pattern = SELECT TOP {previewLimit} * FROM ({dslQuery}) dslQuery
plugin.jdbcdatasource.ataccama.one.driver.sybase.dsl-query-import-metadata-query-pattern = SELECT TOP 0 * FROM ({dslQuery}) dslQuery
plugin.jdbcdatasource.ataccama.one.driver.sybase.row-count-query-pattern = SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.sybase.sampling-query-pattern = SELECT TOP {limit} {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.sybase.disallowed-indexes-table-types = SYNONYM
plugin.jdbcdatasource.ataccama.one.driver.sybase.bulk-import-table-count-threshold = 30
plugin.jdbcdatasource.ataccama.one.driver.sybase.row-number-column = ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) as ____rno#

SQLite configuration

SQLite JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.sqlite.name=Sqlite3
plugin.jdbcdatasource.ataccama.one.driver.sqlite.connection-pattern=jdbc:sqlite:/path
plugin.jdbcdatasource.ataccama.one.driver.sqlite.driver-class-path=sqlite-jdbc-*.jar
plugin.jdbcdatasource.ataccama.one.driver.sqlite.driver-class=org.sqlite.JDBC
plugin.jdbcdatasource.ataccama.one.driver.sqlite.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.sqlite.SqliteDataSourceClientProvider
plugin.jdbcdatasource.ataccama.one.driver.sqlite.pooling-enabled=false
plugin.jdbcdatasource.ataccama.one.driver.sqlite.connection-timeout=20000
plugin.jdbcdatasource.ataccama.one.driver.sqlite.idle-timeout=300000
plugin.jdbcdatasource.ataccama.one.driver.sqlite.max-lifetime=900000
plugin.jdbcdatasource.ataccama.one.driver.sqlite.minimum-idle=1
plugin.jdbcdatasource.ataccama.one.driver.sqlite.maximum-pool-size=5
plugin.jdbcdatasource.ataccama.one.driver.sqlite.profiling-sample-limit=100000
plugin.jdbcdatasource.ataccama.one.driver.sqlite.full-select-query-pattern=SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.sqlite.preview-query-pattern=SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.sqlite.row-count-query-pattern=SELECT COUNT(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.sqlite.sampling-query-pattern=SELECT {columns} FROM {table} WHERE RAND() < {percentageLimit};
plugin.jdbcdatasource.ataccama.one.driver.sqlite.query-quotation-mark=`
plugin.jdbcdatasource.ataccama.one.driver.sqlite.dsl-query-preview-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT {previewLimit}
plugin.jdbcdatasource.ataccama.one.driver.sqlite.dsl-query-import-metadata-query-pattern = SELECT * FROM ({dslQuery}) dslQuery LIMIT 0

Azure Data Explorer (ADX) configuration

Available in version 14.5.1 and later.

Azure Data Explorer (ADX) JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.adx.name=Azure Data Explorer
plugin.jdbcdatasource.ataccama.one.driver.adx.connection-pattern=jdbc:sqlserver://<hostname>:<port>;databaseName=<database>
plugin.jdbcdatasource.ataccama.one.driver.adx.driver-class-path = mssql-jdbc*.jar
plugin.jdbcdatasource.ataccama.one.driver.adx.additional-classpath-files = mssqlLibs/;
plugin.jdbcdatasource.ataccama.one.driver.adx.provider-class = com.ataccama.dpe.plugin.dataconnect.jdbc.provider.mssql.ADXDataSourceClientProvider
plugin.jdbcdatasource.ataccama.one.driver.adx.full-select-query-pattern = SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.adx.preview-query-pattern = SELECT TOP {previewLimit} {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.adx.dsl-query-preview-query-pattern = SELECT TOP {previewLimit} * FROM ({dslQuery}) AS dslQuery
plugin.jdbcdatasource.ataccama.one.driver.adx.dsl-query-import-metadata-query-pattern = SELECT TOP 0 * FROM ({dslQuery}) AS dslQuery
plugin.jdbcdatasource.ataccama.one.driver.adx.row-count-query-pattern = SELECT COUNT_BIG(*) FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.adx.sampling-query-pattern = IF (SELECT TABLE_TYPE FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '{simpleTable}' AND TABLE_SCHEMA = '{schema}') = 'BASE TABLE' \
 EXEC('SELECT TOP {limit} {columns} FROM {table} TABLESAMPLE (CASE WHEN {percentageLimit}*110 > 100 THEN 100 ELSE {percentageLimit}*110 END PERCENT)') \
 ELSE SELECT TOP {limit} {columns} FROM (select t1.*, RAND(CHECKSUM(NEWID())) as ____rnd from {table} as t1) as t1 where ____rnd < {percentageLimit} * 1.1
plugin.jdbcdatasource.ataccama.one.driver.adx.disallowed-indexes-table-types=SYNONYM
plugin.jdbcdatasource.ataccama.one.driver.adx.bulk-import-table-count-threshold=30
plugin.jdbcdatasource.ataccama.one.driver.adx.row-number-column=ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) as ____rno
#plugin.jdbcdatasource.ataccama.one.driver.adx.allow-partial-listing=false
plugin.jdbcdatasource.ataccama.one.driver.adx.schema-exclude-pattern=
plugin.jdbcdatasource.ataccama.one.driver.adx.schema-include-pattern=
plugin.jdbcdatasource.ataccama.one.driver.adx.table-exclude-pattern=
plugin.jdbcdatasource.ataccama.one.driver.adx.table-include-pattern=
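For instance, to restrict metadata import to a single schema while skipping staging tables, the patterns could be set along these lines (the values are hypothetical, and this sketch assumes the patterns accept regular expressions):

```properties
# Import only the 'dbo' schema and skip tables whose names start with 'stg_' (hypothetical values):
plugin.jdbcdatasource.ataccama.one.driver.adx.schema-include-pattern=dbo
plugin.jdbcdatasource.ataccama.one.driver.adx.table-exclude-pattern=stg_.*
```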

Troubleshooting

Microsoft SQL Server SSL issues

Connections to older versions of Microsoft SQL Server might fail due to JDK algorithm restrictions. If this happens, SSL-related errors appear in the DPE logs.

Possible solutions:

  • If present, remove TLSv1 and TLSv1.1 from the list of values of the jdk.tls.disabledAlgorithms property in the java.security and java.config files (usually found in /etc/crypto-policies/back-end). If java.config is not present in your distribution, editing only the java.security file is sufficient.

  • Additionally, in java.config, if present, allow a key length of 1024 bits by changing the keySize value in jdk.certpath.disabledAlgorithms and jdk.tls.disabledAlgorithms. Key length restrictions on these algorithms are typically found in Red Hat Enterprise Linux distributions.

Example using the jdk.tls.disabledAlgorithms property:

Before fixes
jdk.tls.disabledAlgorithms=DH keySize < 768, TLSv1.1, TLSv1, SSLv3, SSLv2, DHE_DSS, RSA_EXPORT, DHE_DSS_EXPORT, DHE_RSA_EXPORT, DH_DSS_EXPORT, DH_RSA_EXPORT, DH_anon, ECDH_anon, DH_RSA, DH_DSS, ECDH, 3DES_EDE_CBC, DES_CBC, RC4_40, RC4_128, DES40_CBC, RC2, HmacMD5
After fixes
jdk.tls.disabledAlgorithms=DH keySize < 1024, SSLv3, SSLv2, DHE_DSS, RSA_EXPORT, DHE_DSS_EXPORT, DHE_RSA_EXPORT, DH_DSS_EXPORT, DH_RSA_EXPORT, DH_anon, ECDH_anon, DH_RSA, DH_DSS, ECDH, 3DES_EDE_CBC, DES_CBC, RC4_40, RC4_128, DES40_CBC, RC2, HmacMD5

Dremio error: Failure getting metadata

If you encounter the error Failure getting metadata: Number of fields in dataset exceeded the maximum number of fields of 800, review and modify the wide table limits configured in Dremio. For more information, refer to the official Dremio documentation, specifically Creating and Querying Wide Tables.

Microsoft SQL Server connection error

You might encounter an error when connecting to a Microsoft SQL Server data source due to a failure to establish a secure connection using Secure Sockets Layer (SSL) encryption.

Server connection error message
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

To fix this, add ;encrypt=false to the following property:

MS SQL JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.mssql.connection-pattern = jdbc:sqlserver://<hostname>:<port>;databaseName=<database>;encrypt=false

Azure Synapse truncation error

When you preview or profile data or run monitoring projects, you might see an error message like this:

Truncation error message
String or binary data would be truncated while reading column of type 'VARCHAR(8000)'. Check ANSI_WARNINGS

To resolve this error, add SET ANSI_WARNINGS OFF; to the following properties, as shown here:

Azure Synapse Analytics JDBC properties
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.full-select-query-pattern = SET ANSI_WARNINGS OFF; SELECT {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.preview-query-pattern = SET ANSI_WARNINGS OFF; SELECT TOP {previewLimit} {columns} FROM {table}
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.dsl-query-preview-query-pattern = SET ANSI_WARNINGS OFF; SELECT TOP {previewLimit} * FROM ({dslQuery}) AS dslQuery
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.dsl-query-import-metadata-query-pattern = SET ANSI_WARNINGS OFF; SELECT TOP 0 * FROM ({dslQuery}) AS dslQuery
plugin.jdbcdatasource.ataccama.one.driver.azure-synapse.sampling-query-pattern = SET ANSI_WARNINGS OFF; SELECT TOP {limit} {columns} FROM {table}

Snowflake error: Database does not exist or not authorized

When trying to preview, import, or profile a Snowflake view, you might see an error message like this:

Snowflake database does not exist or is not authorized
Failure during expansion of view '<view>': SQL compilation error: Database '<database>' does not exist or not authorized.

This means one of the following:

  • The view you wanted to work with references a table for which you don’t have the necessary access rights. We recommend contacting your database admin to verify that your permissions are correctly assigned.

  • The view is based on a table from a database that no longer exists.

Not all data counted when Snowflake database is queried

When using the default implementation, the number of records that JDBC metadata queries can return is limited. As a result, even if your Snowflake schema contains, for example, 200,000 tables, the reported count is only about 10,000.

To resolve this problem, set the property jdbc.snowflake.useInformationSchemaQueries to true, in which case Snowflake information schema queries replace the JDBC metadata queries.

This property is set via JAVA_OPTS, as follows:

plugin.executor-launch-model.ataccama.one.launch-type-properties.LOCAL.env.JAVA_OPTS=-Djdbc.snowflake.useInformationSchemaQueries=true

Connecting to Databricks using OpenJDK 17 error

When trying to connect to Databricks using OpenJDK 17, the following error might occur:

Databricks error message
[Databricks][JDBCDriver](500540) Error caught in BackgroundFetcher. Foreground thread ID: 177. Background thread ID: 211. Error caught: Could not initialize class com.databricks.client.jdbc42.internal.apache.arrow.memory.util.MemoryUtil.

To resolve this problem, add EnableArrow=0 to your JDBC URL.
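For example, a Databricks JDBC URL with Arrow disabled might look like this (the hostname and HTTP path are placeholders, and the exact connection string format depends on your Databricks JDBC driver version):

```text
jdbc:databricks://<hostname>:443;httpPath=<http_path>;EnableArrow=0
```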

BigQuery connection issues

To troubleshoot BigQuery connection issues, try the following:

  • Consult the ODBC and JDBC drivers for BigQuery documentation.

  • Consult the JDBC driver documentation, which is included as a PDF in the downloaded JDBC driver ZIP file.

  • The recommended predefined Service Account roles used to access BigQuery data are BigQuery Data Viewer (roles/bigquery.dataViewer) and BigQuery Job User (roles/bigquery.jobUser).

  • Pay attention to the Large Result Set Support section and the related AllowLargeResults, LargeResultDataset, and LargeResultTable properties.

  • If you are using a large result set:

    • Make sure that the LargeResultDataset option is specified and that the specified dataset exists and is in the same region as the queried table.

    • Otherwise, the Service Account used must have bigquery.datasets.create permissions on the BigQuery instance specified by the ProjectId property.

    • Note that setting the LargeResultTable value causes all results to be written to the same table. However, the table creation time is not shown when the value is set.

  • If you are setting the OAuthPvtKeyPath driver property as a file path (that is, the key file is present on the DPE or the Runtime server):

    • Set it directly in the JDBC string, not as a driver property object.

    • Make sure the user running ONE (DPE or Runtime server) has read access to the key files (.json and .p12).

  • Use the LogLevel=6 and LogPath=<path> JDBC driver properties to create <path>/BigQuery…log files with detailed information about connecting to BigQuery, metadata processing, and SQL command execution.
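For example, the logging properties can be appended to the BigQuery JDBC URL (the URL below is a hypothetical sketch with placeholder values; consult the driver PDF for the exact connection string format):

```text
jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=<project>;LogLevel=6;LogPath=/tmp/bq-logs
```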

Azure SQL database token failure

Connecting to Azure SQL database fails with the following error:

MSI Token failure
MSI Token failure: Failed to acquire access token from IMDS, verify your clientId

This error means that the clientId is wrong or that the identity is missing required privileges. Verify the following in your environment:

  • Make sure that the Azure AD account is created within the Azure SQL server. The following example shows what the creation SQL queries look like. The created user appears in the sys.sysusers table.

    Creation SQL queries
    CREATE USER name_of_the_user_managed_identity FROM EXTERNAL PROVIDER;
    ALTER ROLE db_datareader ADD MEMBER name_of_the_user_managed_identity;
  • Make sure that in your Azure SQL server, the following property is enabled on the Set server firewall tab: Allow Azure services and resources to access this server.

  • On the virtual machine (VM) where your DPE is running, check Identity > User assigned. The managed identity should match the one in the JDBC string.

    Sample JDBC string
    jdbc:sqlserver://host.database.windows.net:1433;encrypt=true;database=yourDB;trustServerCertificate=false;loginTimeout=30;hostNameInCertificate=*.database.windows.net;authentication=ActiveDirectoryMSI;msiClientId=abcdxyz;
  • In most cases, completing the previous steps should be enough for DPE or any other external application installed inside the VM to successfully connect to Azure SQL. If the problem persists, try enabling the system-assigned managed identity instead and removing msiClientId from the JDBC string.
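With a system-assigned managed identity, the JDBC string is the same as the sample above but without the msiClientId parameter (hostname and database name are placeholders):

```text
jdbc:sqlserver://host.database.windows.net:1433;encrypt=true;database=yourDB;trustServerCertificate=false;loginTimeout=30;hostNameInCertificate=*.database.windows.net;authentication=ActiveDirectoryMSI;
```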
