
Matillion ETL Data Model for Spark SQL
Version - 18.0.6886

Note: Data models are true for the latest version of Matillion ETL. If you are on an older version or using a component from an old job, your experience may differ.




Connection String Options

  1. Auth Scheme
  2. Auto Cache
  3. Batch Size
  4. Cache Connection
  5. Cache Driver
  6. Cache Location
  7. Cache Metadata
  8. Cache Query Result
  9. Connection Life Time
  10. Connect On Open
  11. Database
  12. Firewall Password
  13. Firewall Port
  14. Firewall Server
  15. Firewall Type
  16. Firewall User
  17. HTTP Path
  18. Kerberos KDC
  19. Kerberos Realm
  20. Kerberos SPN
  21. Location
  22. Logfile
  23. Max Log File Size
  24. Max Rows
  25. Offline
  26. Other
  27. Password
  28. Pool Idle Timeout
  29. Pool Max Size
  30. Pool Min Size
  31. Pool Wait Time
  32. Port
  33. Protocol Version
  34. Proxy Auth Scheme
  35. Proxy Auto Detect
  36. Proxy Exceptions
  37. Proxy Password
  38. Proxy Port
  39. Proxy Server
  40. Proxy SSL Type
  41. Proxy User
  42. Pseudo Columns
  43. Query Passthrough
  44. Readonly
  45. RTK
  46. Server
  47. Server Configurations
  48. SSL Server Cert
  49. Support Enhanced SQL
  50. Tables
  51. Timeout
  52. Transport Mode
  53. Use Connection Pooling
  54. Use Insert Select Syntax
  55. User
  56. Use Show Databases Query
  57. Use Show Tables Query
  58. Use SSL
  59. Verbosity
  60. Views



Auth Scheme

The authentication scheme used. Accepted entries are PLAIN, LDAP, NONE, and KERBEROS.
Data Type

string

Default Value

"PLAIN"

Remarks

The AuthScheme used to authenticate with SparkSQL.
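For example, a connection string along the following lines (the server address and credentials are placeholders) selects LDAP authentication:

jdbc:sparksql:Server=127.0.0.1;AuthScheme=LDAP;User=myuser;Password=mypassword;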





Auto Cache

Automatically caches the results of SELECT queries into a cache database specified by either CacheLocation or both of CacheConnection and CacheDriver. Configured with CacheQueryResult.
Data Type

bool

Default Value

false

Remarks

When AutoCache is set, the driver automatically maintains a cache of your table's data in the database of your choice. With CacheQueryResult additionally set, the driver updates the cache when you execute a SELECT query and returns the live results from the SparkSQL data.

Explicitly Caching SELECT Results

CacheQueryResult is a way to query SparkSQL in real time while maintaining a cache for offline use. Set CacheQueryResult to update the cache whenever you execute a SELECT statement. When you execute a SELECT statement with AutoCache and CacheQueryResult set, the driver executes the query to the remote data and caches the results; rows that already exist are overwritten. That is, SELECT statements are used to create and refresh the cache, not to query it. Data manipulation commands are executed to the remote data as well.

To query the cached data, set the Offline property. If you need to query the cached data in an online connection, you can append #CACHE to the table name. For example:

SELECT * FROM [Customers#CACHE]

Setting the Caching Database

When AutoCache is set, the driver caches to a simple, file-based cache. You can configure its location with CacheLocation, or cache to a different database with CacheDriver and CacheConnection.
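For example, a connection string along these lines (the cache path and server address are placeholders) caches SELECT results to the file-based cache:

jdbc:sparksql:AutoCache=true;CacheQueryResult=true;CacheLocation='C:/Temp/cachedir';Server=127.0.0.1;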





Batch Size

The maximum size of each batch operation to submit.
Data Type

int

Default Value

0

Remarks

When BatchSize is set to a value greater than 0, the batch operation will split the entire batch into separate batches of size BatchSize. The split batches will then be submitted to the server individually. This is useful when the server has limitations on the size of the request that can be submitted.

Setting BatchSize to 0 will submit the entire batch as specified.
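As an illustrative sketch (the table, columns, and connection values are hypothetical and not part of this data model), the following Java fragment submits a JDBC batch; with BatchSize=500 in the connection string, the driver splits the submitted rows into requests of at most 500 rows each:

import java.sql.*;

public class BatchInsertExample {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection string; BatchSize=500 caps each server request at 500 rows.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sparksql:Server=127.0.0.1;BatchSize=500;");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO Customers (Name, City) VALUES (?, ?)")) {
            String[][] rows = { {"Alice", "Berlin"}, {"Bob", "Boston"} }; // hypothetical data
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
            }
            ps.executeBatch(); // submitted to the server in chunks of at most BatchSize rows
        }
    }
}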





Cache Connection

The connection string for the cache database. This property is always used in conjunction with CacheDriver. Setting both properties will override the value set for CacheLocation for caching data.
Data Type

string

Default Value

""

Remarks

The cache database is determined based on the CacheDriver and CacheConnection properties. Both properties are required to use the cache database. Examples of common cache database settings can be found below. For more information on setting the caching database's driver, refer to CacheDriver.

The connection string specified in the CacheConnection property is passed directly to the underlying CacheDriver. Consult the documentation for the specific JDBC driver for more information on the available properties. Make sure to include the JDBC driver in your application's classpath.

Derby and Java DB

The driver simplifies caching to Derby, only requiring you to set the CacheLocation property to make a basic connection.

Alternatively, you can configure the connection to Derby manually using CacheDriver and CacheConnection. Below is the Derby JDBC URL syntax:

jdbc:derby:[subsubprotocol:][databaseName][;attribute=value[;attribute=value] ... ]
For example, to cache to an in-memory database, use the following:
jdbc:derby:memory

SQLite

To cache to SQLite, you can use the SQLite JDBC driver. Below is the syntax of the JDBC URL:

jdbc:sqlite:dataSource

MySQL

The installation includes the CData JDBC Driver for MySQL. Below is an example JDBC URL:

jdbc:mysql:User=root;Password=root;Server=localhost;Port=3306;Database=cache

SQL Server

The JDBC URL for the Microsoft JDBC Driver for SQL Server has the following syntax:

jdbc:sqlserver://[serverName[\instance][:port]][;database=databaseName][;property=value[;property=value] ... ]
For example:
jdbc:sqlserver://localhost\sqlexpress:1433;integratedSecurity=true
Oracle

Below is the conventional JDBC URL syntax for the Oracle JDBC Thin driver:

jdbc:oracle:thin:[userId/password]@[//]host[[:port][:sid]]
For example:
jdbc:oracle:thin:scott/tiger@myhost:1521:orcl
PostgreSQL

Below is the JDBC URL syntax for the official PostgreSQL JDBC driver:

jdbc:postgresql:[//[host[:port]]/]database[[?option=value][[&option=value][&option=value] ... ]]
For example, the following connection string connects to a database on the default host (localhost) and port (5432):
jdbc:postgresql:postgres





Cache Driver

The database driver to be used to cache data.
Data Type

string

Default Value

""

Remarks

You can cache to any database for which you have a JDBC driver, including CData JDBC drivers.

The cache database is determined based on the CacheDriver and CacheConnection properties. The CacheDriver is the name of the JDBC driver class that you would like to use to cache data.

Note that you must also add the CacheDriver JAR to the classpath.

The following examples show how to cache to several major databases. Refer to CacheConnection for more information on the JDBC URL syntax and typical connection properties.

Derby and Java DB

The driver simplifies Derby configuration. Java DB is the Oracle distribution of Derby. The JAR is shipped in the JDK. You can find the JAR, derby.jar, in the db subfolder of the JDK installation. In most caching scenarios, you need to specify only the following, after adding derby.jar to the classpath.

jdbc:sparksql:CacheLocation='c:/Temp/cachedir';Server=127.0.0.1;
To customize the Derby JDBC URL, use CacheDriver and CacheConnection. For example, to cache to an in-memory database, use a JDBC URL like the following:
jdbc:sparksql:CacheDriver=org.apache.derby.jdbc.EmbeddedDriver;CacheConnection='jdbc:derby:memory';Server=127.0.0.1;
SQLite

Below is a JDBC URL for the SQLite JDBC driver:

jdbc:sparksql:CacheDriver=org.sqlite.JDBC;CacheConnection='jdbc:sqlite:C:/Temp/sqlite.db';Server=127.0.0.1;
MySQL

Below is a JDBC URL for the included CData JDBC Driver for MySQL:

  jdbc:sparksql:Cache Driver=cdata.jdbc.mysql.MySQLDriver;Cache Connection='jdbc:mysql:Server=localhost;Port=3306;Database=cache;User=root;Password=123456';Server=127.0.0.1;
  
The CData JDBC Driver for MySQL is located in the lib subfolder of the CData JDBC Driver for SparkSQL 2018 installation directory.
SQL Server

The following JDBC URL uses the Microsoft JDBC Driver for SQL Server:

jdbc:sparksql:Cache Driver=com.microsoft.sqlserver.jdbc.SQLServerDriver;Cache Connection='jdbc:sqlserver://localhost\sqlexpress:7437;user=sa;password=123456;databaseName=Cache';Server=127.0.0.1;
Oracle

Below is a JDBC URL for the Oracle Thin Client:

jdbc:sparksql:Cache Driver=oracle.jdbc.driver.OracleDriver;CacheConnection='jdbc:oracle:thin:scott/tiger@localhost:1521:orcldb';Server=127.0.0.1;
PostgreSQL

The following JDBC URL uses the official PostgreSQL JDBC driver:

jdbc:sparksql:CacheDriver=org.postgresql.Driver;CacheConnection='jdbc:postgresql://localhost:5433/postgres?user=postgres&password=admin';Server=127.0.0.1;





Cache Location

Specifies the path to the cache when caching to a file.
Data Type

string

Default Value

""

Remarks

The CacheLocation is a simple, file-based cache. The driver uses Java DB, Oracle's distribution of the Derby database. To cache to Java DB, you will need to add the Java DB JAR to the classpath. The JAR, derby.jar, is shipped in the JDK and located in the db subfolder of the JDK installation.

CacheLocation defaults to the directory specified by the Location setting.






Cache Metadata

Whether or not to cache the table metadata to a file store.
Data Type

bool

Default Value

false

Remarks

As you execute queries with this property set, table metadata in the SparkSQL catalog is cached to the file store specified by CacheLocation if set, or to the user's home directory otherwise. A table's metadata will be retrieved only once, when the table is queried for the first time.
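For example (the server address and cache path are placeholders):

jdbc:sparksql:CacheMetadata=true;CacheLocation='C:/Temp/metadatacache';Server=127.0.0.1;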

When to Use CacheMetadata

The driver automatically persists metadata in memory for up to two hours when you first discover the metadata for a table or view, so CacheMetadata is generally not required. CacheMetadata becomes useful when metadata operations are expensive, such as when you are working with large amounts of metadata or when you have many short-lived connections.

When Not to Use CacheMetadata





Cache Query Result

With AutoCache set, caches each row read from a SELECT query's results. Without this, the provider will attempt to fully replicate the table before executing the actual query against the replication database.
Data Type

bool

Default Value

false

Remarks

When CacheQueryResult and AutoCache are set, the rows returned from a SELECT query are cached in the cache database. The driver handles caching in a streaming fashion with each row being processed into the cache database from the original result set as you read the row from the returned ResultSet object. This ensures that the live data is not queried twice. Note that any rows you do not read from the returned ResultSet will not be updated in the cache.
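The sketch below (connection values and table name are illustrative) reads every row of the returned ResultSet so that the full result is streamed into the cache:

import java.sql.*;

public class CacheReadExample {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sparksql:AutoCache=true;CacheQueryResult=true;"
                 + "CacheLocation='C:/Temp/cachedir';Server=127.0.0.1;");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM Customers")) {
            while (rs.next()) { // each row read here is also written to the cache database
                System.out.println(rs.getString(1));
            }
        }
    }
}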





Connection Life Time

The maximum lifetime of a connection in seconds. Once the time has elapsed, the connection object is disposed.
Data Type

string

Default Value

"0"

Remarks

The maximum lifetime of a connection in seconds. Once the time has elapsed, the connection object is disposed. The default is 0 which indicates there is no limit to the connection lifetime.





Connect On Open

Specifies whether to connect to SparkSQL when the connection is opened.
Data Type

bool

Default Value

false

Remarks

When set to 'true', a connection will be made to SparkSQL when the connection is opened. This property enables the 'Test Connection' feature available in various database tools.

This feature acts as a NOOP command as it is used to verify a connection can be made to SparkSQL and nothing from this initial connection is maintained.

Setting this property to 'false' may provide performance improvements (depending upon the number of times a connection is opened).





Database

The name of the SparkSQL database.
Data Type

string

Default Value

""

Remarks

The name of the SparkSQL database.





Firewall Password

A password used to authenticate to a proxy-based firewall.
Data Type

string

Default Value

""

Remarks

This property is passed to the proxy specified by FirewallServer and FirewallPort, following the authentication method specified by FirewallType.





Firewall Port

The TCP port for a proxy-based firewall.
Data Type

string

Default Value

""

Remarks

This specifies the TCP port for a proxy allowing traversal of a firewall. Use FirewallServer to specify the name or IP address. Specify the protocol with FirewallType.





Firewall Server

The name or IP address of a proxy-based firewall.
Data Type

string

Default Value

""

Remarks

This property specifies the IP address, DNS name, or host name of a proxy allowing traversal of a firewall. The protocol is specified by FirewallType: use this property to connect through SOCKS or to tunnel the connection. Use ProxyServer to connect to an HTTP proxy.

Note that the driver uses the system proxy by default. To use a different proxy, set ProxyAutoDetect to false.





Firewall Type

The protocol used by a proxy-based firewall.
Data Type

string

Default Value

"NONE"

Remarks

This property specifies the protocol that the driver will use to tunnel traffic through the FirewallServer proxy. Note that by default the driver connects to the system proxy; to disable this behavior and connect to one of the following proxy types, set ProxyAutoDetect to false.

TUNNEL (default port 80): The driver opens a connection to SparkSQL and traffic flows back and forth through the proxy.
SOCKS4 (default port 1080): The driver sends data through the SOCKS 4 proxy specified by FirewallServer and FirewallPort and passes the FirewallUser value to the proxy, which determines if the connection request should be granted.
SOCKS5 (default port 1080): The driver sends data through the SOCKS 5 proxy specified by FirewallServer and FirewallPort. If your proxy requires authentication, set FirewallUser and FirewallPassword to credentials the proxy recognizes.

To connect to HTTP proxies, use ProxyServer and ProxyPort. To authenticate to HTTP proxies, use ProxyAuthScheme, ProxyUser, and ProxyPassword.
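For example, the following connection string (addresses and credentials are placeholders) routes traffic through a SOCKS 5 proxy:

jdbc:sparksql:Server=127.0.0.1;ProxyAutoDetect=false;FirewallType=SOCKS5;FirewallServer=192.168.1.100;FirewallPort=1080;FirewallUser=socksuser;FirewallPassword=sockspass;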





Firewall User

The user name to use to authenticate with a proxy-based firewall.
Data Type

string

Default Value

""

Remarks

The FirewallUser and FirewallPassword properties are used to authenticate against the proxy specified in FirewallServer and FirewallPort, following the authentication method specified in FirewallType.





HTTP Path

The path component of the URL endpoint when using HTTP TransportMode.
Data Type

string

Default Value

"cliservice"

Remarks

This property is used to specify the path component of the URL endpoint when using HTTP TransportMode.

This property should be set to the value specified in the 'hive.server2.thrift.http.path' property of your Hive configuration file (hive-site.xml).
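For example, to connect over HTTP transport (the server address is a placeholder, and the space-free property spelling HTTPPath is assumed here):

jdbc:sparksql:Server=127.0.0.1;TransportMode=HTTP;HTTPPath=cliservice;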





Kerberos KDC

The Kerberos Key Distribution Center (KDC) service used to authenticate the user.
Data Type

string

Default Value

""

Remarks

The Kerberos properties are used when using Windows Authentication. The driver will request session tickets and temporary session keys from the Kerberos Key Distribution Center (KDC) service, which is conventionally colocated with the domain controller. If Kerberos KDC is not specified, the driver will attempt to detect it automatically.

Note: Windows authentication is supported in JRE 1.6 and above only.





Kerberos Realm

The Kerberos Realm used to authenticate the user with.
Data Type

string

Default Value

""

Remarks

The Kerberos properties are used when using SPNEGO or Windows Authentication. The Kerberos Realm is used to authenticate the user with the Kerberos Key Distribution Center (KDC) service. The Kerberos Realm can be configured by an administrator to be any string, but conventionally it is based on the domain name. If Kerberos Realm is not specified, the driver will attempt to detect it automatically.

Note: Kerberos-based authentication is supported in JRE 1.6 and above only.





Kerberos SPN

The Service Principal Name for the Kerberos Domain Controller.
Data Type

string

Default Value

""

Remarks

If the Service Principal Name on the Kerberos Domain Controller is not the same as the URL that you are authenticating to, set the Service Principal Name here.
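For example (the KDC host, realm, and SPN values are placeholders, and the space-free property spellings KerberosKDC, KerberosRealm, and KerberosSPN are assumed here):

jdbc:sparksql:Server=127.0.0.1;AuthScheme=KERBEROS;KerberosKDC=kdc.example.com;KerberosRealm=EXAMPLE.COM;KerberosSPN=hive/server.example.com;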





Location

A path to the directory that contains the schema files defining tables, views, and stored procedures.
Data Type

string

Default Value

""

Remarks

The path to a directory which contains the schema files for the driver (.rsd files for tables and views, .rsb files for stored procedures). The Location property is only needed if you would like to customize definitions (e.g., change a column name, ignore a column, etc.) or extend the data model with new tables, views, or stored procedures.

The schema files are deployed alongside the driver assemblies. You must also ensure that Location points to the folder that contains the schema files. The folder location can be a relative path from the location of the executable.





Logfile

A path to the log file.
Data Type

string

Default Value

""

Remarks

For more control over what is written to the log file, take a look at Verbosity.
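For example, a connection string along these lines (the path is a placeholder) writes request-level logging to a file:

jdbc:sparksql:Server=127.0.0.1;Logfile='C:/Temp/sparksql.log';Verbosity=3;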





Max Log File Size

A string specifying the maximum size in bytes for a log file (ex: 10MB). When the limit is hit, a new log is created in the same folder with the date and time appended to the end.
Data Type

string

Default Value

"100MB"

Remarks

A string specifying the maximum size in bytes for a log file (ex: 10MB). When the limit is hit, a new log is created in the same folder with the date and time appended to the end. The default limit is 100MB. Values lower than 100kB will use 100kB as the value instead.





Max Rows

Limits the number of rows returned when no aggregation or group by is used in the query. This helps avoid performance issues at design time.
Data Type

string

Default Value

"-1"

Remarks

Limits the number of rows returned when no aggregation or group by is used in the query. This helps avoid performance issues at design time.





Offline

Use offline mode to get the data from the cache instead of the live source.
Data Type

bool

Default Value

false

Remarks

When Offline is set to TRUE, all queries execute against the cache as opposed to the live data source. In this mode, certain queries like INSERT, UPDATE, DELETE, and CACHE are not allowed.
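For example, the following connection string (the cache path is a placeholder) serves all queries from a previously populated cache:

jdbc:sparksql:Offline=true;CacheLocation='C:/Temp/cachedir';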





Other

Hidden properties needed only in specific use cases.
Data Type

string

Default Value

""

Remarks

The properties listed below are available for specific use cases. Normal driver use cases and functionality should not require these properties.

Specify multiple properties in a semicolon-separated list; an example connection string follows the lists below.

Caching Configuration

CachePartial=True: Caches only a subset of columns, which you can specify in your query.
QueryPassthrough=True: Passes the specified query to the cache database instead of using the SQL parser of the driver.

Integration and Formatting

DefaultColumnSize: Sets the default length of string fields when the data source does not provide column length in the metadata. The default value is 2000.
ConvertDateTimeToGMT: Whether to convert date-time values to GMT, instead of the local time of the machine.
RecordToFile=filename: Records the underlying socket data transfer to the specified file.
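For example (the values are illustrative, and wrapping the list in quotes follows the quoting convention used for other properties in this document):

jdbc:sparksql:Server=127.0.0.1;Other='DefaultColumnSize=4000;ConvertDateTimeToGMT=true';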





Password

The password used to authenticate with SparkSQL.
Data Type

string

Default Value

""

Remarks

The password used to authenticate with SparkSQL.





Pool Idle Timeout

The allowed idle time for a connection before it is closed.
Data Type

string

Default Value

""

Remarks

The allowed idle time a connection can remain in the pool until the connection is closed. The default is 60 seconds.





Pool Max Size

The maximum connections in the pool.
Data Type

string

Default Value

"100"

Remarks

The maximum connections in the pool. The default is 100. To disable this property, set the property value to 0 or less.





Pool Min Size

The minimum number of connections in the pool.
Data Type

string

Default Value

"1"

Remarks

The minimum number of connections in the pool. The default is 1.





Pool Wait Time

The max seconds to wait for an available connection.
Data Type

string

Default Value

""

Remarks

The max seconds to wait for a connection to become available. If a new connection request is waiting for an available connection and exceeds this time, an error is thrown. By default, new requests wait forever for an available connection.





Port

The port for the SparkSQL database.
Data Type

string

Default Value

"27017"

Remarks

The port for the SparkSQL database.





Protocol Version

The Protocol Version used to authenticate with SparkSQL.
Data Type

string

Default Value

"8"

Remarks

The Protocol Version used to authenticate with SparkSQL.





Proxy Auth Scheme

The authentication type to use to authenticate to the ProxyServer proxy.
Data Type

string

Default Value

"BASIC"

Remarks

This value specifies the authentication type to use to authenticate to the HTTP proxy specified by ProxyServer and ProxyPort.

Note that the driver will use the system proxy settings by default, without further configuration needed; if you want to connect to another proxy, you will need to set ProxyAutoDetect to false, in addition to ProxyServer and ProxyPort. To authenticate, set ProxyAuthScheme and set ProxyUser and ProxyPassword, if needed.

The authentication type can be one of the following:

If you need to use another authentication type, such as SOCKS 5 authentication, see FirewallType.





Proxy Auto Detect

This indicates whether to use the system proxy settings or not. Set ProxyAutoDetect to FALSE to use custom proxy settings. This takes precedence over other proxy settings.
Data Type

bool

Default Value

true

Remarks

By default, the driver uses the system HTTP proxy. Set this to false if you want to connect to another proxy.

To connect to an HTTP proxy, see ProxyServer.

For other proxies, such as SOCKS or tunneling, see FirewallType.





Proxy Exceptions

A semicolon separated list of hosts or IPs that will be exempt from connecting through the ProxyServer .
Data Type

string

Default Value

""

Remarks

The ProxyServer will be used for all addresses, except for addresses defined in this property. Use semicolons to separate entries.

Note that the driver will use the system proxy settings by default, without further configuration needed; if you want to explicitly configure proxy exceptions for this connection, you will need to set ProxyAutoDetect to false, and configure ProxyServer and ProxyPort. To authenticate, set ProxyAuthScheme and set ProxyUser and ProxyPassword, if needed.





Proxy Password

A password to be used to authenticate to the ProxyServer proxy.
Data Type

string

Default Value

""

Remarks

This property is used to authenticate to an HTTP proxy server that supports NTLM (Windows), Kerberos, or HTTP authentication. To specify the HTTP proxy, you can set ProxyServer and ProxyPort. To specify the authentication type, set ProxyAuthScheme.

If you are using HTTP authentication, additionally set ProxyUser and ProxyPassword to a user name and password recognized by the HTTP proxy.

If you are using NTLM authentication, set ProxyUser and ProxyPassword to your Windows user name and password. You may also need these to complete Kerberos authentication.

For SOCKS 5 authentication or tunneling, see FirewallType.

By default, the driver uses the system proxy. If you want to connect to another proxy, set ProxyAutoDetect to false.
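For example, to authenticate to an explicitly configured HTTP proxy (addresses and credentials are placeholders):

jdbc:sparksql:Server=127.0.0.1;ProxyAutoDetect=false;ProxyServer=proxy.example.com;ProxyPort=8080;ProxyAuthScheme=BASIC;ProxyUser=proxyuser;ProxyPassword=proxypass;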





Proxy Port

The TCP port the ProxyServer proxy is running on.
Data Type

string

Default Value

"80"

Remarks

The port the HTTP proxy is running on that you want to redirect HTTP traffic through. Specify the HTTP proxy in ProxyServer. For other proxy types, see FirewallType.





Proxy Server

The hostname or IP address of a proxy to route HTTP traffic through.
Data Type

string

Default Value

""

Remarks

The hostname or IP address of a proxy to route HTTP traffic through. The driver can use the HTTP, Windows (NTLM), or Kerberos authentication types to authenticate to an HTTP proxy.

If you need to connect through a SOCKS proxy or tunnel the connection, see FirewallType.

By default, the driver uses the system proxy. If you need to use another proxy, set ProxyAutoDetect to false.





Proxy SSL Type

The SSL type to use when connecting to the ProxyServer proxy.
Data Type

string

Default Value

"AUTO"

Remarks

This property determines when to use SSL for the connection to an HTTP proxy specified by ProxyServer. This value can be AUTO, ALWAYS, NEVER, or TUNNEL. The applicable values are the following:

AUTO: Default setting. If the URL is an HTTPS URL, the driver will use the TUNNEL option. If the URL is an HTTP URL, the component will use the NEVER option.
ALWAYS: The connection is always SSL enabled.
NEVER: The connection is not SSL enabled.
TUNNEL: The connection is through a tunneling proxy: the proxy server opens a connection to the remote host and traffic flows back and forth through the proxy.





Proxy User

A user name to be used to authenticate to the ProxyServer proxy.
Data Type

string

Default Value

""

Remarks

The ProxyUser and ProxyPassword options are used to connect and authenticate against the HTTP proxy specified in ProxyServer.

You can select one of the available authentication types in ProxyAuthScheme. If you are using HTTP authentication, set this to the username of a user recognized by the HTTP proxy. If you are using Windows or Kerberos authentication, set this property to a username in one of the following formats:

user@domain
domain\user





Pseudo Columns

Indicates whether or not to include pseudo columns as columns to the table.
Data Type

string

Default Value

""

Remarks

This setting is particularly helpful in Entity Framework, which does not allow you to set a value for a pseudo column unless it is a table column. The value of this connection setting is of the format "Table1=Column1, Table1=Column2, Table2=Column3". You can use the "*" character to include all tables and all columns; i.e., "*=*".





Query Passthrough

This option passes the query to SparkSQL as-is.
Data Type

bool

Default Value

false

Remarks

When QueryPassthrough is set to true, the driver passes queries directly to SparkSQL, bypassing the driver's SQL parsing and translation.
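For example, with passthrough enabled (the server address is a placeholder):

jdbc:sparksql:QueryPassthrough=true;Server=127.0.0.1;

A dialect-specific statement such as SELECT * FROM customers DISTRIBUTE BY city (table name illustrative) would then be sent to SparkSQL exactly as written.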





Readonly

You can use this property to enforce read-only access to SparkSQL from the provider.
Data Type

bool

Default Value

false

Remarks

If this property is set to true, the driver will allow only SELECT queries. INSERT, UPDATE, DELETE, and stored procedure queries will cause an error to be thrown.





RTK

The runtime key used for licensing.
Data Type

string

Default Value

""

Remarks

The RTK property may be used to license a build. Please see the included licensing file to see how to set this property. The runtime key is only available if you purchased an OEM license.





Server

The host name or IP address of the server hosting the SparkSQL database.
Data Type

string

Default Value

""

Remarks

The host name or IP address of the server hosting the SparkSQL database.





Server Configurations

A name-value list of server configuration variables to override the server defaults.
Data Type

string

Default Value

""

Remarks

This property takes a comma separated list of configuration variables specified as name-value pairs. Any values specified here will be sent to the Hive server to override the default values.

Example: hive.enforce.bucketing=true,hive.enforce.sorting=true





SSL Server Cert

The certificate to be accepted from the server when connecting using TLS/SSL.
Data Type

string

Default Value

""

Remarks

If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine will be rejected.

This property can take the forms:

A full PEM certificate (example shortened for brevity): -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE-----
A path to a local file containing the certificate: C:\cert.cer
The public key (example shortened for brevity): -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY-----
The MD5 thumbprint (hex values can also be either space or colon separated): ecadbdda5a1529c58a1e9e09828d70e4
The SHA1 thumbprint (hex values can also be either space or colon separated): 34a929226ae0819f2ec14b4a3d904f801cbb150d

If not specified, any certificate trusted by the machine will be accepted. Use '*' to signify to accept all certificates (not recommended for security concerns).
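For example, to pin the server certificate by its SHA1 thumbprint (the thumbprint shown is the illustrative value from the list above, and the space-free property spellings UseSSL and SSLServerCert are assumed here):

jdbc:sparksql:Server=127.0.0.1;UseSSL=true;SSLServerCert='34a929226ae0819f2ec14b4a3d904f801cbb150d';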





Support Enhanced SQL

Enhances SQL functionality beyond what can be supported through the API directly, by enabling in-memory client-side processing.
Data Type

bool

Default Value

true

Remarks

When SupportEnhancedSQL is set to true, the driver offloads as much of the SELECT statement processing as possible to SparkSQL and then processes the rest of the query in memory. In this way the driver can execute unsupported predicates, joins, and aggregation.

When SupportEnhancedSQL is set to false, the driver limits SQL execution to what is supported by the SparkSQL API.

Execution of Predicates

The driver determines which of the clauses are supported by the data source and then pushes them to the source to get the smallest superset of rows that would satisfy the query. It then filters the rest of the rows locally. The filter operation is streamed, which enables the driver to filter effectively for even very large datasets.

Execution of Joins

The driver uses various techniques to join in memory. The driver trades off memory utilization against the requirement of reading the same table more than once.

Execution of Aggregates

The driver retrieves all rows necessary to process the aggregation in memory.





Tables

Restrict the tables reported to a subset of the available tables. For example: Tables=TableA,TableB,TableC.
Data Type

string

Default Value

""

Remarks

Listing the tables from some databases can be expensive. Providing a list of tables in the connection string improves the performance of the driver.

This property can also be used as an alternative to automatically listing tables if you already know which ones you want to work with and there would otherwise be too many to work with.

Specify the tables you want in a comma-separated list. For example: Tables=TableA,TableB,TableC





Timeout

The value in seconds until the timeout error is thrown, canceling the operation.
Data Type

string

Default Value

"60"

Remarks

If the Timeout property is set to 0, operations do not time out: They run until they complete successfully or encounter an error condition.

If Timeout expires and the operation is not yet complete, the driver throws an exception.





Transport Mode

The transport mode to use to communicate with the Hive server. Accepted entries are BINARY and HTTP.
Data Type

string

Default Value

"BINARY"

Remarks

The transport mode used to communicate with the Hive server.

This property should be set to the 'hive.server2.transport.mode' value specified in your Hive configuration file (hive-site.xml).





Use Connection Pooling

Enables connection pooling.
Data Type

bool

Default Value

false

Remarks

Enables connection pooling. The default is false. See Connection Pooling for information on using connection pools.
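For example (the pool sizes are illustrative, and the space-free property spellings are assumed here):

jdbc:sparksql:Server=127.0.0.1;UseConnectionPooling=true;PoolMaxSize=50;PoolMinSize=5;PoolIdleTimeout=60;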





Use Insert Select Syntax

Specifies whether to use INSERT INTO ... SELECT syntax when inserting data.
Data Type

bool

Default Value

false

Remarks

When set to true, the driver uses INSERT INTO ... SELECT syntax for INSERT operations instead of a VALUES clause.





User

The username used to authenticate with SparkSQL.
Data Type

string

Default Value

""

Remarks

The username used to authenticate with SparkSQL.





Use Show Databases Query

This option specifies whether the schemas will be retrieved using a SHOW DATABASES query or the GetSchemas Thrift API.
Data Type

bool

Default Value

false

Remarks

When set to true, a SHOW DATABASES query will be issued to retrieve the schemas.





Use Show Tables Query

This option specifies whether the tables will be retrieved using a SHOW TABLES query or the GetTables Thrift API.
Data Type

bool

Default Value

false

Remarks

When set to true, a SHOW TABLES query will be issued to retrieve the tables for the database.





Use SSL

Specifies whether to use SSL Encryption when connecting to Hive.
Data Type

bool

Default Value

false

Remarks

Set this property to the value specified in the 'hive.server2.use.SSL' property of your Hive configuration file (hive-site.xml).





Verbosity

The verbosity level that determines the amount of detail included in the log file.
Data Type

string

Default Value

"1"

Remarks

The verbosity level determines the amount of detail that the driver reports to the Logfile. Verbosity levels from 1 to 5 are supported. These are described below:

1: Setting Verbosity to 1 will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Setting Verbosity to 2 will log everything included in Verbosity 1, cache queries, and additional information about the request, if applicable, such as HTTP headers.
3: Setting Verbosity to 3 will additionally log the body of the request and the response.
4: Setting Verbosity to 4 will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Setting Verbosity to 5 will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.

The Verbosity should not be set to greater than 1 for normal operation. Substantial amounts of data can be logged at higher verbosities, which can delay execution times.





Views

Restrict the views reported to a subset of the available views. For example: Views=ViewsA,ViewsB,ViewsC.
Data Type

string

Default Value

""

Remarks

Listing the views from some databases can be expensive. Providing a list of views in the connection string improves the performance of the driver.

This property can also be used as an alternative to automatically listing views if you already know which ones you want to work with and there would otherwise be too many to work with.

Specify the views you want in a comma-separated list. For example: Views=ViewsA,ViewsB,ViewsC