Trino compacts large predicates into a simpler range predicate by default to ensure a balance between performance and predicate pushdown. If necessary, the threshold for this compaction can be increased to improve performance when the data source is capable of taking advantage of large predicates. Increasing this threshold may also improve pushdown of large dynamic filters. The `domain-compaction-threshold` catalog configuration property or the `domain_compaction_threshold` catalog session property can be used to adjust the default value of this threshold.

## Flush JDBC metadata caches

You can flush the JDBC metadata caches manually. For example, the system call `CALL example.system.flush_metadata_cache()` flushes the metadata caches for all schemas in the `example` catalog. If you used a different name for your catalog properties file, use that catalog name instead of `example`.

## Type mapping

The following properties can be used to configure how data types from the connected data source are mapped to Trino data types, and how the metadata is cached in Trino.

### Type mapping configuration properties

| Property name | Description |
| --- | --- |
| `unsupported-type-handling` | Configure how unsupported column data types are handled. With `CONVERT_TO_VARCHAR`, the column is converted to unbounded `VARCHAR`. The respective catalog session property is `unsupported_type_handling`. |
| `jdbc-types-mapped-to-varchar` | Allow forced mapping of comma-separated lists of data types to convert to unbounded `VARCHAR`. |

## Case-insensitive name mapping

Queries against one of the tables or schemas defined in the mapping attributes are run against the corresponding remote entity. For example, a query against tables in the `case_insensitive_1` schema is forwarded to the `CaseSensitiveName` schema, and a query against `case_insensitive_2` is forwarded to its mapped remote schema. At the table mapping level, a query on `case_insensitive_1.table_1` as configured in the mapping file is forwarded to `CaseSensitiveName.tablex`, and a query on `case_insensitive_1.table_2` is forwarded to `CaseSensitiveName.TABLEX`.

By default, when a change is made to the mapping configuration file, Trino must be restarted to load the changes. Optionally, you can set the `case-insensitive-name-matching.refresh-period` property to have Trino refresh the file periodically.
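The mapping rules described above live in the JSON file referenced by `case-insensitive-name-matching.config-file`. A minimal sketch matching the example names in the text; the remote schema `cASEsENSITIVEnAME` for `case_insensitive_2` is an assumption for illustration:

```json
{
  "schemas": [
    { "remoteSchema": "CaseSensitiveName", "mapping": "case_insensitive_1" },
    { "remoteSchema": "cASEsENSITIVEnAME", "mapping": "case_insensitive_2" }
  ],
  "tables": [
    { "remoteSchema": "CaseSensitiveName", "remoteTable": "tablex", "mapping": "table_1" },
    { "remoteSchema": "CaseSensitiveName", "remoteTable": "TABLEX", "mapping": "table_2" }
  ]
}
```

Each `mapping` entry defines the case-insensitive name that Trino exposes for the case-sensitive remote schema or table.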
## Connection security

TLS is used for the connection when the `SSL=TRUE` parameter is part of the connection URL, for example `connection-url=jdbc:redshift://:5439/database;SSL=TRUE`. For more information on TLS configuration options, see the Redshift JDBC driver documentation.

## Data source authentication

The connector can provide credentials for the data source connection in multiple ways:

- inline, in the connector configuration file
- in a separate properties file
- in a key store file
- as extra credentials set when connecting to Trino

You can use secrets to avoid storing sensitive values in the catalog properties files.

The following table describes configuration properties for connection credentials:

| Property name | Description |
| --- | --- |
| `connection-credential-file` | Location of the properties file where credentials are present. It must contain the `connection-user` and `connection-password` properties. |
| `keystore-file-path` | The location of the Java Keystore file from which to read credentials. |
| `keystore-type` | File format of the keystore file, for example `JKS` or `PEM`. |
| `keystore-user-credential-name` | Name of the key store entity to use as the user name. |
| `keystore-user-credential-password` | Password for the user name key store entity. |
| `keystore-password-credential-name` | Name of the key store entity to use as the password. |
| `keystore-password-credential-password` | Password for the password key store entity. |
| `user-credential-name` | Name of the extra credentials property whose value to use as the user name. See `extraCredentials` in the Parameter reference. |
| `password-credential-name` | Name of the extra credentials property whose value to use as the password. |

## Multiple Redshift databases or clusters

The Redshift connector can only access a single database within a Redshift cluster. Thus, if you have multiple Redshift databases, or want to connect to multiple Redshift clusters, you must configure multiple instances of the Redshift connector.

To add another catalog, simply add another properties file to `etc/catalog` with a different name, making sure it ends in `.properties`. For example, if you name the property file `sales.properties`, Trino creates a catalog named `sales` using the configured connector.

## General configuration properties

The following table describes general catalog configuration properties for the connector:

| Property name | Description |
| --- | --- |
| `case-insensitive-name-matching` | Support case insensitive schema and table names. |
| `case-insensitive-name-matching.config-file` | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. |
| `case-insensitive-name-matching.refresh-period` | Frequency with which Trino checks the name matching configuration file for changes. |
| `metadata.cache-ttl` | The duration for which metadata, including table and column statistics, is cached. |
| `metadata.cache-missing` | Cache the fact that metadata, including table and column statistics, is not available. |
| `metadata.cache-maximum-size` | Maximum number of objects stored in the metadata cache. |
| `write.batch-size` | Maximum number of statements in a batched execution. Do not change this setting from the default. |
| `dynamic-filtering.enabled` | Push down dynamic filters into JDBC queries. |
| `dynamic-filtering.wait-timeout` | Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. Using a large timeout can potentially result in more detailed dynamic filters; however, it can also increase latency for some queries. |

Note that pushing down a large list of predicates to the data source can compromise performance.
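As a concrete sketch of the multiple-catalog setup described above, a second catalog file for another Redshift database could look like the following; the host `example.net`, the `sales` database name, and the credential values are hypothetical:

```text
# etc/catalog/sales.properties -- hypothetical second Redshift catalog
connector.name=redshift
connection-url=jdbc:redshift://example.net:5439/sales;SSL=TRUE
connection-user=redshift_user
connection-password=secret
```

Because the file is named `sales.properties`, Trino exposes this connection as the `sales` catalog.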
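To illustrate the extra-credentials mechanism from the authentication section, a catalog can name the credentials it expects, and a client can then supply them through the Trino JDBC driver's `extraCredentials` parameter. A sketch under assumed names; `redshift.user`, `redshift.password`, and the host `trino.example.com` are hypothetical:

```text
# Catalog properties (hypothetical credential property names)
user-credential-name=redshift.user
password-credential-name=redshift.password

# Trino JDBC client connection URL passing those extra credentials
jdbc:trino://trino.example.com:443/example?extraCredentials=redshift.user:bob;redshift.password:secret
```

With this setup, the connector uses the values of the `redshift.user` and `redshift.password` extra credentials as the Redshift user name and password for the session.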