Connect with Exasol
Connect with other ETL or BI tools, DB clients, monitoring systems, or even databases
This article helps identify and troubleshoot Exasol connectivity issues.
View full article
Here is a list of lessons we learned while migrating our reporting tools from MySQL (5.6 with strict mode off) to Exasol.
View full article
Background

To export CSV files from Microsoft SQL Server.

Options to export csv from Microsoft SQL Server

Option 1: Using the SQL Server Import and Export Data Wizard / DTSWizard (the tool should be included in your SQL Server installation):

1. Open the wizard.
2. Select your data source and the database you want to export, then continue with Next.
3. Select Flat File Destination as your destination and browse to the desired file path for your csv file. It is necessary to use *.csv as your file extension.
   Note: If you want to overwrite an existing file, you have to delete the old file first! Otherwise the exported data will be appended to the existing file.
4. Change the Text qualifier to " (double quotes) and set UTF-8 as Code page, then continue by clicking Next.
5. Since double quotes will not be escaped properly by the wizard, you have to select "Write a query to specify the data to transfer" and proceed by clicking Next.
6. Now fill in a valid SELECT statement to select the data you want to export.
   Note: It is important to make use of the REPLACE function to ensure proper masking of double quotes. It is only necessary for columns that might contain strings with double quotes. (Documentation of REPLACE: https://msdn.microsoft.com/de-de/library/ms186862.aspx)
   Example:
   SELECT [Customer Key],
          [WWI Customer ID],
          REPLACE([Customer], '"', '""') AS 'Customer',
          REPLACE([Bill to Customer], '"', '""') AS 'Bill to Customer',
          REPLACE([Category], '"', '""') AS 'Category',
          REPLACE([Buying Group], '"', '""') AS 'Buying Group',
          REPLACE([Primary Contact], '"', '""') AS 'Primary Contact',
          [Postal Code],
          [Valid From],
          [Valid To],
          [Lineage Key]
   FROM Dimension.Customer
7. Confirm your settings for the Flat File Destination and continue with Next.
8. Start the export by clicking Finish. You will receive a short report about the successful export.

Option 2: Using the bcp Utility:

As this method is somewhat unwieldy, we recommend using option 1. If, for some reason, you are not able to use the DTS Wizard, you can use the following manual to export your data with the bcp Utility. (Documentation of the bcp Utility: https://msdn.microsoft.com/en-us/library/ms162802.aspx)

The needed bcp command looks something like this:

bcp "SELECT STATEMENT" queryout "OUTPUT FILEPATH" -c -t"," -r"\n" -q -S SERVERNAME -T

To ensure a correct export, the SELECT statement has to meet certain criteria:

- All columns that might contain commas, double quotes or any other special characters have to be enclosed by char(34) (the ASCII code for "). This will add double quotes before and after the exported field.
  Example: "SELECT [Customer Key], char(34) + [Customer] + char(34) FROM ..."
- All columns that might contain double quotes have to be selected with the REPLACE function. This way, double quotes will be masked properly in your csv file ("Example" -> ""Example"").
  Example: "SELECT [Customer Key], char(34) + [Customer] + char(34), char(34) + REPLACE([Category], char(34), char(34) + char(34)) + char(34) FROM ..."
  Note: Since the SELECT statement itself is enclosed in double quotes, you have to use char(34) as a replacement for " inside the REPLACE call as well. Otherwise the console would interpret it as the end of the SELECT statement:
  REPLACE([Category], '"', '""') -> REPLACE([Category], char(34), char(34) + char(34))
  (Documentation of REPLACE: https://msdn.microsoft.com/de-de/library/ms186862.aspx)
- All columns that allow NULL entries must use the COALESCE function. This guarantees proper conversion from NULL to "" (empty string).
  Example: "SELECT [Customer Key], char(34) + [Customer] + char(34), char(34) + REPLACE([Category], char(34), char(34) + char(34)) + char(34), char(34) + COALESCE([Primary Contact], '') + char(34) FROM ..."
  Note: COALESCE uses two single quotes as its second parameter!
- You might have to combine the functions depending on your database design.
  Example: "SELECT [Customer Key], char(34) + [Customer] + char(34), char(34) + COALESCE(REPLACE([Category], char(34), char(34) + char(34)), '') + char(34), char(34) + COALESCE([Primary Contact], '') + char(34) FROM ..."

A complete bcp command can look as follows:

bcp "SELECT [Customer Key], [WWI Customer ID], char(34) + [Customer] + char(34), char(34) + [Bill to Customer] + char(34), char(34) + REPLACE([Category], char(34), char(34) + char(34)) + char(34), char(34) + [Buying Group] + char(34), char(34) + COALESCE(REPLACE([Primary Contact], char(34), char(34) + char(34)), '') + char(34), char(34) + [Postal Code] + char(34), [Valid From], [Valid To], [Lineage Key] FROM WideWorldImporters.Dimension.Customer" queryout "C:\Test.csv" -c -t"," -r"\n" -q -S HW1729 -T

Start the export by pressing Enter.
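Once the file is exported, it can be loaded into Exasol. A minimal sketch, assuming a target table my_schema.customer with matching columns already exists (schema, table and file path are placeholder assumptions):

  -- load the exported file into Exasol; separator and delimiter match the export settings above
  IMPORT INTO my_schema.customer
  FROM LOCAL CSV FILE 'C:\Test.csv'
  ENCODING = 'UTF-8'
  COLUMN SEPARATOR = ','
  COLUMN DELIMITER = '"';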
View full article
This article shows you how to respond when you encounter problems with the Exasol ODBC driver.
View full article
Problem

Bind variables in prepared statements do not work for executing a script.

Diagnosis

When the user executes the statement con.prepareStatement("EXECUTE SCRIPT LUASCRIPT ?") in a Java program, the user gets an error like this:

java.sql.SQLSyntaxErrorException: syntax error, unexpected '?', expecting END_OF_INPUT_ or ';' [line 1, column 26] (Session: 1681284472046092288)
    at com.exasol.jdbc.ExceptionFactory.createSQLException(ExceptionFactory.java:39)
    ...

Explanation

We do not support prepared parameters in EXECUTE SCRIPT statements.

Recommendation

As a workaround, to prevent SQL injection, you can insert the parameter value into a temporary table and use that value in the script.

Example: Assume that we have a script that creates a table based upon the parameter.

--/
CREATE OR REPLACE LUA SCRIPT test.my_script_old (table_name) AS
query([[create table ::t (a int)]], {t=table_name})
/

We have to rewrite our example script: it now reads the value from a table test.temp.

--/
CREATE OR REPLACE LUA SCRIPT test.my_script_new () AS
local success, res = pquery([[SELECT name FROM test.temp]])
table_name = res[1][1]
query([[create table ::t (a int)]], {t=table_name})
/

So we can use it now in our Java class as follows:

// create temporary table
stmt = con.prepareStatement("create or replace table test.temp (name varchar(100))");
stmt.execute();
// insert the needed parameter
stmt = con.prepareStatement("insert into test.temp values (?)");
stmt.setString(1, "test.testtable");
stmt.execute();
// execute the script
stmt = con.prepareStatement("execute script test.my_script_new()");
stmt.execute();
// drop the temporary table
stmt = con.prepareStatement("drop table test.temp");
stmt.execute();
// commit
con.commit();

Additional References

https://docs.exasol.com/database_concepts/scripting/db_interaction.htm
https://docs.exasol.com/connect_exasol/drivers/jdbc.htm

We're happy to get your experiences and feedback on this article below!
View full article
Background

You have an EXASolution database and want to connect it to Hive via JDBC in order to use EXAloader IMPORT/EXPORT statements.

Prerequisites

Choose one of the following drivers:
- Cloudera Hive JDBC driver: https://www.cloudera.com/downloads/connectors/hive/jdbc/2-6-5.html
- Simba Hive JDBC driver: https://www.simba.com/product/apache-hive-driver-with-sql-connector/?attribute_pa_license-type=evaluation&attribute_pa_os=java&attribute_pa_client-server=server

How to create and work with a Hive JDBC connection

Step 1: Configure the JDBC driver in EXAoperation (a sketch of the driver entry is shown at the end of this article).

Step 2: Create or replace the connection:

-- Connecting EXAloader via the Cloudera Hive driver and the Simba Hive driver to Cloudera, MapR and Hortonworks Hadoop distributions

-- cloudera-quickstart-vm-5.13.0-0-vmware
create or replace connection hive_conn to 'jdbc:hive2://192.168.42.133:10000' user 'cloudera' identified by 'cloudera';

-- MapR-Sandbox-For-Hadoop-6.1.0-vmware
create or replace connection hive_conn to 'jdbc:hive2://192.168.42.134:10000' user 'mapr' identified by 'mapr';

-- Azure Hortonworks Sandbox with HDP 2.6.4
create or replace connection hive_conn to 'jdbc:hive2://192.168.42.1:10000' user 'raj_ops' identified by 'raj_ops';

Step 3: EXPORT/IMPORT

EXPORT test for the Cloudera and Simba drivers:

export exa_syscat into jdbc driver = 'HiveCloudera' at hive_conn table exa_syscat
created by 'create table exa_syscat (schema_name varchar(128), object_name varchar(128), object_type varchar(15), object_comment varchar(2000))' replace;

export exa_syscat into jdbc driver = 'HiveSimba' at hive_conn table exa_syscat
created by 'create table exa_syscat (schema_name varchar(128), object_name varchar(128), object_type varchar(15), object_comment varchar(2000))' replace;

IMPORT test for the Cloudera and Simba drivers:

import into (schema_name varchar(128), object_name varchar(128), object_type varchar(15), object_comment varchar(2000)) from jdbc driver = 'HiveCloudera' at hive_conn table exa_syscat;

import into (schema_name varchar(128), object_name varchar(128), object_type varchar(15), object_comment varchar(2000)) from jdbc driver = 'HiveSimba' at hive_conn table exa_syscat;
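As referenced in Step 1, the EXAoperation driver entry consists of the driver's JAR files plus a driver name, main class, and prefix. A sketch: the driver names must match the DRIVER = '...' values used in Step 3, and the main classes are the ones these driver packages typically ship, so verify them against your driver version.

  Driver name: HiveCloudera
  Main class:  com.cloudera.hive.jdbc41.HS2Driver
  Prefix:      jdbc:hive2:

  Driver name: HiveSimba
  Main class:  com.simba.hive.jdbc41.HS2Driver
  Prefix:      jdbc:hive2: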
View full article
This article describes enabling ODBC logs for Tableau.
View full article
This article explains some of the fail-safety mechanisms that clients can implement.
View full article
This article shows you how to start Exaplus CLI in Background Mode on Linux.
View full article
Background

To export CSV files from Oracle DB.

Options to export csv from Oracle

Option 1: Using the Oracle SQL Developer:

1. Connect to the database you want to export.
2. Select the table you want to export.
3. Open the context menu by right-clicking on the table.
4. Select Export to start the integrated Export Wizard.
5. To guarantee a proper export, you should deselect the option Export DDL.
6. Change the format to csv and set the path for the exported file, then click Next to continue.
7. The following screen allows you to specify the columns you want to export (SELECT * ... by default). Use the Edit button to change settings; otherwise you can proceed by clicking Next.
8. Finally, you will receive a summary for the export process. Make sure that your settings are valid and complete the export by clicking Finish.

Notes:
- Oracle SQL Developer automatically converts NULL to "" (empty string)
- Oracle SQL Developer automatically masks double quotes ("Example" -> ""Example"")

Option 2: Using SQLcl:

SQLcl is a free tool provided by Oracle.
(Download: http://www.oracle.com/technetwork/developer-tools/sqlcl/overview/index.html)
(Short documentation about how to get started and some of its features: https://mikesmithers.wordpress.com/2015/04/12/sqlcl-the-new-sqlplus/)

1. Connect to the database you want to export.
2. To guarantee a proper export of your data, you have to change some of the system variables of SQLcl:
   - SET SQLFORMAT csv (automatic conversion into proper csv)
   - SET TERMOUT OFF (suppresses output on display; this option is only necessary when using scripts)
   - SET FEEDBACK OFF (suppresses displaying the number of records returned by a query)
   To apply each option, just press ENTER after every single command.
   Note: These options have to be set every time SQLcl is started!
3. Use the integrated spooler with the appended file path to start the process (see the sketch below). SQLcl will write the result set of your SELECT statement to table1.csv.
4. To finish the process, you have to stop the spooler by using the command: spool off

Notes:
- You are not able to access the exported csv file until you have stopped the spooler
- SQLcl automatically masks double quotes ("Example" -> ""Example"")
- SQLcl automatically converts NULL to "" (empty string)
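The spooling steps from Option 2 might look like this in a SQLcl session (the table and file names are placeholder assumptions):

  SET SQLFORMAT csv
  SET FEEDBACK OFF
  -- start spooling; the result set is written to table1.csv
  SPOOL table1.csv
  SELECT * FROM table1;
  -- stop the spooler so the file is released
  SPOOL OFF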
View full article
Background

An Internet-facing load balancer has a publicly resolvable DNS name, so it can route requests from clients over the Internet to the EC2 instances that are registered with the load balancer. We use this approach to make your EXASOL DB connectable from the internet using a single DNS name. An alternative approach using HAproxy and Keepalived (Floating IP) is described in this article. Charges that may apply for the load balancer can be found here: https://aws.amazon.com/elasticloadbalancing/classicloadbalancer/pricing.

When your load balancer is created, it receives a public DNS name that clients can use to send requests. The DNS servers resolve the DNS name of your load balancer to the public IP addresses of the load balancer nodes. Each load balancer node is connected to the back-end instances using private IP addresses.

In this how-to we use a simple 2+1 cluster in a private subnet (2 active nodes + 1 spare node). As this how-to makes the database connectable from the internet, we recommend enforcing protocol encryption for all database connections (database parameter "-forceProtocolEncryption=1"). EXA<->EXA export and import is not supported.

How to create a classic Load Balancer for EXASOL using AWS tools

1. Go to the EC2 console > Load Balancing > Load Balancers.
2. Click on "Create Load Balancer".
3. Choose "Classic Load Balancer", click "Continue".
4. Choose a Load Balancer Name; this name will also be used in the public DNS name.
   - Choose the corresponding EXASOL VPC
   - Load Balancer Protocol: TCP
   - Load Balancer Port: 8563 (TCP port for the clients to connect)
   - Instance Protocol: TCP
   - Instance Port: 8563 (EXASOL DB port)
   - Add the EXASOL subnet
   - Click "Next"
5. Assign or create a Security Group.
6. Allow incoming traffic on the listener port.
7. Click "Next" to configure the health check:
   - Ping Protocol: HTTPS (this does not check whether the database is connectable; please use the XML-RPC interface for that task. Do not use the database TCP port, as the AWS check keeps a TCP socket open and blocks the database)
   - Port: 443
   - Click "Next"
8. Add the EXASOL DB instances:
   - Select all database nodes (active + spare)
   - Disable Cross-Zone Load Balancing and Connection Draining
   - Click "Next"
9. Add tags (optional).
10. Review and create the Classic Load Balancer (it will take some time for DNS entries to be propagated).
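The console steps above can also be scripted. A rough AWS CLI equivalent for steps 4, 7 and 8 (the load balancer name, subnet, security group and instance IDs are placeholders; verify the flags against your CLI version):

  # classic load balancer with a TCP listener on the Exasol port
  aws elb create-load-balancer \
    --load-balancer-name exasol-elb \
    --listeners "Protocol=TCP,LoadBalancerPort=8563,InstanceProtocol=TCP,InstancePort=8563" \
    --subnets subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0

  # health check against HTTPS port 443 (not the database TCP port, see above)
  aws elb configure-health-check \
    --load-balancer-name exasol-elb \
    --health-check Target=HTTPS:443/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

  # register all database nodes (active + spare)
  aws elb register-instances-with-load-balancer \
    --load-balancer-name exasol-elb \
    --instances i-0123456789abcdef0 i-0123456789abcdef1 i-0123456789abcdef2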
View full article
EXASuite is an integrated, self-contained software bundle provided by EXASOL. This standalone distribution includes the EXASOL Database, the EXAoperation web administration interface, EXACluster OS services, a runtime environment, and an underlying enterprise operating system (CentOS with the Linux kernel). EXASOL Clients and Drivers include the EXAplus database client and the drivers provided by EXASOL (e.g. ODBC, JDBC, ADO.NET).
View full article
You have an EXASOL database and want to read data from Microsoft Azure Blob Storage, such as:
- You want to import all files in a specific folder into a specific table, so you need to generate URLs for all the files in that folder.
- You want to use authentication, but you don't want to create the required URL signature manually.
View full article
Background

Hadoop ETL UDFs are the main way to load data from Hadoop into EXASOL (HCatalog tables on HDFS). In order to deploy the ETL UDFs, you need to set up the connectivity between EXASOL and Hadoop. This SOL describes the network requirements to do this. For a full description of using Hadoop ETL UDFs, refer to the Hadoop ETL UDFs document on github: https://github.com/EXASOL/hadoop-etl-udfs/blob/master/README.md

Connectivity Requirements

All EXASOL nodes need access to either the Hive Metastore (recommended) or to WebHCatalog:
- The Hive Metastore typically runs on port 9083 of the Hive Metastore server (hive.metastore.uris property in Hive). It uses a native Thrift API, which is faster than WebHCatalog.
- The WebHCatalog server (formerly called Templeton) typically runs on port 50111 on a specific server (templeton.port property).

All EXASOL nodes need access to the namenode and all datanodes, either via the native HDFS interface (recommended) or via the HTTP REST API (WebHDFS or HttpFS):
- HDFS (recommended): The namenode service typically runs on port 8020 (fs.defaultFS property), the datanode service on port 50010, or 1004 in Kerberos environments (dfs.datanode.address property).
- WebHDFS: The namenode service for WebHDFS typically runs on port 50070 on each namenode (dfs.namenode.http-address property), and on port 50075 (dfs.datanode.http.address property) on each datanode. If you use HTTPS, the ports are 50470 for the namenode (dfs.namenode.https-address) and 50475 for the datanode (dfs.datanode.https-address).
- HttpFS: As an alternative to WebHDFS you can use HttpFS, which exposes the same REST API as WebHDFS. It typically runs on port 14000 of each namenode. The disadvantage compared to WebHDFS is that all data is streamed through a single service, whereas WebHDFS redirects to the datanodes for the data transfer.

Kerberos: If your Hadoop uses Kerberos authentication, the UDFs will authenticate using a keytab file. Each EXASOL node needs access to the Kerberos KDC (key distribution center), running on port 88. The KDC is configured in the Kerberos config file, which is used for the authentication.
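A quick way to verify from each EXASOL node that the ports above are reachable, assuming nc (netcat) is available and using placeholder host names:

  # check Hive Metastore, HDFS namenode, one datanode, and the Kerberos KDC
  for target in metastore-host:9083 namenode-host:8020 datanode-host:50010 kdc-host:88; do
    host=${target%:*}; port=${target#*:}
    nc -z -w 3 "$host" "$port" && echo "OK   $target" || echo "FAIL $target"
  done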
View full article
Background

To export CSV files from PostgreSQL.

Options to export csv from PostgreSQL

Option 1: Using pgAdmin III:

1. Connect to the database you want to export.
2. Open the SQL Editor by clicking the SQL button.
3. Select the tables individually with separate SELECT statements.
4. To run the query, press F5 or use the associated button.
5. When the task is finished, you can continue the export by opening the integrated "Export data to file" tool. To do so, open the File menu on the top left and proceed with Export from there.
6. Now you are able to make some adjustments to your csv format. To guarantee a proper csv file we recommend the following settings.
7. Select your desired filename and start the process by clicking OK. You will receive a message about the successful export.

Notes:
- pgAdmin III automatically converts NULL to "" (empty string)
- pgAdmin III automatically masks double quotes ("Example" -> ""Example"")

Option 2: Using PSQL:

1. Connect to the database you want to export.
2. Use the \COPY [table_name | query] TO [filename] WITH CSV command. (Documentation: https://www.postgresql.org/docs/current/static/sql-copy.html)
3. Add the following parameters to export a proper csv file (a complete example is sketched after the notes below):
   - FORCE QUOTE * (all values will be quoted)
   - ENCODING 'utf8' (specifies the encoding option)
4. Start the export by pressing Enter. You will receive a confirmation about the number of rows that have been exported.

Notes:
- PSQL automatically converts NULL to "" (empty string)
- PSQL automatically masks double quotes ("Example" -> ""Example"")
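Combining the options above, a minimal \COPY sketch (the table and file names are placeholder assumptions; FORCE_QUOTE and ENCODING are written in the modern option-list spelling):

  -- quote every value and write UTF-8 output
  \COPY (SELECT * FROM table1) TO 'table1.csv' WITH (FORMAT csv, FORCE_QUOTE *, ENCODING 'utf8')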
View full article
To ensure a proper import of your data, there are some rules to consider when creating the csv file.
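As a quick illustration of the conventions this section keeps coming back to (comma as separator, double quotes as text qualifier, embedded double quotes doubled, NULL written as an empty string), here is a small hypothetical sample file:

  customer_id,customer_name,comment
  1,"Acme Corp","Said ""hello"" on signup"
  2,"Jane's Shop",""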
View full article
This article shows you how to connect to Exasol from JasperSoftReports using JDBC.
View full article
Background

Although it is possible to directly access and copy the internal database tables of an SAP system, this manual process is tedious and error-prone. Using a 3rd-party product instead ensures data consistency, provides means for automation, and saves development and maintenance effort.

How to extract data from an SAP system (BW/ERP/R3) to Exasol

We recommend using the 3rd-party product Xtract Universal from Theobald Software (product home page: https://theobald-software.com/de/exasolution-destination.html).

Xtract Universal enables you to save data streams from SAP to any destination environment and thereby significantly reduce development time, thanks to a high degree of automation. The central server component constitutes the data hub between SAP and EXASolution applications. Extractions are created, and access released and monitored for consumers, via the Designer. Access is possible to one or more SAP ERP or SAP BW systems.

Xtract Universal supports an incremental delta load process, that is, limiting the download to the data content that has been changed or added since the last download.
View full article
This article describes how to solve connection errors during an IMPORT FROM EXA.
View full article
Background

In rare cases you might receive SQL errors like the following examples when SQL statements (generated from reports) are pushed down from MicroStrategy to Exasol:

"[EXASOL] [EXASolution driver] syntax error, unexpected DATE_, expecting UNION_ or EXCEPT_ or MINUS_ or INTERSECT_"
"[EXASOL] [EXASolution driver] syntax error, unexpected TIMESTAMP_, expecting UNION_ or EXCEPT_ or MINUS_ or INTERSECT_"
"[EXASOL] [EXASolution driver] syntax error, unexpected UPDATE_, expecting UNION_ or EXCEPT_ or MINUS_ or INTERSECT_"

The root cause of this problem can most likely be found in the structure of your data tables, views or queries. You should generally avoid using so-called "identifiers" (SQL reserved words) as column names (e.g. DATE, TIMESTAMP, UPDATE, BEGIN, END, CHECK, TRUE, FALSE, etc.) when creating your tables/views or setting aliases in SQL queries. But... as we all know, sometimes you cannot avoid this due to given data structures.

Example 1

The following query will fail with the error: [EXASOL] [EXASolution driver] syntax error, unexpected DATE_, expecting UNION_ or EXCEPT_ or MINUS_ or INTERSECT_

SELECT a.CUSTOMER_ID CUSTOMER_ID,
       a.CUSTOMER_NAME CUSTOMER_NAME,
       a.DATE DATE
FROM X.CUSTOMER a
WHERE a.CUSTOMER_ID = 1234;

So the query needs to be modified to run without errors. In this case the column and alias that causes the error (DATE) must be quoted with double quotes, as shown in the following example:

Example 2

The following query will run without errors:

SELECT a.CUSTOMER_ID CUSTOMER_ID,
       a.CUSTOMER_NAME CUSTOMER_NAME,
       a."DATE" "DATE"
FROM X.CUSTOMER a
WHERE a.CUSTOMER_ID = 1234;

To solve this issue when pushing down SQL from MicroStrategy to Exasol, you must enable "Unified Quoting" in your MicroStrategy environment.

Prerequisites

Unified quoting behaviour is available since the release of MicroStrategy 2020 and is described in MicroStrategy Knowledge Base article KB483540. To implement the behaviour for database connections to Exasol, the following steps need to be done with the release of MicroStrategy 2020. MicroStrategy 2021 supports unified quoting for Exasol out of the box and no further configuration is needed.

How to enable Unified Quoting in MicroStrategy 2020 for Exasol

Step 1

Ensure that all of your projects are migrated to MicroStrategy 2020 when upgrading from a previous MicroStrategy version, see MicroStrategy Knowledge Base article KB483540:
- Upgrade the Data Engine Version
- Upgrade the Metadata

Step 2

Install the new database object "new_database_m2021.pds" as described in chapter 3 of the MicroStrategy Knowledge Base article Exasol 6.x. Be sure to change all of your existing Exasol database connections in MicroStrategy to the newly installed database object (Database Connection Type) "EXAsolution 6.x" and check all connection parameters accordingly as described in the Exasol documentation. All relevant steps are described in MicroStrategy Knowledge Base article KB43537.

The following steps 3 and 4 are only needed if you set up the connection to Exasol for the first time or your Exasol database version has changed:

Step 3

Download the latest Exasol JDBC driver (or ODBC driver) from the Exasol download section: V7.0 or V6.2 (according to your database version).

Step 4

Install the latest Exasol JDBC driver (or ODBC driver) on each MicroStrategy Intelligence Server in the cluster as described in chapter 4 of the MicroStrategy Knowledge Base article Exasol 6.x.
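For reference, the JDBC connection string entered in the MicroStrategy database connection follows Exasol's usual jdbc:exa format; a sketch with a placeholder host range and the default port:

  jdbc:exa:192.168.6.11..14:8563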
Do not forget to restart the Intelligence Server after installing the driver. You might also follow chapters 5 and 6 in the KB article.

We're happy to get your experiences and feedback on this article below!
View full article