Operationalize huge amounts of data in OutSystems with our new Exasol database connector!
View full article
EXAjload is a standard tool for bulk data loading. Its documentation is brief, but the tool is more powerful than you might think.
View full article
This article explains some best practices for importing and exporting data with SQL Server
View full article
This article shows you a way to load data between Exasol and Google BigQuery faster than a regular JDBC EXPORT
View full article
Problem
After openJDK was successfully installed, EXAplus still won't start and shows an error message like:
Can not find the Java Runtime Environment needed by EXAplus. Trying to use system wide virtual machine (from registry) instead. No 64 bit java virtual machine found in registry.

Diagnosis
EXAplus requires a Java runtime instance and proper registry entries to start.

Solution
1. Verify the openJDK installation
Execute the command "java -version" on the command line. A possible output may look like:
openjdk 11.0.2 2019-01-15
OpenJDK Runtime Environment 18.9 (build 11.0.2+9)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)
The next step depends on what you downloaded: a JRE only, or the complete JDK.

2. a) JDK
The JDK comes with an MSI installer, works like any other Windows installer and creates the necessary registry entries. However, EXAplus needs a JRE instance to start, and contrary to Oracle's installation there is no separate "jre" folder in the installation path of the JDK. Simply create one, copy all contents of the installation into it, and EXAplus should start.

2. b) JRE
The JRE is only a ZIP file and does not create any registry entries while being unzipped to the desired folder. You can create or edit the required entries using the Registry Editor (this article describes some tips on doing this, or you can consult Microsoft's documentation):
- edit existing registry entries manually
- create the proper registry entries
The path is HKEY_LOCAL_MACHINE\Software\JavaSoft (an illustrative example is sketched at the end of this article).

Another workaround
Start EXAplus directly with a java command. For further executions, create a shortcut for the call. Example:
"c:\Program Files\Java\openJDK8\bin\java.exe" -jar "c:\Program Files (x86)\EXASOL\EXASolution-6.0\EXAplus\exaplusgui.jar"
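As referenced in step 2. b), for illustration only: on a 64-bit Windows system, Java installers typically create a registry layout along the lines of the commands below (run from an elevated command prompt). The version number "1.8" and the installation path are placeholders based on the example path above, not values taken from this article, and must match your actual installation.
rem hypothetical example of the JavaSoft registry entries EXAplus looks for; adapt version and paths
reg add "HKLM\SOFTWARE\JavaSoft\Java Runtime Environment" /v CurrentVersion /t REG_SZ /d "1.8" /f
reg add "HKLM\SOFTWARE\JavaSoft\Java Runtime Environment\1.8" /v JavaHome /t REG_SZ /d "C:\Program Files\Java\openJDK8\jre" /f
reg add "HKLM\SOFTWARE\JavaSoft\Java Runtime Environment\1.8" /v RuntimeLib /t REG_SZ /d "C:\Program Files\Java\openJDK8\jre\bin\server\jvm.dll" /f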
View full article
This article explains how to set up Exasol as a Linked Server in SQL Server
View full article
This article shows how to set up a Python UDF to test HTTP connectivity.
View full article
This article specifically describes how to create JDBC logs when connecting to a JDBC source for an IMPORT or EXPORT statement
View full article
This article shows you how to set up a cron job to schedule database jobs
View full article
Redata is a monitoring system for data teams. Compute health checks on data (via Airflow jobs), visualize them over time, and alert on them in Grafana.
View full article
This article addresses some common topics when migrating from SQL Server to Exasol.
View full article
If you have issues with special characters when connecting to Exasol in PHP, this article describes a potential solution.
View full article
 In order for the Exasol Database to use a JDBC Driver in IMPORT or EXPORT commands, the JDBC Driver must support certain methods.
View full article
This article describes ways to take a CSV export from MySQL.
View full article
This article describes pagination and how it can be used to work around memory issues and tasks that take too long to complete.
View full article
Background
An Internet-facing load balancer has a publicly available IP address, so it can route requests from clients over the Internet to the EC2 instances that are registered with the load balancer. We use this approach to make your EXASOL DB connectable from the internet using a single static EIP. In this how-to we're using a simple 2+1 cluster in a private subnet (2 active nodes + 1 spare node). As this how-to makes the database connectable from the internet, we recommend enforcing protocol encryption for all database connections (database parameter "-forceProtocolEncryption=1"). EXA<->EXA export and import is not supported.

Prerequisites
- 2x equally equipped instances, e.g. t2.micro
- Amazon Linux AMI (RedHat)
- 2x Public IP addresses
- 1x Elastic IP address
- AWS API access (AWS Secret and AWS Key)
- HAproxy and Keepalived

How to create a HAproxy Load Balancer with a floating IP
This how-to describes the installation of two HAproxy instances (Master 10.0.1.207 and Slave 10.0.1.190; the EXASOL nodes use 10.0.1.11, 10.0.1.12 and 10.0.1.13).

Installation
1. From the EC2 console, launch a t2.micro Amazon Linux instance in the EXASOL subnet.
2. Enable Auto-assign Public IP.
3. Add Storage and Tags according to your needs.
4. The Security Group should allow incoming traffic on the database port TCP 8563 and allow SSH for configuration and installation of the packages. If you want to use the HAproxy statistics server, also open TCP 9090. Master and Slave also need to exchange vitality information.
5. Log into both instances using the SSH user ec2-user.
6. Update the system packages and install haproxy and keepalived (ensure you get the latest version of keepalived, http://www.keepalived.org/download.html):
[ec2-user@ip-10-0-1-207 ~]$ sudo mkdir /usr/libexec/keepalived/
[ec2-user@ip-10-0-1-207 ~]$ sudo yum -y upgrade && sudo yum -y install haproxy keepalived && sudo reboot
7. Either use the packages from the repo (step 6) OR install the latest keepalived from source (additional packages are required, see below):
[root@ip-10-0-1-207]# yum install -y openssl-devel kernel-devel kernel-headers gcc && wget http://www.keepalived.org/software/keepalived-1.3.2.tar.gz && tar xf keepalived* && cd keepalived-1.3.2 && mkdir /opt/keepalived && ./configure --prefix=/opt/keepalived && make && make install
Use the steps below only when keepalived has been compiled from source. We also need the init script (attached to this article) and the sysconfig file of keepalived:
[root@ip-10-0-1-207]# cp keepalived_initd.txt /etc/init.d/keepalived
[root@ip-10-0-1-207]# cp /root/keepalived-1.3.2/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@ip-10-0-1-207]# cp /root/keepalived-1.3.2/keepalived/etc/init/keepalived.conf /etc/init/
[root@ip-10-0-1-207]# ln -s /opt/keepalived/sbin/keepalived /usr/sbin/
[root@ip-10-0-1-207]# mkdir /etc/keepalived/
[root@ip-10-0-1-207]# cp /opt/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@ip-10-0-1-207]# mkdir /usr/libexec/keepalived/
[root@ip-10-0-1-207]# useradd -M keepalived_script
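Before moving on to the configuration, a quick sanity check (purely illustrative) confirms on both instances that the packages are in place and, for the source build, that the script user exists; the exact version strings will differ in your environment:
[ec2-user@ip-10-0-1-207 ~]$ haproxy -v
[ec2-user@ip-10-0-1-207 ~]$ keepalived --version
[root@ip-10-0-1-207 ec2-user]# id keepalived_script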
Configuration
1. Configure HAproxy on Master and Slave (copy the configuration to both instances):
[ec2-user@ip-10-0-1-207 ~]$ cat /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
defaults
    log global
    mode tcp
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
listen stats :9090
    balance
    mode http
    stats enable
    stats uri /haproxy_stats
    stats auth admin:strongPassworD
listen exasol_proxy :8563
    mode tcp
    option tcplog
    balance roundrobin
    server ip-10-0-1-11.eu-west-1.compute.internal 10.0.1.11:8563 weight 1 check rise 2 fall 3
    server ip-10-0-1-12.eu-west-1.compute.internal 10.0.1.12:8563 weight 1 check rise 2 fall 3
    server ip-10-0-1-13.eu-west-1.compute.internal 10.0.1.13:8563 weight 1 check rise 2 fall 3
2. Configure Keepalived on the Master:
[ec2-user@ip-10-0-1-207 ~]$ cat /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    script "pidof haproxy"
    interval 2
}
vrrp_instance VI_1 {
    debug 2
    interface eth0             # interface to monitor
    state MASTER
    virtual_router_id 1        # Assign one ID for this route
    priority 101               # 101 on master, 100 on slave
    unicast_src_ip 10.0.1.207  # Private IP
    unicast_peer {
        10.0.1.190
    }
    track_script {
        chk_haproxy
    }
    notify_master "/usr/libexec/keepalived/notify.sh MASTER"
    notify_backup "/usr/libexec/keepalived/notify.sh BACKUP"
    notify_fault "/usr/libexec/keepalived/notify.sh FAULT"
}
3. Configure Keepalived on the Slave:
[ec2-user@ip-10-0-1-190 ~]$ cat /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    script "pidof haproxy"
    interval 2
}
vrrp_instance VI_1 {
    debug 2
    interface eth0             # interface to monitor
    state BACKUP
    virtual_router_id 1        # Assign one ID for this route
    priority 100               # 101 on master, 100 on slave
    unicast_src_ip 10.0.1.190  # Private IP
    unicast_peer {
        10.0.1.207
    }
    track_script {
        chk_haproxy
    }
    notify_master "/usr/libexec/keepalived/notify.sh MASTER"
    notify_backup "/usr/libexec/keepalived/notify.sh BACKUP"
    notify_fault "/usr/libexec/keepalived/notify.sh FAULT"
}
4. Keepalived will trigger a script when the HAproxy service fails on the current master. Deploy the scripts (notify.sh, master.sh, backup.sh) on both instances (change the owner to keepalived_script if using the latest version of keepalived):
[root@ip-10-0-1-207 ec2-user]# cat /usr/libexec/keepalived/notify.sh
#!/bin/bash
STATE=$1
NOW=$(date +"%D %T")
KEEPALIVED="/tmp"
case $STATE in
    "MASTER")
        touch $KEEPALIVED/MASTER
        echo "$NOW Becoming MASTER" >> $KEEPALIVED/COUNTER
        /usr/libexec/keepalived/master.sh
        exit 0
        ;;
    "BACKUP")
        rm $KEEPALIVED/MASTER
        echo "$NOW Becoming BACKUP" >> $KEEPALIVED/COUNTER
        /usr/libexec/keepalived/backup.sh
        exit 0
        ;;
    "FAULT")
        rm $KEEPALIVED/MASTER
        echo "$NOW Becoming FAULT" >> $KEEPALIVED/COUNTER
        /usr/libexec/keepalived/backup.sh
        exit 0
        ;;
    *)
        echo "unknown state"
        echo "$NOW Becoming UNKNOWN" >> $KEEPALIVED/COUNTER
        exit 1
        ;;
esac
5. Master Script (change the owner to keepalived_script if using the latest version of keepalived):
[root@ip-10-0-1-207 ec2-user]# cat /usr/libexec/keepalived/master.sh
#!/bin/bash
exec >> /tmp/master.log
exec 2>&1
#set -x
AWS_ACCESS_KEY=Key
AWS_SECRET_KEY=Secret
export EC2_URL=https://ec2.eu-west-1.amazonaws.com
export EC2_HOME="/opt/aws/apitools/ec2"
export JAVA_HOME=/usr/lib/jvm/jre
export AWS_CLOUDWATCH_HOME=/opt/aws/apitools/mon
export AWS_PATH=/opt/aws
export AWS_AUTO_SCALING_HOME=/opt/aws/apitools/as
export AWS_ELB_HOME=/opt/aws/apitools/elb
EIP=34.249.49.35   # Elastic IP to be associated
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
/opt/aws/bin/ec2-associate-address -O $AWS_ACCESS_KEY -W $AWS_SECRET_KEY -U $EC2_URL $EIP -instance $INSTANCE_ID -p $PRIVATE_IP --allow-reassociation
echo "$(date) I'm master now"
6. Backup Script (change the owner to keepalived_script if using the latest version of keepalived):
[root@ip-10-0-1-207 ec2-user]# cat /usr/libexec/keepalived/backup.sh
#!/bin/bash
exec >> /tmp/backup.log
exec 2>&1
#set -x
echo "$(date) I'm backup, nothing to do"
7. Make the keepalived scripts executable (on both instances):
[root@ip-10-0-1-207 ec2-user]# chmod +x /usr/libexec/keepalived/*sh && chmod 700 /usr/libexec/keepalived/*sh
8. Enable HAproxy and Keepalived on start-up (on both instances):
[root@ip-10-0-1-207 ec2-user]# chkconfig haproxy on && chkconfig keepalived on
9. Start HAproxy and Keepalived on the master and check the logs:
[root@ip-10-0-1-207 ec2-user]# service haproxy start && service keepalived start
[root@ip-10-0-1-207 ec2-user]# tail -n 30 /var/log/messages
10. Start HAproxy and Keepalived on the slave and check the logs:
[root@ip-10-0-1-190 ec2-user]# service haproxy start && service keepalived start
[root@ip-10-0-1-190 ec2-user]# tail -n 30 /var/log/messages
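One simple, purely illustrative way to verify the failover behaviour once both instances are running (34.249.49.35 is the example Elastic IP used in the master script above, and the log paths are the ones configured in notify.sh/master.sh/backup.sh):
# From any client with network access: check that the database port answers behind the EIP
nc -zv 34.249.49.35 8563
# On the current master: simulate an HAproxy failure; keepalived should detect it and demote the node
[root@ip-10-0-1-207 ec2-user]# service haproxy stop
[root@ip-10-0-1-207 ec2-user]# tail -n 5 /tmp/backup.log
# On the slave: confirm it became master and re-associated the EIP
[root@ip-10-0-1-190 ec2-user]# tail -n 5 /tmp/master.log
# Start HAproxy on the old master again afterwards
[root@ip-10-0-1-207 ec2-user]# service haproxy start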
View full article
Here is a list of lessons we learned while migrating our reporting tools from MySQL (5.6 with strict mode off) to Exasol.
View full article
Background
To export CSV files from Microsoft SQL Server.

Options to export CSV from Microsoft SQL Server

Option 1: Using the SQL Server Import and Export Data Wizard / DTSWizard (the tool should be included in your SQL Server installation):
1. Open the wizard.
2. Select your data source and the database you want to export, then continue with Next.
3. Select Flat File Destination as your destination and browse to the desired file path for your csv file. It is necessary to use *.csv as the file extension.
Note: If you want to overwrite an existing file, you have to delete the old file first! Otherwise the exported data will be appended to the existing file.
4. Change the Text qualifier to " (double quotes) and set UTF-8 as Code page, then continue by clicking Next.
5. Since double quotes will not be escaped properly, select 'Write a query to specify the data to transfer' and proceed by clicking Next.
6. Now fill in a valid SELECT statement to select the data you want to export.
Note: It is important to use the REPLACE function to ensure proper masking of double quotes. It is only necessary for columns that might contain strings with double quotes. (Documentation of REPLACE: https://msdn.microsoft.com/de-de/library/ms186862.aspx)
Example:
SELECT [Customer Key],
       [WWI Customer ID],
       REPLACE([Customer], '"', '""') AS 'Customer',
       REPLACE([Bill to Customer], '"', '""') AS 'Bill to Customer',
       REPLACE([Category], '"', '""') AS 'Category',
       REPLACE([Buying Group], '"', '""') AS 'Buying Group',
       REPLACE([Primary Contact], '"', '""') AS 'Primary Contact',
       [Postal Code],
       [Valid From],
       [Valid To],
       [Lineage Key]
FROM Dimension.Customer
7. Confirm your settings for the Flat File Destination and continue with Next.
8. Start the export by clicking Finish. You will receive a short report about the successful export.

Option 2: Using the bcp Utility:
As this method is somewhat unwieldy, we recommend using Option 1. If, for some reason, you are not able to use the DTS Wizard, you can use the following instructions to export your data with the bcp Utility. (Documentation of the bcp Utility: https://msdn.microsoft.com/en-us/library/ms162802.aspx)
The needed bcp command looks something like this:
bcp "SELECT STATEMENT" queryout "OUTPUT FILEPATH" -c -t"," -r"\n" -q -S SERVERNAME -T
To ensure a correct export, the SELECT statement has to meet certain criteria:
- All columns that might contain commas, double quotes or any other special characters have to be enclosed by char(34) (the ASCII code for "). This will add double quotes before and after the exported field.
Example: "SELECT [Customer Key], char(34) + [Customer] + char(34) FROM ..."
- All columns that might contain double quotes have to be selected with the REPLACE function. This way, double quotes will be masked properly in your csv file ("Example" -> ""Example"").
Example: "SELECT [Customer Key], char(34) + [Customer] + char(34), char(34) + REPLACE([Category], char(34), char(34) + char(34)) + char(34) FROM ..."
Note: Since the SELECT statement has to start with double quotes, you have to use char(34) as a replacement for " as well. Otherwise the console would interpret it as the end of the SELECT statement.
REPLACE([Category], '"', '""') -> REPLACE([Category], char(34), char(34) + char(34))
(Documentation of REPLACE: https://msdn.microsoft.com/de-de/library/ms186862.aspx)
- All columns that allow NULL entries must use the COALESCE function.
This guarantees proper conversion from NULL to "" (empty string).
Example: "SELECT [Customer Key], char(34) + [Customer] + char(34), char(34) + REPLACE([Category], char(34), char(34) + char(34)) + char(34), char(34) + COALESCE([Primary Contact], '') + char(34) FROM ..."
Note: COALESCE uses two single quotes as its second parameter!
You might have to combine the functions depending on your database design.
Example: "SELECT [Customer Key], char(34) + [Customer] + char(34), char(34) + COALESCE(REPLACE([Category], char(34), char(34) + char(34)), '') + char(34), char(34) + COALESCE([Primary Contact], '') + char(34) FROM ..."
A complete bcp command can look as follows:
bcp "SELECT [Customer Key], [WWI Customer ID], char(34) + [Customer] + char(34), char(34) + [Bill to Customer] + char(34), char(34) + REPLACE([Category], char(34), char(34)+char(34)) + char(34), char(34) + [Buying Group] + char(34), char(34) + COALESCE(REPLACE([Primary Contact], char(34), char(34)+char(34)), '') + char(34), char(34) + [Postal Code] + char(34), [Valid From], [Valid To], [Lineage Key] FROM WideWorldImporters.Dimension.Customer" queryout "C:\Test.csv" -c -t"," -r"\n" -q -S HW1729 -T
Start the export by pressing Enter.
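Once the file has been exported, it can be loaded into Exasol, for example from EXAplus or another client that supports IMPORT ... FROM LOCAL. The sketch below is illustrative only: the target schema and table name (retail.customer) are hypothetical, the table must already exist with matching columns, and you would add SKIP = 1 if your file contains a header row.
IMPORT INTO retail.customer
FROM LOCAL CSV FILE 'C:\Test.csv'
COLUMN SEPARATOR = ','
COLUMN DELIMITER = '"'
ROW SEPARATOR = 'CRLF';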
View full article
Background
To export CSV files from IBM DB2.

Options to export CSV from IBM DB2

Option 1: Using the IBM command line processor:
1. Connect to the database you want to export.
2. Use the EXPORT TO command combined with a SELECT query to export a table (documentation: https://www.ibm.com/support/knowledgecenter/SSEPGG_9.8.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0008303.html).
We recommend applying the following parameters to ensure compatible formatting for your exported csv file:
- MODIFIED BY datesiso -> exports dates in ISO format (YYYY-MM-DD)
- MODIFIED BY timestampformat="YYYY-MM-DD HH:MM:SS" -> exports timestamps in the selected format
- COALESCE(column,'') -> checks whether the content of a data field equals NULL and replaces it with '' (empty string) if necessary. This addition is needed for every selected column that allows NULL values.
Example: see the illustrative command at the end of this article.
Additional Notes: The IBM command line processor automatically masks double quotes ("Example" -> ""Example"").

Option 2: Using the IBM Data Studio:
1. Connect to the database you want to export.
2. Select the table you want to export.
3. Open the context menu by right-clicking on the table.
4. Select the option Unload -> With Export Utility to open the integrated export tool.
5. Select JDBC as Run method.
6. Pick your path and file name for the csv file.
7. To specify additional options, select the tab called Options (top left).
8. To ensure a compatible format for your csv file, check the available control boxes accordingly.
9. Click Run to start exporting the table. You will find the csv file in your selected path.
Additional Notes: IBM Data Studio automatically converts NULL to "" (empty string) and automatically masks double quotes ("Example" -> ""Example"").
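As referenced in Option 1 above, a sketch of what the export could look like from the DB2 command line processor. The database MYDB, the table DIM.CUSTOMER and its columns are hypothetical placeholders; adapt names, columns and the file path to your environment.
db2 => CONNECT TO MYDB
db2 => EXPORT TO /tmp/customer.csv OF DEL MODIFIED BY datesiso timestampformat="YYYY-MM-DD HH:MM:SS" SELECT CUSTOMER_KEY, COALESCE(CATEGORY, ''), VALID_FROM FROM DIM.CUSTOMER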
View full article
This article describes a solution for users who are not able to log in via LDAP
View full article