Environment Management
Manage the environment around the database, such as cloud, monitoring, EXAoperation, and scalability
Background

Deploy a single-node Exasol database as a Docker image for testing purposes.

Blog snapshot

This blog will show you how to deploy a single-node Exasol database as a Docker image for testing purposes. Before we go into the step-by-step guide, please read through the following prerequisites and recommendations to make sure that you're prepared.

Prerequisites

Host OS: Currently, Exasol only supports Docker on Linux. It is not possible to use Docker for Windows to deploy the Exasol database, because Exasol requires O_DIRECT access.
Docker-installed Linux machine: In this article, I'm going to use a CentOS 7.6 virtual machine with the latest version of Docker (currently version 19.03).
Privileged mode: Docker privileged mode is required for permissions management, UDF support, and environment configuration and validation (sysctl, hugepages, block devices, etc.).
Memory requirements for the host environment: Each database instance needs at least 2 GiB RAM. Exasol recommends that the host reserves at least 4 GiB RAM for each running Exasol container. Since I'm going to deploy a single-node container in this article, I will use 6 GiB RAM for the VM.
Service requirements for the host environment: NTP should be configured on the host OS. Also, the RNG daemon must be running to provide enough entropy for the Exasol services in the container (see the quick checks after the recommendations below).

Recommendations

Performance optimization: Exasol strongly recommends setting the CPU governor on the host to performance to avoid serious performance problems. You can use the cpupower utility or the command below to set it.

Using the cpupower utility:

    $ sudo cpupower -c all frequency-set -g performance

Or change the content of the scaling_governor files:

    $ for F in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance >$F; done

Hugepages: Exasol recommends enabling hugepages for hosts with at least 64 GB RAM. To do so, we have to set the Hugepages option in EXAConf to either auto, host, or the number of hugepages per container. If we set it to auto, the number of hugepages will be determined automatically, depending on the DB settings. When setting it to host, the number of hugepages from the host system will be used (i.e. /proc/sys/vm/nr_hugepages will not be changed). However, /proc/sys/vm/hugetlb_shm_group will always be set to an internal value.

Resource limitation: It's possible to limit the resources of the Exasol container with the following docker run options:

    $ docker run --cpuset-cpus="1,2,3,4" --memory=20g --memory-swap=20g --memory-reservation=10g exasol/docker-db:<version>

This is especially recommended if we need multiple Exasol containers (or other services) on the same host. In that case, we should evenly distribute the available CPUs and memory across the Exasol containers. You can find more detailed information at https://docs.docker.com/config/containers/resource_constraints/
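To verify the service prerequisites mentioned above (NTP and the RNG daemon), the quick checks below can help on a systemd-based host such as CentOS 7. This is a minimal sketch; the exact service names (chronyd vs. ntpd, rngd) depend on your distribution and setup.

    $ systemctl status chronyd    # or ntpd, depending on which NTP service your host uses
    $ systemctl status rngd       # RNG daemon that provides entropy for the container
    $ cat /proc/sys/kernel/random/entropy_avail   # should not be close to zero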
How to deploy a single-node Exasol database as a Docker image

Step 1: Create a directory to persistently store data from the container

To store all persistent data from the container, I'm going to create a directory. I will name it "container_exa" and create it in the home folder of the Linux user.

    $ mkdir $HOME/container_exa/

Set the CONTAINER_EXA variable to the folder:

    $ echo 'export CONTAINER_EXA="$HOME/container_exa/"' >> ~/.bashrc && source ~/.bashrc

Step 2: Create a configuration file for the Exasol database and Docker container

The command for creating a configuration file is:

    $ docker run -v "$CONTAINER_EXA":/exa --rm -i exasol/docker-db:<version> init-sc --template --num-nodes 1

Since I'm going to use the latest version of Exasol (currently 6.2.6), I will use the latest tag. --num-nodes is the number of containers; we need to change this value if we want to deploy a cluster.

    $ docker run -v "$CONTAINER_EXA":/exa --rm -i exasol/docker-db:latest init-sc --template --num-nodes 1

NOTE: You need to add the --privileged option because the host directory belongs to root.

After the command has finished, the directory $CONTAINER_EXA contains all subdirectories as well as an EXAConf template (in /etc).

Step 3: Complete the configuration file

The configuration has to be completed before the Exasol DB container can be started. The configuration file is EXAConf and it's stored in the "$CONTAINER_EXA/etc" folder. To be able to start a container, the following options have to be configured:

- A private network for all nodes (a public network is not mandatory in the Docker version of Exasol DB)
- EXAStorage device(s)
- EXAVolume configuration
- Network port numbers
- Nameservers

Many other options can be configured in the EXAConf file; I will cover most of them in future articles.

1) A private network for the node

    $ vim $CONTAINER_EXA/etc/EXAConf

    [Node : 11]
    PrivateNet = 10.10.10.11/24 # <-- replace with the real network

In this case, the IP address of the Linux virtual machine is 10.1.2.4/24.

2) EXAStorage device configuration

Use the dev.1 file as an EXAStorage device for the Exasol DB and mount the LVM disk to it.

3) EXAVolume configuration

Configure the volume size for the Exasol DB before starting the container. There are 3 types of volumes available for Exasol, and volumes in Exasol serve three different purposes. You can find detailed information at https://docs.exasol.com/administration/on-premise/manage_storage/volumes.htm?Highlight=volumes
Since it's recommended to use less disk space than the size of the LVM disk (because Exasol will create a temporary volume and there should be free disk space for it), I'd recommend using 20 GiB for the volume. The actual size of the volume increases or decreases depending on the data stored.

4) Network port numbers

Since you should use the host network mode (see "Start the cluster" below), you have to adjust the port numbers used by the Exasol services. The one that's most likely to collide is the SSH daemon, which uses the well-known port 22. I'm going to change it to 2222 in the EXAConf file.
The other Exasol services (e.g. Cored, BucketFS, and the DB itself) use port numbers above 1024. You can change them all by editing EXAConf, but in this example I'm going to use the default ports:

- Port 22 – SSH connection
- Port 443 – XMLRPC
- Port 8888 – port of the database
- Port 6583 – port of BucketFS

5) Nameservers

We can define a comma-separated list of nameservers for this cluster in EXAConf under the [Global] section. I'll use the Google DNS address 8.8.8.8.
Set the checksum within EXAConf to 'COMMIT'. This is the EXAConf integrity check (introduced in version 6.0.7-d1) that protects EXAConf from accidental changes and detects file corruption. It can be found in the 'Global' section, near the top of the file.
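As an illustration only, the relevant EXAConf entries for points 1) and 5) could end up looking roughly like the excerpt below. The exact key names and default values depend on the EXAConf template generated by your image version (NameServers and Timezone are assumed names here), so keep the structure of your generated template and only adjust the values.

    [Global]
        Checksum = COMMIT            # integrity check - set to COMMIT after editing
        NameServers = 8.8.8.8        # comma-separated list of nameservers (assumed key name)
        Timezone = Europe/Berlin     # adjust to your requirements (assumed key name)

    [Node : 11]
        PrivateNet = 10.10.10.11/24  # <-- replace with the real network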
Please also adjust the Timezone depending on your requirements.

Step 4: Create the EXAStorage device files

EXAStorage is a distributed storage engine. All data is stored inside volumes, and it also provides a failover mechanism. I'd recommend using a 32 GB LVM disk for EXAStorage:

    $ lsblk

IMPORTANT: Each device should be slightly bigger (~1%) than the required space for the volume(s), because a part of it will be reserved for metadata and checksums.

Step 5: Start the cluster

The cluster is started by creating all containers individually and passing each of them its ID from the EXAConf. Since we'll be deploying a single-node Exasol DB, the node ID will be n11 and the command is:

    $ docker run --name exasol-db --detach --network=host --privileged -v $CONTAINER_EXA:/exa -v /dev/mapper/db-storage:/exa/data/storage/dev.1 exasol/docker-db:latest init-sc --node-id 11

NOTE: This example uses the host network stack, i.e. the containers directly access a host interface to connect. There is no need to expose ports in this mode: they are all accessible on the host.

Let's use the "docker logs" command to check the log files:

    $ docker logs -f exasol-db

We can see 5 different stages in the logs. Stage 5 is the last one; if we can see that the node is online and the stage is finished, the container and database started successfully.

    $ docker container ls

Let's get a bash shell in the container and check the status of the database and volumes:

    $ docker exec -it exasol-db bash

Inside the container, you can run some Exasol-specific commands to manage the database and services. You can find some of these commands below:

$ dwad_client shortlist: Lists the names of the databases.
$ dwad_client list: Shows the current status of the databases.

As we can see, the name of the database is DB1 (this can be configured in EXAConf) and the state is running. "Connection state: up" means we can connect to the database via port 8888.

$ csinfo -D: Prints HDD info.
$ csinfo -v: Prints information about one (or all) volume(s).

As we can see, the size of the data volume is 20.00 GiB. You can also find information about the temporary volume in the output of the csinfo -v command.
Since the database is running and the connection state is up, let's try to connect and run an example SQL query. You can use any SQL client or the Exaplus CLI to connect. I'm going to use DBeaver in this article; you can find more detailed information at https://docs.exasol.com/connect_exasol/sql_clients/dbeaver.htm
I'm using the public IP address of the virtual machine and port 8888, which is configured as the database port in EXAConf. By default, the password of the sys user is "exasol". Let's run an example query:

    SELECT * FROM EXA_SYSCAT;

Conclusion

In this article, we deployed a single-node Exasol database in a Docker container and went through the EXAConf file. In the future, I will be sharing new articles about running Exasol on Docker and will analyze the EXAConf file and Exasol services in depth.

Additional References

https://github.com/EXASOL/docker-db
https://docs.docker.com/config/containers/resource_constraints/
https://docs.exasol.com/administration/on-premise/manage_storage/volumes.htm?Highlight=volumes
View full article
  Background Deploying 2+1 Exasol Cluster on Amazon Web Service (AWS) Post snapshot: This post will show you: How to deploy a 2+1 Exasol Cluster on Amazon Web Services (AWS) Before we go into the step-by-step guide, please read through the following prerequisites and recommendations to make sure that you're prepared Prerequisites AWS Account: Make sure you have an AWS account with the relevant permissions. If you do not have an AWS account, you can create one from the Amazon Console. AWS Key Pair: You have a Key Pair created. AWS uses public-key cryptography to secure the log-in information for your instance. For more information on how to create a Key Pair, see Amazon EC2 Key Pairs in the AWS documentation. Subscription on AWS Marketplace: You must have subscribed to one of the following Exasol subscriptions on AWS Marketplace: Exasol Analytic Database (Single Node / Cluster, Bring-Your-Own-License) Exasol Analytic Database (Single Node / Cluster, Pay-As-You-Go) How to deploy a 2+1 Exasol Cluster Step 1 Open https://cloudtools.exasol.com/ to access the cloud deployment wizard in your browser and choose your cloud provider. In this case, the Cloud Provider should be Amazon Web Services. Select your region from the drop-down list. I'm going to deploy our cluster in Frankfurt   Step 2 On the Configuration screen, by default, you see the Basic Configuration page. You can choose one of the existing configurations made by Exasol. Basic Configuration: Shows a minimum specification for your data size. Balanced Configuration: Shows an average specification for your data size for good performance. High-Performance Configuration: Shows the best possible specification for your data size for high performance. In this case, I'm going to choose the Advanced Configuration option. If you are going to deploy a cluster for production purposes we recommend discussing sizing options with the Exasol support team or use one of the existing configurations made by Exasol. RAW Data Size (in TB): You can add the required raw data size on your own, otherwise, it will be calculated automatically after setting Instance type and node count. License Model: Pay as you go (PAYG) Pay as you go (PAYG) license model is a flexible and scalable license model for Exasol's deployment on a cloud platform. In this mode, you pay for your cloud resources and Exasol software through the cloud platform's billing cycle. You can always change your setup later to scale up or down your system and the billing changes accordingly. Bring your own license (BYOL) Bring your own license (BYOL) license model lets you choose a static license for Exasol software and a dynamic billing for the cloud resources. In this model , you need to purchase a license from Exasol and add it to your cloud instance. This way, you pay only for the cloud resources through the cloud platform's billing cycle and there is no billing for the software. You can always change your setup later to scale up or down your system and the billing changes accordingly. However, there is a limit for the maximum scaling based on your license type (DB RAM or raw data size). You can find detailed information about licensing in https://docs.exasol.com/administration/aws/licenses.htm System Type: You can choose one of the Exasol Single Node and Enterprise Cluster options. I'm going to choose the Enterprise Cluster option. Instance Family: You can choose one of the instance types of AWS EC2 service to deploy virtual machines for Exasol nodes. 
You can find detailed information about the instance types of AWS EC2 at https://aws.amazon.com/ec2/instance-types/

The number of DB Nodes: We need to determine the total number of active data nodes in this section.

After finishing the configuration, we can see the RAW data size calculated automatically for us. On the left side of the screen, we can see the details of our setup on AWS. If you have a license from Exasol, please choose the BYOL option as the License Model; this will decrease the Estimated Costs.

Step 3

After clicking Continue to proceed with the deployment, we can see the Summary page. Here we can review the cluster configuration and choose a deployment option. We have the option to create a new VPC or use an existing VPC for the CloudFormation stack.

- Create New VPC will create a new VPC and provision all resources within it.
- Use Existing VPC will provision Exasol into an existing VPC subnet of your choice. For more information on VPC, see Amazon Virtual Private Cloud.

Based on this VPC selection, the parameters on the stack creation page on AWS will change when you launch the stack. For more information on the stack parameters, see Template Parameters. If you want to download the configuration files and upload them later to your AWS stack through the CloudFormation Console, you can click the CloudFormation Templates option on the left side.
Click Launch Stack. You will be redirected to the Quick create stack page on AWS.

Step 4

After being redirected to the Quick create stack page on AWS, I'm going to fill in the required stack parameters: Stack Name, Key Pair, SYS User Password, and ADMIN User Password.
In the VPC/Network/Security section, the Public IPs are set to false by default. I'm going to set this to true. If you want to keep the Public IP address set to false, then you need to enable a VPN or other methods to be able to access your instance.
(Optional) License is applicable if your subscription model is Bring-your-own-license. Paste the entire content of the license file you have into the space provided.
For more information about the stack parameters, please check the table at https://docs.exasol.com/cloud_platforms/aws/installation_cf_template.htm?Highlight=Template%20Parameters
After filling in the required parameters, I'm going to click Create Stack to continue deploying Exasol in the CloudFormation Console. We can view the stack we created under AWS CloudFormation > Stacks, with the status CREATE_IN_PROGRESS. Once the stack is created successfully, the status changes to CREATE_COMPLETE. Additionally, we can monitor the progress in the Events tab for the stack.
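If you prefer the command line over the console, you can also watch the stack status with the AWS CLI. This is a minimal sketch; the stack name is a placeholder and the AWS CLI must already be configured with suitable credentials and region:

$ aws cloudformation describe-stacks --stack-name exasol-cluster --query "Stacks[0].StackStatus" --output text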
Step 5: Determine the Public IP Address

We need the Public IP or DNS name displayed in the EC2 Console to connect to the database server. To find the Public IP or DNS name:

- Open the EC2 Dashboard from the AWS Management Console.
- Click on Running Instances. The Instances page is displayed with all the running instances.
- Select the name of the instance you created (in this case, the exasol-cluster management and data nodes). We need the IP address of the management node.
- In the Description section, the address displayed for Public DNS (IPv4) is the address of the database server.

If the Public IP parameter for your stack is set to false, you need to enable a VPN or other methods to connect to the database server via the private IP addresses of the instances.

Step 6: Access the initialization page

Copy and paste this IP address, prefixed with https, into a browser. In the case of an Exasol cluster deployment, I need to copy the IP address or DNS name of the management node. After confirming the digital certificate, the initialization screen is displayed.
Once the installation is complete, I will be redirected to the EXAoperation screen. It may take up to 45 minutes for EXAoperation to be online after deployment. You can log in with the admin user name and password provided while creating your stack.

Step 7: Connect to the database

In this case (a 2+1 cluster deployment), I need to use the Public IP address of a data node along with the admin user name and password to connect with an SQL client. I can also connect to all the data nodes by entering the public IP addresses of all the nodes separated by commas.

Additional Notes

Connect to Exasol: After installing Exasol on AWS, you can do the following:

- Install drivers required to connect to other tools.
- Connect SQL clients to Exasol.
- Connect Business Intelligence (BI) tools to Exasol.
- Connect Data Integration - ETL tools to Exasol.
- Connect Data Warehouse Automation tools to Exasol.

Load Data: After you have connected your tool of choice to Exasol, you can load your data into Exasol and process it further. To learn more about loading data into Exasol, see Loading Data.

Conclusion

In this article, we deployed a 2+1 Exasol cluster on AWS. In the future, I will be sharing new articles about managing the Exasol cluster on AWS, using Lambda functions to schedule the start/stop of a cluster, etc.

Additional References

https://cloudtools.exasol.com
https://docs.exasol.com/administration/aws.htm
View full article
With this article, you will learn how to add and change database parameters and their values.

1. Log in to your Exasol container:

$ docker exec -it <container_name> /bin/bash

2. Inside the container, go to the /exa/etc/ folder and open the EXAConf file with a text editor of your choice:

$ cd /exa/etc
$ vim EXAConf

3. Under the DB section, right above the [[JDBC]] sub-section, add a line that says Params with the necessary parameters:

[DB : DB1]
Version = 6.1.5
MemSize = 6 GiB
Port = 8563
Owner = 500 : 500
Nodes = 11,12,13
NumActiveNodes = 3
DataVolume = DataVolume1
Params = -useIndexWrapper=0 -disableIndexIteratorScan=1
[[JDBC]]
BucketFS = bfsdefault
Bucket = default
Dir = drivers/jdbc
[[Oracle]]
BucketFS = bfsdefault
Bucket = default
Dir = drivers/oracle

4. Change the value of Checksum in EXAConf:

$ sed -i '/Checksum =/c\ Checksum = COMMIT' /exa/etc/EXAConf

5. Commit the changes:

$ exaconf commit

6. At this point you have 2 options:

6.1. Restart the container:

$ dwad_client stop-wait <database_instance>    # Stop the database instance (inside the container)
$ csctrl -d                                    # Stop the storage service (inside the container)
$ exit                                         # Exit the container
$ docker restart <container_name>              # Restart the container
$ docker exec -it <container_name> /bin/bash   # Log in to the container's BASH environment
$ dwad_client setup-print <database_instance>  # See the database parameters
...
PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -useIndexWrapper=0 -disableIndexIteratorScan=1
...

As you can see from the output above, the parameters have been added. However, rebooting the cluster can cause some downtime. In order to shorten the duration of your downtime, you can try the method below.

6.2. Use a configuration file to change the parameters by rebooting only the database, not the container:

$ dwad_client setup-print <database_instance> > db1.cfg   # Dump the database parameters to a file
$ vim db1.cfg                                             # Edit the configuration file

When you open the file, find the line starting with PARAMS and add the parameter you need, like:

PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -useIndexWrapper=0 -disableIndexIteratorScan=1

After adding the parameters, save the file and execute the following commands:

$ dwad_client stop-wait <database_instance>       # Stop the database instance (inside the container)
$ dwad_client setup <database_instance> db1.cfg   # Set up the database with the db1.cfg configuration file (inside the container)
$ dwad_client start-wait <database_instance>      # Start the database instance (inside the container)

This will add the database parameters, but the change will not persist across container reboots. By adding the parameters this way you shorten your downtime, but the changes aren't permanent. After doing this, we would recommend also performing method 6.1, in case you decide to reboot sometime in the future.

7. Verify the parameters:

7.1. With dwad_client list
7.2. With dwad_client setup-print <database_instance>
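As a quick sanity check from the shell, you can also grep for the new parameters in the output of the command used in step 7.2. This is a minimal sketch that reuses only the commands shown above; the database instance name DB1 is a placeholder:

$ dwad_client setup-print DB1 | grep -oE 'useIndexWrapper=[01]|disableIndexIteratorScan=[01]'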
View full article
Background

ConfD is the Exasol configuration and administration daemon that runs on all nodes of an Exasol cluster. It provides an interface for cluster administration and synchronizes the configuration across all nodes. In this article, you can find examples of how to manage an Exasol Docker cluster using XML-RPC.

Prerequisites and Notes

Please note that this is still under development and is not officially supported by Exasol. We will try to help you as much as possible, but can't guarantee anything.
Note: All SSL checks are disabled in these examples in order to avoid exceptions with self-signed certificates.
Note: If you get an error message like xmlrpclib.ProtocolError: <ProtocolError for root:testing@IPADDRESS:443/: 401 Unauthorized>, please log in to the cluster and reset the root password via the exaconf passwd-user command.
Note: All of the examples were tested with Exasol version 6.2.7 and Python 2.7.

Explanation & Examples

We need to create a connection and get the master IP before running any ConfD job via XML-RPC. You can find out how to do it below.

Import the required modules and get the master IP:

>>> import xmlrpclib, requests, urllib3, ssl
>>> urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

Get the current master IP (you can use any valid IP in the cluster for this request):

>>> master_ip = requests.get("https://11.10.10.11:443/master", verify=False).content

In this case, 11.10.10.11 is the IP address of one of the cluster nodes.

Create the connection.
Note: We assume you've set the root password to "testing". You can set a password via the exaconf passwd-user command.

>>> connection_string = "https://root:testing@%s:443/" % master_ip
>>> sslcontext = ssl._create_unverified_context()
>>> conn = xmlrpclib.ServerProxy(connection_string, context=sslcontext, allow_none=True)

The list of examples:

Example 1 - 2: Database jobs
Example 3: Working with archive volumes
Example 4: Cluster node jobs
Example 5: EXAStorage volume jobs
Example 6: Working with backups

Example 1: Database jobs

How to use ConfD jobs to get the database status and information about a database.

Run a job to check the status of the database.
Note: In this example we assume the database name is "DB1". Please adjust the database name.

>>> conn.job_exec('db_state', {'params': {'db_name': 'DB1'}})

Output:

{'result_name': 'OK', 'result_output': 'running', 'result_desc': 'Success', 'result_jobid': '12.2', 'result_code': 0}

As you can see in the output, 'result_output' is 'running' and 'result_desc' is 'Success'. This means the database is up and running.
Note: If you want to format the output, you can use the pprint module.

Run a job to get information about the database:

>>> import pprint
>>> pprint.pprint(conn.job_exec('db_info', {'params': {'db_name': 'DB1'}}))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.89',
 'result_name': 'OK',
 'result_output': {'connectible': 'Yes',
                   'connection string': '192.168.31.171:8888',
                   'info': '',
                   'name': 'DB1',
                   'nodes': {'active': ['n11'], 'failed': [], 'reserve': []},
                   'operation': 'None',
                   'persistent volume': 'DataVolume1',
                   'quota': 0,
                   'state': 'running',
                   'temporary volume': 'v0001',
                   'usage persistent': [{'host': 'n11', 'size': '10 GiB', 'used': '6.7109 MiB', 'volume id': '0'}],
                   'usage temporary': [{'host': 'n11', 'size': '1 GiB', 'used': '0 B', 'volume id': '1'}]}}

Example 2: Database jobs.
How to list, start and stop databases.

Run a job to list the databases in the cluster:

>>> conn.job_exec('db_list')

Output example:

>>> pprint.pprint(conn.job_exec('db_list'))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.91',
 'result_name': 'OK',
 'result_output': ['DB1']}

Stop the DB1 database. Run a job to stop database DB1 in the cluster:

>>> conn.job_exec('db_stop', {'params': {'db_name': 'DB1'}})
{'result_name': 'OK', 'result_desc': 'Success', 'result_jobid': '12.11', 'result_code': 0}

Run a job to confirm the state of the database DB1:

>>> conn.job_exec('db_state', {'params': {'db_name': 'DB1'}})
{'result_name': 'OK', 'result_output': 'setup', 'result_desc': 'Success', 'result_jobid': '12.12', 'result_code': 0}

Note: 'result_output': 'setup' means the status of the database is "setup".

Run a job to start database DB1 in the cluster:

>>> conn.job_exec('db_start', {'params': {'db_name': 'DB1'}})
{'result_name': 'OK', 'result_desc': 'Success', 'result_jobid': '12.13', 'result_code': 0}

Run a job to verify that the database DB1 is up and running:

>>> conn.job_exec('db_state', {'params': {'db_name': 'DB1'}})
{'result_name': 'OK', 'result_output': 'running', 'result_desc': 'Success', 'result_jobid': '12.14', 'result_code': 0}

Example 3: Working with archive volumes

Example 3.1: Add a remote archive volume to the cluster

Name: remote_volume_add
Description: Add a remote volume
Parameters: vol_type, url; optional: remote_volume_name, username, password, labels, options, owner, allowed_users; substitutes: remote_volume_id; allowed_groups: root, exaadm, exastoradm
Notes:
* 'ID' is assigned automatically if omitted (10000 + next free ID)
* 'ID' must be >= 10000 if specified
* 'name' may be empty (for backwards compat.) and is generated from 'ID' in that case ("r%04i" % ('ID' - 10000))
* if 'owner' is omitted, the requesting user becomes the owner

>>> conn.job_exec('remote_volume_add', {'params': {'vol_type': 's3', 'url': 'http://bucketname.s3.amazonaws.com', 'username': 'ACCESS-KEY', 'password': 'BASE64-ENCODED-SECRET-KEY'}})
{'result_revision': 18, 'result_jobid': '11.3', 'result_output': [['r0001', 'root', '/exa/etc/remote_volumes/root.0.conf']], 'result_name': 'OK', 'result_desc': 'Success', 'result_code': 0}

Example 3.2: List all existing remote volume names

Name: remote_volume_list
Description: List all existing remote volumes
Parameter: None
Returns: a list containing all remote volume names

>>> pprint.pprint(conn.job_exec('remote_volume_list'))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.94',
 'result_name': 'OK',
 'result_output': ['RemoteVolume1']}

Example 3.3: Connection state of a given remote volume

Name: remote_volume_state
Description: Return the connection state of the given remote volume (Online / Unmounted / Connection problem)
Parameter: remote_volume_name; substitutes: remote_volume_id
Returns: List of the connection state of the given remote volume on all nodes

>>> conn.job_exec('remote_volume_state', {'params': {'remote_volume_name': 'r0001'}})
{'result_name': 'OK', 'result_output': ['Online'], 'result_desc': 'Success', 'result_jobid': '11.10', 'result_code': 0}

Example 4: Manage cluster nodes

Example 4.1: Get the node list

Name: node_list
Description: List all cluster nodes (from EXAConf)
Parameter: None
Returns: Dict containing all cluster nodes.
>>> pprint.pprint( conn.job_exec( 'node_list' )) { 'result_code' : 0, 'result_desc' : 'Success' , 'result_jobid' : '11.95' , 'result_name' : 'OK' , 'result_output' : { '11' : { 'disks' : { 'disk1' : { 'component' : 'exastorage' , 'devices' : [ 'dev.1' ], 'direct_io' : True, 'ephemeral' : False, 'name' : 'disk1' }}, 'docker_volume' : '/exa/etc/n11' , 'exposed_ports' : [[8888, 8899], [6583, 6594]], 'id' : '11' , 'name' : 'n11' , 'private_ip' : '192.168.31.171' , 'private_net' : '192.168.31.171/24' , 'uuid' : 'C5ED84F591574F97A337B2EC9357B68EF0EC4EDE' }}}    Example 4.2: get node state Name Description Parameter Returns node_state State of all nodes (online, offline, deactivated)  None  A list containing a string representing the current node state.     >>> pprint.pprint(conn.job_exec( 'node_state' )) { 'result_code' : 0, 'result_desc' : 'Success' , 'result_jobid' : '11.96' , 'result_name' : 'OK' , 'result_output' : { '11' : 'online' , 'booted' : { '11' : 'Tue Jul 7 14:14:07 2020' }}}   other available options: node_add Add a node to the cluster priv_net optional: id, name, pub_net, space_warn_threshold, bg_rec_limit allowed_groups: root, exaadm int node_id node_remove Remove a node from the cluster node_id optional: force allowed_groups: root, exaadm None node_info Single node info with extended information (Cored, platform, load, state) None See the output of  cosnodeinfo node_suspend Suspend node, i. e. mark it as "permanently offline". node_id allowed_groups: root, exaadm mark one node as suspended node_resume Manually resume a suspended node. node_id allowed_groups: root, exaadm unmark one suspended node   Example 5: EXAStorage volume jobs  Example 5.1: list EXAStorage volumes Name Description Parameter Returns st_volume_list List all existing volumes in the cluster. 
none List of dicts     >>> pprint.pprint(conn.job_exec( 'st_volume_list' )) { 'result_code' : 0, 'result_desc' : 'Success' , 'result_jobid' : '11.97' , 'result_name' : 'OK' , 'result_output' : [{ 'app_io_enabled' : True, 'block_distribution' : 'vertical' , 'block_size' : 4096, 'bytes_per_block' : 4096, 'group' : 500, 'hdd_type' : 'disk1' , 'hdds_per_node' : 1, 'id' : '0' , 'int_io_enabled' : True, 'labels' : [ '#Name#DataVolume1' , 'pub:DB1_persistent' ], 'name' : 'DataVolume1' , 'nodes_list' : [{ 'id' : 11, 'unrecovered_segments' : 0}], 'num_master_nodes' : 1, 'owner' : 500, 'permissions' : 'rwx------' , 'priority' : 10, 'redundancy' : 1, 'segments' : [{ 'end_block' : '2621439' , 'index' : '0' , 'nid' : 0, 'partitions' : [], 'phys_nid' : 11, 'sid' : '0' , 'start_block' : '0' , 'state' : 'ONLINE' , 'type' : 'MASTER' , 'vid' : '0' }], 'shared' : True, 'size' : '10 GiB' , 'snapshots' : [], 'state' : 'ONLINE' , 'stripe_size' : 262144, 'type' : 'MASTER' , 'unlock_conditions' : [], 'use_crc' : True, 'users' : [[30, False]], 'volume_nodes' : [11]}, { 'app_io_enabled' : True, 'block_distribution' : 'vertical' , 'block_size' : 4096, 'bytes_per_block' : 4096, 'group' : 500, 'hdd_type' : 'disk1' , 'hdds_per_node' : 1, 'id' : '1' , 'int_io_enabled' : True, 'labels' : [ 'temporary' , 'pub:DB1_temporary' ], 'name' : 'v0001' , 'nodes_list' : [{ 'id' : 11, 'unrecovered_segments' : 0}], 'num_master_nodes' : 1, 'owner' : 500, 'permissions' : 'rwx------' , 'priority' : 10, 'redundancy' : 1, 'segments' : [{ 'end_block' : '262143' , 'index' : '0' , 'nid' : 0, 'partitions' : [], 'phys_nid' : 11, 'sid' : '0' , 'start_block' : '0' , 'state' : 'ONLINE' , 'type' : 'MASTER' , 'vid' : '1' }], 'shared' : True, 'size' : '1 GiB' , 'snapshots' : [], 'state' : 'ONLINE' , 'stripe_size' : 262144, 'type' : 'MASTER' , 'unlock_conditions' : [], 'use_crc' : True, 'users' : [[30, False]], 'volume_nodes' : [11]}]}    Example 5.2: Get information about volume with id "vid" Name Description Parameter Returns st_volume_info Return information about volume with id vid vid       >>> pprint.pprint(conn.job_exec( 'st_volume_info' , { 'params' : { 'vid' : 0}})) { 'result_code' : 0, 'result_desc' : 'Success' , 'result_jobid' : '11.98' , 'result_name' : 'OK' , 'result_output' : { 'app_io_enabled' : True, 'block_distribution' : 'vertical' , 'block_size' : '4 KiB' , 'bytes_per_block' : 4096, 'group' : 500, 'hdd_type' : 'disk1' , 'hdds_per_node' : 1, 'id' : '0' , 'int_io_enabled' : True, 'labels' : [ '#Name#DataVolume1' , 'pub:DB1_persistent' ], 'name' : 'DataVolume1' , 'nodes_list' : [{ 'id' : 11, 'unrecovered_segments' : 0}], 'num_master_nodes' : 1, 'owner' : 500, 'permissions' : 'rwx------' , 'priority' : 10, 'redundancy' : 1, 'segments' : [{ 'end_block' : '2621439' , 'index' : '0' , 'nid' : 0, 'partitions' : [], 'phys_nid' : 11, 'sid' : '0' , 'start_block' : '0' , 'state' : 'ONLINE' , 'type' : 'MASTER' , 'vid' : '0' }], 'shared' : True, 'size' : '10 GiB' , 'snapshots' : [], 'state' : 'ONLINE' , 'stripe_size' : '256 KiB' , 'type' : 'MASTER' , 'unlock_conditions' : [], 'use_crc' : True, 'users' : [[30, False]], 'volume_nodes' : [11]}}   other options: EXAStorage Volume Jobs     Name description Parameters st_volume_info Return information about volume with id vid vid st_volume_list List all existing volumes in the cluster. 
None st_volume_set_io_status Enable or disable application / internal io for volume app_io, int_io, vid st_volume_add_label Add a label to specified volume vid, label st_volume_remove_label Remove given label from the specified volume vid label st_volume_enlarge Enlarge volume by blocks_per_node vid, blocks_per_node st_volume_shrink Shrink volume by blocks_per_node vid, blocks_per_node st_volume_append_node Append nodes to a volume. storage.append_nodes(vid, node_num, node_ids) -> None vid, node_num, node_ids st_volume_move_node Move nodes of specified volume vid, src_nodes, dst_nodes st_volume_increase_redundancy Increase volume redundancy by delta value vid, delta, nodes st_volume_decrease_redundancy decrease volume redundancy by delta value vid, delta, nodes st_volume_lock Lock a volume vid optional: vname st_volume_lock Unlock a volume vid optional: vname st_volume_clear_data Clear data on (a part of) the given volume vid, num__bytes, node_ids optional: vname    Example 6: Working with backups Example 6.1: start a new backup Name Description Parameter Returns db_backup_start Start a backup of the given database to the given volume db_name, backup_volume_id, level, expire_time substitutes: dackup_volume_name       >>> conn.job_exec( 'db_backup_start' , { 'params' : { 'db_name' : 'DB1' , 'backup_volume_name' : 'RemoteVolume1' , 'level' : 0, 'expire_time' : '10d' }}) { 'result_name' : 'OK' , 'result_desc' : 'Success' , 'result_jobid' : '11.77' , 'result_code' : 0}   Example 6.2: abort backup Name Description Parameter Returns db_backup_abort Aborts the running backup of the given database db_name     >>> conn.job_exec( 'db_backup_abort' , { 'params' : { 'db_name' : 'DB1' }}) { 'result_name' : 'OK' , 'result_desc' : 'Success' , 'result_jobid' : '11.82' , 'result_code' : 0}   Example 6.3: list backups Name Description Parameter Returns db_backup_list Lists available backups for the given database db_name       >>> pprint.pprint(conn.job_exec( 'db_backup_list' , { 'params' : { 'db_name' : 'DB1' }})) { 'result_code' : 0, 'result_desc' : 'Success' , 'result_jobid' : '11.99' , 'result_name' : 'OK' , 'result_output' : [{ 'bid' : 11, 'comment' : '', 'dependencies' : '-' , 'expire' : '', 'expire_alterable' : '10001 DB1/id_11/level_0' , 'expired' : False, 'id' : '10001 DB1/id_11/level_0/node_0/backup_202007071405 DB1' , 'last_item' : True, 'level' : 0, 'path' : 'DB1/id_11/level_0/node_0/backup_202007071405' , 'system' : 'DB1' , 'timestamp' : '2020-07-07 14:05' , 'ts' : '202007071405' , 'usable' : True, 'usage' : '0.001 GiB' , 'volume' : 'RemoteVolume1' }]}    other options: Jobs to manage backups     Name description Parameters db_backups_delete Delete given backups of given database db_name, backup list (as returned by 'db_backup_list()') db_backup_change_expiration Change expiration time of the given backup files backup volume ID backup_files: Prefix of the backup files, like exa_db1/id_1/level_0 ) expire_time : Timestamp in seconds since the Epoch on which the backup should expire. 
db_backup_delete_unusable Delete all unusable backups for a given database db_name db_restore Restore a given database from given backup db_name, backup ID, restore type ('blocking' | 'nonblocking' | 'virtual access') db_backup_add_schedule Add a backup schedule to an existing database db_name, backup_name, volume, level, expire, minute, hour, day, month, weekday, enabled notes: * 'level' must be  int * 'expire' is string (use  common/util.str2sec to convert) 'backup_name' is  string (unique within a DB) db_backup_remove_schedule Remove an existing backup schedule  db_name, backup_name db_backup_modify_schedule Modify an existing backup schedule  db_name, backup_name   optional: hour, minute, day, month, weekday, enabled      We will continue to add more examples and we will add more options to this article. Additional References https://github.com/EXASOL/docker-db https://github.com/exasol/exaoperation-xmlrpc You can find another article about deploying a exasol database as an docker image in https://community.exasol.com/t5/environment-management/how-to-deploy-a-single-node-exasol-database-as-a-docker-image/ta-p/921
View full article
Background

This article describes the calculation of the optimal (maximum) DB RAM on:

- a 4+1 system with one database (dedicated environment)
- a 4+1 system with two databases (shared environment)

The calculation of the OS Memory per Node stays the same for both environments. Shared environments are not recommended for production systems.

Example Setup: The 4+1 cluster contains four active data nodes and one standby node. Each node has 384 GiB of main memory.

How to calculate Database RAM

OS Memory per Node

It is vital for the database that there is enough memory allocatable through the OS. We recommend reserving at least 10% of the main memory on each node. This prevents the nodes from swapping under high load (many sessions).

Main Memory per Node * 0.1 = OS Memory per Node
384 * 0.1 = 38.4 -> 38 GiB

In order to set this value, the database needs to be shut down. The setting is found in EXAoperation under 'Configuration > Network' - "OS Memory/Node (GiB)".

Maximum DB RAM (dedicated environment)

(Main Memory per Node - OS Memory per Node) * Number of active Nodes = Maximum DB RAM

Example: 4 data nodes with 384 GiB main memory per node and 38 GiB OS memory per node:

(384 GiB - 38 GiB) * 4 ≈ 1380 GiB

Maximum DB RAM (shared environment)

Example:

- Database "one" on four data nodes (exa_db1)
- Database "two" on two data nodes (exa_db2)

As before, the "Maximum DB RAM" is 1380 GiB. With two databases sharing the Maximum DB RAM, we need to recalculate and redistribute it.

Maximum DB RAM / Number of Databases = Maximum DB RAM per database
1380 GiB / 2 = 690 GiB

For database "one" (exa_db1), which is running on all four nodes, 690 GiB DB RAM can be configured. The smaller database "two" (exa_db2) is running on only two of the four data nodes, so it can only use the share belonging to those two nodes, i.e. half of its per-database maximum:

Maximum DB RAM per database * (2 / 4) = Maximum DB RAM for database "two"
690 GiB / 2 = 345 GiB

Additional References

Sizing Considerations
View full article
This article shows you how to allow internet access for the Community Edition running on VMWare
View full article
This article explains how to create a VPN between your AWS cluster and Exasol Support infrastructure
View full article
Background Installation of Protegrity via XML-RPC   Prerequisites Ask at service@exasol.com for Protegrity plugin. How to Install Protegrity via XML-RPC 1. Upload "Plugin.Security.Protegrity-6.6.4.19.pkg" to EXAoperation Login to EXAoperation (User privilege Administrator) Upload pkg Configuration>Software>Versions>Browse>Submit 2. Connect to EXAoperation via XML-RPC (this example uses Python) Following code block is ready to copy and paste into a python shell: from xmlrpclib import Server as xmlrpc from ssl import _create_unverified_context as ssl_context from pprint import pprint as pp from base64 import b64encode import getpass server = raw_input('Please enter IP or Hostname of Licenze Server:') ; user = raw_input('Enter your User login: ') ; password = getpass.getpass(prompt='Please enter Login Password:') server = xmlrpc('https://%s:%s@%s/cluster1/' % (user,password,server) , context = ssl_context()) 3. Show installed plugins >>> pp(server.showPluginList()) ['Security.Protegrity-6.6.4.19'] 4. Show plugin functions >>> pp(server.showPluginFunctions('Security.Protegrity-6.6.4.19')) 'INSTALL': 'Install plugin.', 'UPLOAD_DATA': 'Upload data directory.', 'UNINSTALL': 'Uninstall plugin.', 'START': 'Start pepserver.', 'STOP': 'Stop pepserver.', 'STATUS': 'Show status of plugin (not installed, started, stopped).' 5. For further usage we store the plugin name and the node list in variables: >>> pname = 'Security.Protegrity-6.6.4.19' >>> nlist = server.getNodeList() 6. Install the plugin >>> pp([[node] + server.callPlugin(pname, node, 'INSTALL', '') for node in nlist]) [['n0011', 0, ''], ['n0012', 0, ''], ['n0013', 0, ''], ['n0014', 0, '']] 7. Get the plugin status on each node: >>> pp([[node] + server.callPlugin(pname, node, 'STATUS', '') for node in nlist]) [['n0011', 0, 'stopped'], ['n0012', 0, 'stopped'], ['n0013', 0, 'stopped'], ['n0014', 0, 'stopped']] 8. Start plugin on each node: >>> pp([[node] + server.callPlugin(pname, node, 'START', '') for node in nlist]) [['n0011', 0, 'started'], ['n0012', 0, 'started'], ['n0013', 0, 'started'], ['n0014', 0, 'started']] 9. Push ESA config to nodes, server-side task Client Port (pepserver) is listening on TCP 15700 Additional Notes - Additional References -
View full article
This article goes through how to synchronize archive volumes between clusters using a Python UDF
View full article
Background Enlarge EXAStorage disk(s) after changing disk size of the ec2 instances Prerequisites To complete these steps, you need access to the AWS Management Console and have the permissions to do these actions in EXAoperation Please ensure you have a valid backup before proceeding. The below approach works only with the cluster installation. How to enlarge disk space in AWS Stop all databases and stop EXAStorage in EXAoperation Stop your EC2 instances, except the license node (ensure they don’t get terminated on shutdown; check shutdown behavior http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html) Modify the disk on AWS console (Select Volume -> Actions -> Modify -> Enter the new size -> Click Modify) Ensure Storage disk size is set to “Rest” <EXAoperation node setting>, if d03_storage/d04_storage is not set to "Rest", set INSTALL flag for all nodes adjust the setting and set the ACTIVE flag for all nodes, otherwise nodes will be reinstalled during boot (data loss)! Start instances Start EXAStorage Enlarge each node device using the “Enlarge Button” in EXAoperation/EXAStorage/n00xx/h000x/ Re-Start database Additional References https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html  
View full article
This article explains how to create a VPN between your GCP cluster and Exasol Support infrastructure
View full article
WHAT WE'LL LEARN? In this article you will learn how to update a Docker-based Exasol system. HOW-TO 1. Ensure that your Docker container is running with persistent storage. This means that your docker run command should contain a -v statement, like the example below: $ docker run --detach --network=host --privileged --name <container_name> -v $CONTAINER_EXA:/exa exasol/docker-db:6.2.8-d1 init-sc --node-id <node_id> 2. Log in to your Docker container's BASH environment: $ docker exec -it <container_name> /bin/bash  3. Stop the database, storage services and exit the container: $ dwad_client stop-wait <database_instance> $ csctrl -d $ exit 4. Stop the container: $ docker stop $container_name 5. Rename the existing container. Append with old, so that you know that this is the container which you won't be using anymore $ docker rename <container_name> <container_name_old> 6. Create a new tag for the older container image: $ docker tag exasol/docker-db:latest exasol/docker-db:older_image 7. Remove the "latest" tag for the "older_image": $ docker rmi exasol/docker-db:latest 8. Pull the latest Docker-based Exasol image: $ docker image pull exasol/docker-db:latest 8.1. Or pull the specific version you want. You can view the available versions and pull one of them with the commands bellow: $ wget -q https://registry.hub.docker.com/v1/repositories/exasol/docker-db/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}' ... 6.2.3-d1 6.2.4-d1 6.2.5-d1 ... $ docker image pull exasol/docker-db:<image_version>  9. Run the following command to execute the update: $ docker run --privileged --rm -v $CONTAINER_EXA:/exa -v <all_other_volumes> exasol/docker-db:latest update-sc or $ docker run --privileged --rm -v $CONTAINER_EXA:/exa -v <all_other_volumes> exasol/docker-db:<image_version> update-sc Output should be similar to this: Updating EXAConf '/exa/etc/EXAConf' from version '6.1.5' to '6.2.0' Container has been successfully updated! - Image ver. : 6.1.5-d1 --> 6.2.0-d1 - DB ver. : 6.1.5 --> 6.2.0 - OS ver. : 6.1.5 --> 6.2.0 - RE ver. : 6.1.5 --> 6.2.0 - EXAConf : 6.1.5 --> 6.2.0  10. Run the container(s) the same way as you did before. Example: $ docker run --detach --network=host --privileged --name <container_name> -v $CONTAINER_EXA:/exa exasol/docker-db:latest init-sc --node-id <node_id> 11. You can check the status of your booting container (optional): $ docker logs <container_name> -f 12. You can remove the old container (optional): $ docker rm <container_name_old>
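Optionally, once the updated container is up, you can double-check which version it is now running, for example by grepping the EXAConf inside the container. This is an informal check, not part of the official procedure; the grep simply matches the Version entries that EXAConf already contains:

$ docker exec -it <container_name> grep -i 'version' /exa/etc/EXAConf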
View full article
WHAT WE'LL LEARN? This article will show you how to change your license file in your Docker Exasol environment. HOW-TO NOTE: $CONTAINER_EXA is a variable set before deploying an Exasol database container with persistent storage. For more information, please check our Github repo. 1. Ensure that your Docker container is running with persistent storage. This means that your docker run command should contain a -v statement, like the example below: $ docker run --detach --network=host --privileged --name <container_name> -v $CONTAINER_EXA:/exa exasol/docker-db:6.1.5-d1 init-sc --node-id <node_id> 2. Copy the new license file to the the $CONTAINER_EXA/etc/ folder: $ cp /home/user/Downloads/new_license.xml $CONTAINER_EXA/etc/new_license.xml 3. Log in to your Docker container's BASH environment: $ docker exec -it <container_name> /bin/bash 4. Go to the /exa/etc folder and rename the old license.xml file: $ cd /exa/etc/ $ mv license.xml license.xml.old 5. Rename the new license file: $ mv new_license.xml license.xml 6. Double-check the contents of the directory, to ensure that the newer file is name license.xml: $ ls -l <other files> -rw-r--r-- 1 root root 2275 Jul 15 10:13 license.xml.old -rw-r--r-- 1 root root 1208 Jul 21 07:38 license.xml <other files> 7. Sync file across all nodes if you are using a multi-node cluster: $ cos_sync_files /exa/etc/license.xml $ cos_sync_files /exa/etc/license.xml.old 8. Stop the Database and Storage services: $ dwad_client stop-wait <database_instance> $ csctrl -d 9. Restart the Container: $ docker restart <container_name> 10. Log in to the container and check if the proper license is installed: $ docker exec -it <container_name> /bin/bash $ awk '/SHLVL/ {for(i=1; i<=6; i++) {getline; print}}' /exa/logs/cored/exainit.log | tail -6 You should get an output similar to this: [2020-07-21 09:43:50] stage0: You have following license limits: [2020-07-21 09:43:50] stage0: >>> Database memory (GiB): 50 Main memory (RAM) usable by databases [2020-07-21 09:43:50] stage0: >>> Database raw size (GiB): unlimited Raw Size of Databases (see Value RAW_OBJECT_SIZE in System Tables) [2020-07-21 09:43:50] stage0: >>> Database mem size (GiB): unlimited Compressed Size of Databases (see Value MEM_OBJECT_SIZE in System Tables) [2020-07-21 09:43:50] stage0: >>> Cluster nodes: unlimited Number of usable cluster nodes [2020-07-21 09:43:50] stage0: >>> Expiration date: unlimited Date of license expiration Check the parameters and see if it corresponds to your requested license parameters.
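If you also want to compare the licensed database memory limit against what your databases are configured to use, you can look at the MemSize entries in EXAConf. This is a quick, informal cross-check that only uses files and commands already shown above:

$ docker exec -it <container_name> grep 'MemSize' /exa/etc/EXAConf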
View full article
Background

For database instances that are running with one or more reserve nodes, it is possible to swap an active node with a reserve node. You might need to do this, for example, in the case of a hardware problem with the active node. The following steps are required to perform the swap:

1. Shut down the database instance(s)
2. Switch nodes of the database instance(s)
3. Start up the database instance(s)
4. Move the data from the former active node to the reserve node

At the end of the procedure, the node that was previously the reserve node is active.

Prerequisites

The procedure requires a maintenance window of at least 15 minutes. Before you continue with the steps below, make sure the reserve node on EXAStorage is online and running without any errors or issues. If there are any issues with the reserve node, it may not function as expected when you restart the database. During this procedure, data is recovered from one node to another; the performance of the database is reduced until the data has fully been transferred to the target node. Data redundancy of the data volume should be at least 2.

How to Replace an Active Node with a Reserve Node on Docker/NGA-based systems

Step 1: Shut down the database instance(s)

In order to shut down the database instances, we can use the "dwad_client" command. The syntax of the command is:

dwad_client stop-wait {database_name}

The database instance name(s) can be found via:

dwad_client shortlist

In order to verify the state of the database instance(s), you can use the command below:

dwad_client list

Step 2: Switch nodes of the database instance(s)

In order to switch nodes:

dwad_client switch-nodes {database_name} {active node} {reserve node}

In order to verify the state of the nodes, use the "cosps -N" command. You can list the nodes and find the current reserve node via:

dwad_client sys-nodes {database_name}

Step 3: Start up the database instance(s)

In order to start up the database instances, we can use the "dwad_client" command. The syntax of the command is:

dwad_client start-wait {database_name}

Step 4: Move the data node to the reserve node

In order to move the data to the former reserve node, we can use the "csmove" command. The syntax of the command is:

csmove -s {source node ID} -d {destination node ID} -m -v {volume ID}

- Source and destination node IDs can be found via the "cosps -N" command.
- The volume ID can be found via the "csinfo -v" command. Please check the volume labels in order to verify the data volume, for example:

=== Labels ===
Name : 'DataVolume1' (#)
pub : 'DB1_persistent'

After starting the move, the data volume will start the data recovery process automatically. The performance of the database is reduced until the data has fully been transferred to the target node. You can monitor the recovery process via the "cstop" command:

- Run the "cstop" command
- Press 'r' for recovery
- Press 'a' to add a node -> enter the node number (or 'a' for all)
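Putting the four steps together, a condensed run could look like the sketch below. It only uses the commands already described above; the database name, node IDs, and volume ID are illustrative placeholders, so substitute the values reported by dwad_client shortlist, cosps -N, and csinfo -v on your own system.

$ dwad_client stop-wait DB1              # Step 1: stop the database instance
$ dwad_client switch-nodes DB1 11 14     # Step 2: swap active node 11 with reserve node 14
$ dwad_client start-wait DB1             # Step 3: start the database instance again
$ csmove -s 11 -d 14 -m -v 0             # Step 4: move the data volume to the new active node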
Additional Notes

This procedure can also be followed for on-premise and cloud deployments. In case the redundancy of the data volume is less than 2, the command below can be used to increase the redundancy:

csresize -i -l1 -v{VID}

Please run it only once; otherwise the redundancy will be increased to 3.

Additional References

https://docs.exasol.com/administration/aws/nodes/replace_active_node.htm
https://docs.exasol.com/administration/on-premise/nodes/replace_active_node.htm

We're happy to get your experiences and feedback on this article below!
View full article
This article describes the Exasol database backup process.
View full article
Portal registration tips.
View full article
Background With versions prior to 5.0.15 EXASOL cluster deployments only supported CIDR block 27.1.0.0/16 and subnet 27.1.0.0/16, now it's possible to use custom CIDR blocks but with some restrictions, because the CIDR block will automatically be managed by our cluster operating system. VPC CIDR block netmask must be between /16 (255.255.0.0) and /24 (255.255.255.0)   The first ten IP addresses of the cluster's subnet are reserved and cannot be used Explanation Getting the right VPC / subnet configuration: The subnet used for installation of the EXASOL cluster is calculated according to the VPC CIDR range: 1. For VPCs with 16 to 23 Bit netmasks, the subnet will have a 24 Bit mask. For a 24 Bit VPC, the subnet will have 26 Bit range. VPC CIDR RANGE -> Subnet mask 192.168.20.0/16 -> .../24 192.168.20.0/17 -> .../24 ... -> .../24 192.168.20.0/22 -> .../24 192.168.20.0/23     FORBIDDEN 192.168.20.0/24 -> .../26 192.168.20.0/25     FORBIDDEN   2. For the EXASOL subnet, the VPS's second available subnet is automatically used. Helpful is the tool sipcalc (http://sipcalc.tools.uebi.net/), e.g. Example 1: The VPC is 192.168.20.0/22 (255.255.252.0) -> A .../24 subnet is used (255.255.255.0). `sipcalc 192.168.20.0/24' calculates a network range of 192.168.20.0 - 192.168.20.255 which is the VPC's first subnet. => EXASOL uses the subsequent subnet, which is 192.168.21.0/24 Example 2: The VPC is 192.168.20.0/24 (255.255.255.0) -> A .../26 subnet is used (255.255.255.192). `sipcalc 192.168.20.0/26' calculates a network range of 192.168.20.0 - 192.168.20.63 which is the VPC's first subnet. => EXASOL uses the subsequent subnet, which is 192.168.20.64/26 3. The first 10 IP addresses of the subnet are reserved. The license server, therefore, gets the subnet base + 10, the other nodes follow. This table shows some example configurations: VPC CIDR block Public Subnet Gateway License Server IP address IPMI network host addresses First additional VLAN address 10.0.0.0/16 10.0.1.0/24 10.0.1.1 10.0.1.10 10.0.128.0 10.0.65.0/16 192.168.0.0/24 192.168.1.0/24 192.168.1.1 192.168.1.10 192.168.1.128 192.168.64.0/24 192.168.128.0/24 192.168.129.0/24 192.168.128.1 192.168.129.10 192.168.129.128 192.168.32.0/24 192.168.20.0/22 192.168.21.0/24 192.168.21.1 192.168.21.10 192.168.21.128   192.168.16.0/24 192.168.16.64/26 192.168.16.65 192.168.16.74 192.168.16.96 192.168.128.0/26   Additional References https://docs.exasol.com/administration/aws.htm
View full article
With this article, you will learn how to add an LDAP server for your database: 1. Log in to your Exasol container: $ docker exec -it <container_name> /bin/bash 2. Inside the container go to the /exa/etc/ folder and open the EXAConf file with a text editor of your choice: $ cd /exa/etc $ vim EXAConf 3. Under the DB section, right above the [[JDBC]] sub-section add a line that says Params and the values mentioned after it: [DB : DB1] Version = 6.1.5 MemSize = 6 GiB Port = 8563 Owner = 500 : 500 Nodes = 11,12,13 NumActiveNodes = 3 DataVolume = DataVolume1 Params = -LDAPServer="ldap://<your_ldap_server.your_domain>" [[JDBC]] BucketFS = bfsdefault Bucket = default Dir = drivers/jdbc [[Oracle]] BucketFS = bfsdefault Bucket = default Dir = drivers/oracle NOTE: You can also use ldaps instead of ldap 4. Change the value of Checksum in EXAConf: $ sed -i '/Checksum =/c\ Checksum = COMMIT' /exa/etc/EXAConf 5. Commit the changes: $ exaconf commit 6. At this point you have 2 options: 6.1. Restart the container: $ dwad_client stop-wait <database_instance> # Stop the database instance (inside the container) $ csctrl -d # Stop the storage service (inside the container) $ exit # Exit the container $ docker restart <container_name> # Restart the container $ docker exec -it <container_name> # Log in to the container's BASH environment $ dwad_client setup-print <database_instance> # See the database parameters ... PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -LDAPServer="ldap://your_ldap_server.your_domain" ... As you can from the output mentioned above, the parameters have been added. However, rebooting the cluster can cause some downtime. In order to shorten the duration of your downtime, you can try the method below. 6.2. Use a configuration file to change the parameters by just rebooting the database, not container: $ dwad_client setup-print <database_instance> > db1.cfg # See the database parameters $ vim db1.cfg # Edit the configuration file When you open the file, find the line starting with PARAMS and the parameter you need, like: PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -LDAPServer="ldap://your_ldap_server.your_domain" After adding the parameters, save the file and execute the following commands: $ dwad_client stop-wait <database_instance> # Stop the database instance (inside the container) $ dwad_client setup <database_instance> db1.cfg # Setup the database with the db1.cfg configuration file (inside the container) $ dwad_client start-wait <database_instance> # Start the database instance (inside the container) This will add the database parameters, but will not be persistent throughout reboots. Therefore, by adding the parameters this way you shorten your downtime, but the changes aren't permanent. After doing this, we would recommend to also do method 6.1, in case you decide to reboot sometime in the future. 7. Verify the parameters: 7.1. With dwad_client list:             7.2. With dwad_list print-setup <database_instance>:
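Before (or after) pointing the database at the LDAP server, it can be useful to confirm that the server is reachable at all. Below is a minimal sketch using the standard ldapsearch client, which may need to be installed separately and is not part of the Exasol tooling; the hostname is the same placeholder used above:

$ ldapsearch -H ldap://your_ldap_server.your_domain -x -s base -b "" "(objectclass=*)"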
Server installation

Use a minimal CentOS 7 installation. Register at Splunk and download e.g. the free version: splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

Install the RPM; the target directory will be /opt/splunk:
rpm -ivh splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

Start Splunk, accept the EULA and enter a username and password:
/opt/splunk/bin/splunk start

Create an SSH port forward to access the web UI. NOTE: If /etc/hosts is configured properly and name resolution is working, no port forwarding is needed.
ssh root@HOST-IP -L8000:localhost:8000

Log in with the username and password you provided during the installation:
https://localhost:8000

Set up an index to store the data. From the web UI go to Settings > Indexes > New Index:
Name: remotelogs
Type: Events
Max Size: e.g. 20GB
Save

Create a new listener to receive data. From the web UI go to Settings > Forwarding and receiving > Configure receiving > Add new:
New Receiving Port: 9700
Save

Restart Splunk:
/opt/splunk/bin/splunk restart

Client installation (Splunk Universal Forwarder)

Download splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm and install it via rpm; the target directory will be /opt/splunkforwarder:
rpm -ivh splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

Start the Splunk forwarder, accept the EULA and enter the same username and password as for the Splunk server:
/opt/splunkforwarder/bin/splunk start

Set up the forward-server and a monitor. First, add the Splunk server as the server that receives the forwarded log files (same username and password as before):
/opt/splunkforwarder/bin/splunk add forward-server HOST-IP:9700 -auth USER:PASSWORD

Then add a log file, e.g. audit.log from auditd. This requires the log file location, the type of logs and the index we created before (see the inputs.conf sketch at the end of this article):
/opt/splunkforwarder/bin/splunk add monitor /var/log/audit/audit.log -sourcetype linux_logs -index remotelogs

Check whether the forward-server and the monitored log files have been enabled; restart the forwarder if nothing happens:
/opt/splunkforwarder/bin/splunk list monitor
The command prompts for the Splunk credentials and then lists the monitored directories and files:

Splunk username: admin
Password:
Monitored Directories:
    $SPLUNK_HOME/var/log/splunk
        /opt/splunkforwarder/var/log/splunk/audit.log
        /opt/splunkforwarder/var/log/splunk/btool.log
        /opt/splunkforwarder/var/log/splunk/conf.log
        /opt/splunkforwarder/var/log/splunk/first_install.log
        /opt/splunkforwarder/var/log/splunk/health.log
        /opt/splunkforwarder/var/log/splunk/license_usage.log
        /opt/splunkforwarder/var/log/splunk/mongod.log
        /opt/splunkforwarder/var/log/splunk/remote_searches.log
        /opt/splunkforwarder/var/log/splunk/scheduler.log
        /opt/splunkforwarder/var/log/splunk/searchhistory.log
        /opt/splunkforwarder/var/log/splunk/splunkd-utility.log
        /opt/splunkforwarder/var/log/splunk/splunkd_access.log
        /opt/splunkforwarder/var/log/splunk/splunkd_stderr.log
        /opt/splunkforwarder/var/log/splunk/splunkd_stdout.log
        /opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log
    $SPLUNK_HOME/var/log/splunk/license_usage_summary.log
        /opt/splunkforwarder/var/log/splunk/license_usage_summary.log
    $SPLUNK_HOME/var/log/splunk/metrics.log
        /opt/splunkforwarder/var/log/splunk/metrics.log
    $SPLUNK_HOME/var/log/splunk/splunkd.log
        /opt/splunkforwarder/var/log/splunk/splunkd.log
    $SPLUNK_HOME/var/log/watchdog/watchdog.log*
        /opt/splunkforwarder/var/log/watchdog/watchdog.log
    $SPLUNK_HOME/var/run/splunk/search_telemetry/*search_telemetry.json
    $SPLUNK_HOME/var/spool/splunk/...stash_new
Monitored Files:
    $SPLUNK_HOME/etc/splunk.version
    /var/log/all.log
    /var/log/audit/audit.log

Check if the Splunk server is available:
/opt/splunkforwarder/bin/splunk list forward-server
Active forwards:
    10.70.0.186:9700
Configured but inactive forwards:
    None

You can now search the forwarded logs in the web UI.

Collecting Metrics

Download the Splunk Unix Add-on (splunk-add-on-for-unix-and-linux_602.tgz), unpack it and copy it to the splunkforwarder app folder:
tar xf splunk-add-on-for-unix-and-linux_602.tgz
mv Splunk_TA_nix /opt/splunkforwarder/etc/apps/

Enable the metrics you want to receive by setting disabled = 0 for each metric input (a sketch of such a stanza follows below):
vim /opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/inputs.conf

Stop and start the Splunk forwarder:
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk start
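For reference, both the "splunk add monitor" command from the client setup and the Unix add-on are driven by inputs.conf stanzas. The following is a minimal sketch of what the two relevant stanzas could look like; the attribute names follow standard Splunk conventions, but the exact file locations, script name and interval are illustrative assumptions, not values taken from this installation:

# Monitor stanza created by "splunk add monitor"
# (typically under /opt/splunkforwarder/etc/apps/search/local/inputs.conf)
[monitor:///var/log/audit/audit.log]
sourcetype = linux_logs
index = remotelogs

# Example metric input from the Unix add-on
# (in /opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/inputs.conf);
# set disabled = 0 to enable it
[script://./bin/cpu.sh]
interval = 30
sourcetype = cpu
disabled = 0

Splunk reads local/ overrides in preference to default/, so edits like this are often placed in a local/inputs.conf inside the add-on folder to survive upgrades.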
Background
Installation of FSC Linux Agents via XML-RPC

Prerequisites
Ask at service@exasol.com for the FSC Monitoring plugin.

How to Install FSC Linux Agents via XML-RPC

1. Upload "Plugin.Administration.FSC-7.31-16.pkg" to EXAoperation
Log in to EXAoperation (user privilege Administrator).
Upload the pkg via Configuration > Software > Versions > Browse > Submit.

2. Connect to EXAoperation via XML-RPC (this example uses Python)
>>> import xmlrpclib, pprint
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show the plugin functions
>>> pprint.pprint(s.showPluginFunctions('Administration.FSC-7.31-16'))
{'INSTALL_AND_START': 'Install and start plugin.',
 'UNINSTALL': 'Uninstall plugin.',
 'START': 'Start FSC and SNMP services.',
 'STOP': 'Stop FSC and SNMP services.',
 'RESTART': 'Restart FSC and SNMP services.',
 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
 'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).'}

4. Install FSC and check the return code
>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'INSTALL_AND_START')
>>> ret
0

5. Upload snmpd.conf (an example is attached to this article)
>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Start FSC and check its status
>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'RESTART')
>>> ret
>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'STATUS')
>>> ret
[0, 'started']

7. Repeat steps 4-6 for each node (a scripted version of this loop is sketched below).

Additional Notes
For monitoring the FSC agents, go to http://support.ts.fujitsu.com/content/QuicksearchResult.asp and search for "ServerView Integration Pack for NAGIOS".
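Step 7 can also be scripted from the same Python session instead of repeating the calls by hand. The following is a minimal sketch under the assumption that the cluster nodes are named n10, n11 and n12 (replace the list with your actual node names); it reuses only the callPlugin calls shown in steps 4-6:

>>> plugin = 'Administration.FSC-7.31-16'
>>> snmp_conf = file('/home/user/snmpd.conf').read()
>>> for node in ['n10', 'n11', 'n12']:
...     sts, ret = s.callPlugin(plugin, node, 'INSTALL_AND_START')  # step 4
...     s.callPlugin(plugin, node, 'PUT_SNMP_CONFIG', snmp_conf)    # step 5
...     s.callPlugin(plugin, node, 'RESTART')                       # step 6
...     print node, s.callPlugin(plugin, node, 'STATUS')            # each node should report [0, 'started'] as in step 6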
Certified Hardware List
The hardware certified by Exasol can be found in the link below:

Certified Hardware List

If your preferred hardware is not certified, refer to our Certification Process for more information.