Environment Management
Manage the environment around the database, such as cloud, monitoring, EXAoperation, and scalability
Background

Deploy a single-node Exasol database as a Docker image for testing purposes.

Blog snapshot
This blog will show you how to deploy a single-node Exasol database as a Docker image for testing purposes. Before we go into the step-by-step guide, please read through the following prerequisites and recommendations to make sure that you're prepared.

Prerequisites

- Host OS: Currently, Exasol only supports Docker on Linux. It's not possible to use Docker for Windows to deploy the Exasol database. The Linux requirement comes from the need for O_DIRECT access.
- Docker-installed Linux machine: In this article, I'm going to use a CentOS 7.6 virtual machine with the latest version of Docker (currently version 19.03).
- Privileged mode: Docker privileged mode is required for permissions management, UDF support, and environment configuration and validation (sysctl, hugepages, block devices, etc.).
- Memory requirements for the host environment: Each database instance needs at least 2 GiB RAM. Exasol recommends that the host reserves at least 4 GiB RAM for each running Exasol container. Since in this article I'm going to deploy a single-node container, I will use 6 GiB RAM for the VM.
- Service requirements for the host environment: NTP should be configured on the host OS. Also, the RNG daemon must be running to provide enough entropy for the Exasol services in the container.

Recommendations

Performance optimization: Exasol strongly recommends setting the CPU governor on the host to performance, to avoid serious performance problems. You can use the cpupower utility or the command below to set it.

Using the cpupower utility:

    $ sudo cpupower -c all frequency-set -g performance

Or change the content of the scaling_governor files:

    $ for F in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance >$F; done

Hugepages: Exasol recommends enabling hugepages for hosts with at least 64 GB RAM. To do so, we have to set the Hugepages option in EXAConf to either auto, host, or the number of hugepages per container. If we set it to auto, the number of hugepages will be determined automatically, depending on the DB settings. When setting it to host, the number of hugepages from the host system will be used (i.e. /proc/sys/vm/nr_hugepages will not be changed). However, /proc/sys/vm/hugetlb_shm_group will always be set to an internal value!

Resource limitation: It's possible to limit the resources of the Exasol container with the following docker run options:

    $ docker run --cpuset-cpus="1,2,3,4" --memory=20g --memory-swap=20g --memory-reservation=10g exasol/docker-db:<version>

This is especially recommended if we need multiple Exasol containers (or other services) on the same host. In that case, we should evenly distribute the available CPUs and memory across the Exasol containers. You can find more detailed information at https://docs.docker.com/config/containers/resource_constraints/
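Before moving on, it's worth double-checking that the host actually meets these prerequisites and recommendations. A minimal sanity check, assuming a systemd-based host such as the CentOS 7.6 VM used here (service names can differ between distributions):

    # Every core should now report "performance"
    $ cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

    # NTP must be configured and synchronized
    $ timedatectl status

    # The RNG daemon must be running to provide entropy
    $ systemctl status rngd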
How to deploy a single-node Exasol database as a Docker image

Step 1: Create a directory to store data from the container persistently
To store all persistent data from the container, I'm going to create a directory. I will name it "container_exa" and create it in the home folder of the Linux user:

    $ mkdir $HOME/container_exa/

Set the CONTAINER_EXA variable to the folder:

    $ echo 'export CONTAINER_EXA="$HOME/container_exa/"' >> ~/.bashrc && source ~/.bashrc

Step 2: Create a configuration file for the Exasol database and Docker container
The command for creating a configuration file is:

    $ docker run -v "$CONTAINER_EXA":/exa --rm -i exasol/docker-db:<version> init-sc --template --num-nodes 1

Since I'm going to use the latest version of Exasol (currently 6.2.6), I will use the latest tag. --num-nodes is the number of containers; we need to change this value if we want to deploy a cluster.

    $ docker run -v "$CONTAINER_EXA":/exa --rm -i exasol/docker-db:latest init-sc --template --num-nodes 1

NOTE: You need to add the --privileged option because the host directory belongs to root.

After the command has finished, the directory $CONTAINER_EXA contains all subdirectories as well as an EXAConf template (in /etc).

Step 3: Complete the configuration file
The configuration has to be completed before the Exasol DB container can be started. The configuration file is EXAConf and it's stored in the "$CONTAINER_EXA/etc" folder. To be able to start a container, these options have to be configured (each is covered below):

- A private network of all nodes (a public network is not mandatory in the Docker version of Exasol DB)
- EXAStorage device(s)
- EXAVolume configuration
- Network port numbers
- Nameservers

Other options can be configured in the EXAConf file as well; I will post articles about most of them.

1) A private network of the node

    $ vim $CONTAINER_EXA/etc/EXAConf

    [Node : 11]
    PrivateNet = 10.10.10.11/24 # <-- replace with the real network

In this case, the IP address of the Linux virtual machine is 10.1.2.4/24.

2) EXAStorage device configuration
Use the dev.1 file as an EXAStorage device for Exasol DB and mount the LVM disk to it.

3) EXAVolume configuration
Configure the volume size for Exasol DB before starting the container. There are 3 types of volumes available for Exasol, each serving a different purpose. You can find detailed information at https://docs.exasol.com/administration/on-premise/manage_storage/volumes.htm?Highlight=volumes
Since it's recommended to use less disk space than the size of the LVM disk (Exasol will create a temporary volume, and there should be free disk space for it), I'd recommend using 20 GiB for the volume. The actual size of the volume increases or decreases depending on the data stored.

4) Network port numbers
Since you should use the host network mode (see "Start the cluster" below), you have to adjust the port numbers used by the Exasol services. The one that's most likely to collide is the SSH daemon, which uses the well-known port 22. I'm going to change it to 2222 in the EXAConf file.
The other Exasol services (e.g. Cored, BucketFS, and the DB itself) use port numbers above 1024. You can change them all by editing EXAConf; in this example, I'm going to use the default ports:

- Port 22 – SSH connection
- Port 443 – XMLRPC
- Port 8888 – port of the database
- Port 6583 – port for BucketFS

5) Nameservers
We can define a comma-separated list of nameservers for this cluster in EXAConf under the [Global] section. Use the Google DNS address 8.8.8.8.

Set the checksum within EXAConf to 'COMMIT'. This is the EXAConf integrity check (introduced in version 6.0.7-d1) that protects EXAConf from accidental changes and detects file corruption. It can be found in the [Global] section, near the top of the file.
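Taken together, the edits from 1) to 5) amount to a handful of EXAConf changes. A sketch of the relevant sections is shown below; the key names SSHPort and NameServers are my assumptions for this EXAConf version, so check the template generated in Step 2 for the exact spelling:

    [Global]
        Checksum = COMMIT           # integrity check, set before starting
        SSHPort = 2222              # moved off port 22 to avoid the host's sshd (assumed key name)
        NameServers = 8.8.8.8       # comma-separated list (assumed key name)

    [Node : 11]
        PrivateNet = 10.10.10.11/24 # replace with the real network

Instead of editing the checksum by hand, you can also set it from the shell:

    $ sed -i '/Checksum =/c\ Checksum = COMMIT' $CONTAINER_EXA/etc/EXAConf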
Please also adjust the Timezone depending on your requirements.

Step 4: Create the EXAStorage device files
EXAStorage is a distributed storage engine. All data is stored inside volumes. It also provides a failover mechanism. I'd recommend using a 32 GB LVM disk for EXAStorage:

    $ lsblk

IMPORTANT: Each device should be slightly bigger (~1%) than the required space for the volume(s) because a part of it will be reserved for metadata and checksums.

Step 5: Start the cluster
The cluster is started by creating all containers individually and passing each of them its ID from the EXAConf. Since we'll be deploying a single-node Exasol DB, the node ID will be n11 and the command would be:

    $ docker run --name exasol-db --detach --network=host --privileged -v $CONTAINER_EXA:/exa -v /dev/mapper/db-storage:/exa/data/storage/dev.1 exasol/docker-db:latest init-sc --node-id 11

NOTE: This example uses the host network stack, i.e. the containers directly access a host interface to connect. There is no need to expose ports in this mode: they are all accessible on the host.

Let's use the "docker logs" command to check the log files:

    $ docker logs -f exasol-db

We can see 5 different stages in the logs. Stage 5 is the last one; if we can see that the node is online and the stage is finished, the container and database started successfully.

    $ docker container ls

Let's get a bash shell in the container and check the status of the database and volumes:

    $ docker exec -it exasol-db bash

Inside the container, you can run some Exasol-specific commands to manage the database and services. You can find some of these commands below:

    $ dwad_client shortlist   # Lists the names of the databases
    $ dwad_client list        # Shows the current status of the databases

As we can see, the name of the database is DB1 (this can be configured in EXAConf) and the state is running. "Connection state: up" means we can connect to the database via port 8888.

    $ csinfo -D               # Print HDD info
    $ csinfo -v               # Print information about one (or all) volume(s)

As we can see, the size of the data volume is 20.00 GiB. You can also find information about the temporary volume in the output of the csinfo -v command.

Since the database is running and the connection state is up, let's try to connect and run some example SQL queries. You can use any SQL client or the Exaplus CLI to connect. I'm going to use DBeaver in this article; you can find more detailed information at https://docs.exasol.com/connect_exasol/sql_clients/dbeaver.htm
I'm using the public IP address of the virtual machine and port 8888, which is configured as the database port in EXAConf. By default, the password of the sys user is "exasol". Let's run an example query:

    SELECT * FROM EXA_SYSCAT;

Conclusion
In this article, we deployed a single-node Exasol database in a Docker container and went through the EXAConf file. In the future, I will be sharing new articles about running Exasol on Docker and will analyze the EXAConf file and Exasol services in depth.

Additional References
https://github.com/EXASOL/docker-db
https://docs.docker.com/config/containers/resource_constraints/
https://docs.exasol.com/administration/on-premise/manage_storage/volumes.htm?Highlight=volumes
Background
This article describes the calculation of the optimal (maximum) DB RAM on a:

- 4+1 system with one database (dedicated environment)
- 4+1 system with two databases (shared environment)

The calculation of the OS memory per node stays the same for both environments. Shared environments are not recommended for production systems.

Example setup: The 4+1 cluster contains four active data nodes and one standby node. Each node has 384 GiB of main memory.

How to calculate Database RAM

OS Memory per Node
It is vital for the database that there is enough memory allocatable through the OS. We recommend reserving at least 10% of the main memory on each node. This prevents the nodes from swapping under high load (many sessions).

    Main Memory per Node * 0.1 = OS Memory per Node
    384 * 0.1 = 38.4 -> rounded up to 39 GiB

In order to set this value, the database needs to be shut down. In EXAoperation, go to 'Configuration > Network' - "OS Memory/Node (GiB)".

Maximum DB RAM (dedicated environment)

    (Main Memory per Node - OS Memory per Node) * Number of active Nodes = Maximum DB RAM

Example: 4 data nodes with 384 GiB (Main Memory per Node) and 39 GiB (OS Memory per Node):

    (384 GiB - 39 GiB) * 4 = 1380 GiB

Maximum DB RAM (shared environment)
Example:

- Database "one" on four data nodes (exa_db1)
- Database "two" on two data nodes (exa_db2)

As before, the "Maximum DB RAM" is 1380 GiB. With two databases sharing the Maximum DB RAM, we need to recalculate and redistribute it:

    Maximum DB RAM / Number of Databases = Maximum DB RAM per database
    1380 GiB / 2 = 690 GiB

For database "one" (exa_db1), which is running on all four nodes, 690 GiB DB RAM can be configured. The smaller database "two" (exa_db2) is running on two nodes, therefore "Maximum DB RAM per database" needs to be divided by the number of data nodes it's running on (2):

    Maximum DB RAM per database / Number of active Nodes = Maximum DB RAM for database "two"
    690 GiB / 2 = 345 GiB
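The same arithmetic is easy to script. A minimal shell sketch, assuming the example's values (whole GiB, rounding the OS share up):

    $ node_mem=384; nodes=4; databases=2
    $ os_mem=$(( (node_mem + 9) / 10 ))               # ~10% of main memory, rounded up
    $ max_dbram=$(( (node_mem - os_mem) * nodes ))    # dedicated environment
    $ per_db=$(( max_dbram / databases ))             # shared environment
    $ echo "OS memory/node: ${os_mem} GiB, max DB RAM: ${max_dbram} GiB, per database: ${per_db} GiB"
    OS memory/node: 39 GiB, max DB RAM: 1380 GiB, per database: 690 GiB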
Additional References
Sizing Considerations
With this article, you will learn how to add and change database parameters and their values.

1. Log in to your Exasol container:

    $ docker exec -it <container_name> /bin/bash

2. Inside the container, go to the /exa/etc/ folder and open the EXAConf file with a text editor of your choice:

    $ cd /exa/etc
    $ vim EXAConf

3. Under the DB section, right above the [[JDBC]] sub-section, add a line that says Params and the necessary parameters:

    [DB : DB1]
    Version = 6.1.5
    MemSize = 6 GiB
    Port = 8563
    Owner = 500 : 500
    Nodes = 11,12,13
    NumActiveNodes = 3
    DataVolume = DataVolume1
    Params = -useIndexWrapper=0 -disableIndexIteratorScan=1
    [[JDBC]]
    BucketFS = bfsdefault
    Bucket = default
    Dir = drivers/jdbc
    [[Oracle]]
    BucketFS = bfsdefault
    Bucket = default
    Dir = drivers/oracle

4. Change the value of Checksum in EXAConf:

    $ sed -i '/Checksum =/c\ Checksum = COMMIT' /exa/etc/EXAConf

5. Commit the changes:

    $ exaconf commit

6. At this point you have 2 options:

6.1. Restart the container:

    $ dwad_client stop-wait <database_instance>    # Stop the database instance (inside the container)
    $ csctrl -d                                    # Stop the storage service (inside the container)
    $ exit                                         # Exit the container
    $ docker restart <container_name>              # Restart the container
    $ docker exec -it <container_name> /bin/bash   # Log in to the container's BASH environment
    $ dwad_client setup-print <database_instance>  # See the database parameters
    ...
    PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -useIndexWrapper=0 -disableIndexIteratorScan=1
    ...

As you can see from the output above, the parameters have been added. However, rebooting the cluster can cause some downtime. In order to shorten the duration of your downtime, you can try the method below.

6.2. Use a configuration file to change the parameters by rebooting just the database, not the container:

    $ dwad_client setup-print <database_instance> > db1.cfg  # Dump the database parameters to a file
    $ vim db1.cfg                                            # Edit the configuration file

When you open the file, find the line starting with PARAMS and add the parameter you need, like:

    PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -useIndexWrapper=0 -disableIndexIteratorScan=1

After adding the parameters, save the file and execute the following commands:

    $ dwad_client stop-wait <database_instance>      # Stop the database instance (inside the container)
    $ dwad_client setup <database_instance> db1.cfg  # Set up the database with the db1.cfg configuration file (inside the container)
    $ dwad_client start-wait <database_instance>     # Start the database instance (inside the container)

This will add the database parameters, but it will not be persistent across reboots. Therefore, by adding the parameters this way you shorten your downtime, but the changes aren't permanent. After doing this, we would recommend also doing method 6.1, in case you decide to reboot sometime in the future.

7. Verify the parameters:

7.1. With dwad_client list.

7.2. With dwad_client print-setup <database_instance>.
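For a quick check from inside the container, you can also filter the setup dump for just the parameters you added; a small sketch, assuming the instance is named DB1 as in the example above:

    $ dwad_client setup-print DB1 | grep -oE '(useIndexWrapper|disableIndexIteratorScan)=[0-9]+'
    useIndexWrapper=0
    disableIndexIteratorScan=1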
This article shows you how to allow internet access for the Community Edition running on VMware
This article goes through how to synchronize archive volumes between clusters using a Python UDF
This article explains how to create a VPN between your AWS cluster and Exasol Support infrastructure
Background
Installation of Protegrity via XML-RPC

Prerequisites
Ask at service@exasol.com for the Protegrity plugin.

How to Install Protegrity via XML-RPC

1. Upload "Plugin.Security.Protegrity-6.6.4.19.pkg" to EXAoperation:
- Log in to EXAoperation (user privilege Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

2. Connect to EXAoperation via XML-RPC (this example uses Python). The following code block is ready to copy and paste into a Python shell:

    from xmlrpclib import Server as xmlrpc
    from ssl import _create_unverified_context as ssl_context
    from pprint import pprint as pp
    from base64 import b64encode
    import getpass
    server = raw_input('Please enter IP or hostname of the license server: ')
    user = raw_input('Enter your user login: ')
    password = getpass.getpass(prompt='Please enter login password: ')
    server = xmlrpc('https://%s:%s@%s/cluster1/' % (user, password, server), context = ssl_context())

3. Show installed plugins:

    >>> pp(server.showPluginList())
    ['Security.Protegrity-6.6.4.19']

4. Show plugin functions:

    >>> pp(server.showPluginFunctions('Security.Protegrity-6.6.4.19'))
    {'INSTALL': 'Install plugin.',
     'UPLOAD_DATA': 'Upload data directory.',
     'UNINSTALL': 'Uninstall plugin.',
     'START': 'Start pepserver.',
     'STOP': 'Stop pepserver.',
     'STATUS': 'Show status of plugin (not installed, started, stopped).'}

5. For further usage, store the plugin name and the node list in variables:

    >>> pname = 'Security.Protegrity-6.6.4.19'
    >>> nlist = server.getNodeList()

6. Install the plugin:

    >>> pp([[node] + server.callPlugin(pname, node, 'INSTALL', '') for node in nlist])
    [['n0011', 0, ''], ['n0012', 0, ''], ['n0013', 0, ''], ['n0014', 0, '']]

7. Get the plugin status on each node:

    >>> pp([[node] + server.callPlugin(pname, node, 'STATUS', '') for node in nlist])
    [['n0011', 0, 'stopped'], ['n0012', 0, 'stopped'], ['n0013', 0, 'stopped'], ['n0014', 0, 'stopped']]

8. Start the plugin on each node:

    >>> pp([[node] + server.callPlugin(pname, node, 'START', '') for node in nlist])
    [['n0011', 0, 'started'], ['n0012', 0, 'started'], ['n0013', 0, 'started'], ['n0014', 0, 'started']]

9. Push the ESA config to the nodes (a server-side task). The client port (pepserver) is listening on TCP 15700.
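To confirm that the pepserver port is actually reachable, a quick probe from the license server can help; a minimal sketch using nc (assuming it is installed and the node names from the example above resolve):

    $ for node in n0011 n0012 n0013 n0014; do nc -zv $node 15700; done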
This article explains how to create a VPN between your GCP cluster and Exasol Support infrastructure
WHAT WE'LL LEARN
In this article you will learn how to update a Docker-based Exasol system.

HOW-TO

1. Ensure that your Docker container is running with persistent storage. This means that your docker run command should contain a -v statement, like the example below:

    $ docker run --detach --network=host --privileged --name <container_name> -v $CONTAINER_EXA:/exa exasol/docker-db:6.2.8-d1 init-sc --node-id <node_id>

2. Log in to your Docker container's BASH environment:

    $ docker exec -it <container_name> /bin/bash

3. Stop the database, stop the storage services, and exit the container:

    $ dwad_client stop-wait <database_instance>
    $ csctrl -d
    $ exit

4. Stop the container:

    $ docker stop <container_name>

5. Rename the existing container. Append old to the name, so that you know this is the container you won't be using anymore:

    $ docker rename <container_name> <container_name_old>

6. Create a new tag for the older container image:

    $ docker tag exasol/docker-db:latest exasol/docker-db:older_image

7. Remove the "latest" tag from the "older_image":

    $ docker rmi exasol/docker-db:latest

8. Pull the latest Docker-based Exasol image:

    $ docker image pull exasol/docker-db:latest

8.1. Or pull the specific version you want. You can view the available versions and pull one of them with the commands below:

    $ wget -q https://registry.hub.docker.com/v1/repositories/exasol/docker-db/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}'
    ...
    6.2.3-d1
    6.2.4-d1
    6.2.5-d1
    ...
    $ docker image pull exasol/docker-db:<image_version>

9. Run the following command to execute the update:

    $ docker run --privileged --rm -v $CONTAINER_EXA:/exa -v <all_other_volumes> exasol/docker-db:latest update-sc

or

    $ docker run --privileged --rm -v $CONTAINER_EXA:/exa -v <all_other_volumes> exasol/docker-db:<image_version> update-sc

The output should be similar to this:

    Updating EXAConf '/exa/etc/EXAConf' from version '6.1.5' to '6.2.0'
    Container has been successfully updated!
    - Image ver. : 6.1.5-d1 --> 6.2.0-d1
    - DB ver.    : 6.1.5 --> 6.2.0
    - OS ver.    : 6.1.5 --> 6.2.0
    - RE ver.    : 6.1.5 --> 6.2.0
    - EXAConf    : 6.1.5 --> 6.2.0

10. Run the container(s) the same way as you did before. Example:

    $ docker run --detach --network=host --privileged --name <container_name> -v $CONTAINER_EXA:/exa exasol/docker-db:latest init-sc --node-id <node_id>

11. You can check the status of your booting container (optional):

    $ docker logs <container_name> -f

12. You can remove the old container (optional):

    $ docker rm <container_name_old>
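Once the container is back up, it's worth confirming that the database is running again and that EXAConf reports the expected version; a rough check, reusing the commands from this section:

    $ docker exec -it <container_name> dwad_client list
    $ docker exec -it <container_name> grep 'Version' /exa/etc/EXAConf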
Portal registration tips.
WHAT WE'LL LEARN
This article will show you how to change your license file in your Docker Exasol environment.

HOW-TO

NOTE: $CONTAINER_EXA is a variable set before deploying an Exasol database container with persistent storage. For more information, please check our Github repo.

1. Ensure that your Docker container is running with persistent storage. This means that your docker run command should contain a -v statement, like the example below:

    $ docker run --detach --network=host --privileged --name <container_name> -v $CONTAINER_EXA:/exa exasol/docker-db:6.1.5-d1 init-sc --node-id <node_id>

2. Copy the new license file to the $CONTAINER_EXA/etc/ folder:

    $ cp /home/user/Downloads/new_license.xml $CONTAINER_EXA/etc/new_license.xml

3. Log in to your Docker container's BASH environment:

    $ docker exec -it <container_name> /bin/bash

4. Go to the /exa/etc folder and rename the old license.xml file:

    $ cd /exa/etc/
    $ mv license.xml license.xml.old

5. Rename the new license file:

    $ mv new_license.xml license.xml

6. Double-check the contents of the directory to ensure that the newer file is named license.xml:

    $ ls -l
    <other files>
    -rw-r--r-- 1 root root 2275 Jul 15 10:13 license.xml.old
    -rw-r--r-- 1 root root 1208 Jul 21 07:38 license.xml
    <other files>

7. Sync the files across all nodes if you are using a multi-node cluster:

    $ cos_sync_files /exa/etc/license.xml
    $ cos_sync_files /exa/etc/license.xml.old

8. Stop the database and storage services:

    $ dwad_client stop-wait <database_instance>
    $ csctrl -d

9. Restart the container:

    $ docker restart <container_name>

10. Log in to the container and check if the proper license is installed:

    $ docker exec -it <container_name> /bin/bash
    $ awk '/SHLVL/ {for(i=1; i<=6; i++) {getline; print}}' /exa/logs/cored/exainit.log | tail -6

You should get an output similar to this:

    [2020-07-21 09:43:50] stage0: You have following license limits:
    [2020-07-21 09:43:50] stage0: >>> Database memory (GiB): 50 Main memory (RAM) usable by databases
    [2020-07-21 09:43:50] stage0: >>> Database raw size (GiB): unlimited Raw Size of Databases (see Value RAW_OBJECT_SIZE in System Tables)
    [2020-07-21 09:43:50] stage0: >>> Database mem size (GiB): unlimited Compressed Size of Databases (see Value MEM_OBJECT_SIZE in System Tables)
    [2020-07-21 09:43:50] stage0: >>> Cluster nodes: unlimited Number of usable cluster nodes
    [2020-07-21 09:43:50] stage0: >>> Expiration date: unlimited Date of license expiration

Check the parameters and see if they correspond to your requested license parameters.
This article describes the Exasol database backup process.
Background
With versions prior to 5.0.15, EXASOL cluster deployments only supported CIDR block 27.1.0.0/16 and subnet 27.1.0.0/16. Now it's possible to use custom CIDR blocks, but with some restrictions, because the CIDR block will automatically be managed by our cluster operating system:

- The VPC CIDR block netmask must be between /16 (255.255.0.0) and /24 (255.255.255.0)
- The first ten IP addresses of the cluster's subnet are reserved and cannot be used

Explanation
Getting the right VPC / subnet configuration: the subnet used for installation of the EXASOL cluster is calculated from the VPC CIDR range.

1. For VPCs with 16 to 22 bit netmasks, the subnet will have a 24 bit mask. For a 24 bit VPC, the subnet will have a 26 bit range.

    VPC CIDR RANGE  -> Subnet mask
    192.168.20.0/16 -> .../24
    192.168.20.0/17 -> .../24
    ...             -> .../24
    192.168.20.0/22 -> .../24
    192.168.20.0/23 -> FORBIDDEN
    192.168.20.0/24 -> .../26
    192.168.20.0/25 -> FORBIDDEN

2. For the EXASOL subnet, the VPC's second available subnet is automatically used. The sipcalc tool (http://sipcalc.tools.uebi.net/) is helpful here, e.g.:

Example 1: The VPC is 192.168.20.0/22 (255.255.252.0) -> a .../24 subnet is used (255.255.255.0). `sipcalc 192.168.20.0/24` calculates a network range of 192.168.20.0 - 192.168.20.255, which is the VPC's first subnet. => EXASOL uses the subsequent subnet, which is 192.168.21.0/24

Example 2: The VPC is 192.168.20.0/24 (255.255.255.0) -> a .../26 subnet is used (255.255.255.192). `sipcalc 192.168.20.0/26` calculates a network range of 192.168.20.0 - 192.168.20.63, which is the VPC's first subnet. => EXASOL uses the subsequent subnet, which is 192.168.20.64/26

3. The first 10 IP addresses of the subnet are reserved. The license server therefore gets the subnet base + 10; the other nodes follow.

This table shows some example configurations:

| VPC CIDR block | Public Subnet | Gateway | License Server IP address | IPMI network host addresses | First additional VLAN address |
|---|---|---|---|---|---|
| 10.0.0.0/16 | 10.0.1.0/24 | 10.0.1.1 | 10.0.1.10 | 10.0.128.0 | 10.0.65.0/16 |
| 192.168.0.0/24 | 192.168.1.0/24 | 192.168.1.1 | 192.168.1.10 | 192.168.1.128 | 192.168.64.0/24 |
| 192.168.128.0/24 | 192.168.129.0/24 | 192.168.128.1 | 192.168.129.10 | 192.168.129.128 | 192.168.32.0/24 |
| 192.168.20.0/22 | 192.168.21.0/24 | 192.168.21.1 | 192.168.21.10 | 192.168.21.128 | |
| 192.168.16.0/24 | 192.168.16.64/26 | 192.168.16.65 | 192.168.16.74 | 192.168.16.96 | 192.168.128.0/26 |
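The derivation in rules 1 and 2 can be sketched in a few lines of shell. This is only an illustration of the rules above, not an official calculator; it assumes the argument is the VPC's network base address:

    # subnet.sh - rough sketch of the EXASOL subnet derivation described above
    vpc="$1"                       # e.g. 192.168.20.0/22
    ip="${vpc%/*}"; bits="${vpc#*/}"
    IFS=. read -r o1 o2 o3 o4 <<< "$ip"
    if [ "$bits" -ge 16 ] && [ "$bits" -le 22 ]; then
        echo "$o1.$o2.$((o3 + 1)).0/24"        # second /24 of the VPC
    elif [ "$bits" -eq 24 ]; then
        echo "$o1.$o2.$o3.$((o4 + 64))/26"     # second /26 of the VPC
    else
        echo "/$bits is not supported (/23 and /25 are forbidden)" >&2; exit 1
    fi

For instance, `bash subnet.sh 192.168.20.0/22` prints 192.168.21.0/24, matching Example 1.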
Additional References
https://docs.exasol.com/administration/aws.htm
With this article, you will learn how to add an LDAP server for your database.

1. Log in to your Exasol container:

    $ docker exec -it <container_name> /bin/bash

2. Inside the container, go to the /exa/etc/ folder and open the EXAConf file with a text editor of your choice:

    $ cd /exa/etc
    $ vim EXAConf

3. Under the DB section, right above the [[JDBC]] sub-section, add a line that says Params with the value shown below:

    [DB : DB1]
    Version = 6.1.5
    MemSize = 6 GiB
    Port = 8563
    Owner = 500 : 500
    Nodes = 11,12,13
    NumActiveNodes = 3
    DataVolume = DataVolume1
    Params = -LDAPServer="ldap://<your_ldap_server.your_domain>"
    [[JDBC]]
    BucketFS = bfsdefault
    Bucket = default
    Dir = drivers/jdbc
    [[Oracle]]
    BucketFS = bfsdefault
    Bucket = default
    Dir = drivers/oracle

NOTE: You can also use ldaps instead of ldap.

4. Change the value of Checksum in EXAConf:

    $ sed -i '/Checksum =/c\ Checksum = COMMIT' /exa/etc/EXAConf

5. Commit the changes:

    $ exaconf commit

6. At this point you have 2 options:

6.1. Restart the container:

    $ dwad_client stop-wait <database_instance>    # Stop the database instance (inside the container)
    $ csctrl -d                                    # Stop the storage service (inside the container)
    $ exit                                         # Exit the container
    $ docker restart <container_name>              # Restart the container
    $ docker exec -it <container_name> /bin/bash   # Log in to the container's BASH environment
    $ dwad_client setup-print <database_instance>  # See the database parameters
    ...
    PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -LDAPServer="ldap://your_ldap_server.your_domain"
    ...

As you can see from the output above, the parameters have been added. However, rebooting the cluster can cause some downtime. In order to shorten the duration of your downtime, you can try the method below.

6.2. Use a configuration file to change the parameters by rebooting just the database, not the container:

    $ dwad_client setup-print <database_instance> > db1.cfg  # Dump the database parameters to a file
    $ vim db1.cfg                                            # Edit the configuration file

When you open the file, find the line starting with PARAMS and add the parameter you need, like:

    PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -LDAPServer="ldap://your_ldap_server.your_domain"

After adding the parameters, save the file and execute the following commands:

    $ dwad_client stop-wait <database_instance>      # Stop the database instance (inside the container)
    $ dwad_client setup <database_instance> db1.cfg  # Set up the database with the db1.cfg configuration file (inside the container)
    $ dwad_client start-wait <database_instance>     # Start the database instance (inside the container)

This will add the database parameters, but it will not be persistent across reboots. Therefore, by adding the parameters this way you shorten your downtime, but the changes aren't permanent. After doing this, we would recommend also doing method 6.1, in case you decide to reboot sometime in the future.

7. Verify the parameters:

7.1. With dwad_client list.

7.2. With dwad_client print-setup <database_instance>.
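Independently of the database parameters, you can sanity-check that the LDAP server itself is reachable from the host. A minimal anonymous query of the root DSE, assuming the OpenLDAP client tools are installed and your server permits anonymous reads:

    $ ldapsearch -H ldap://your_ldap_server.your_domain -x -s base -b "" namingContexts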
Server installation

- Minimal CentOS 7
- Register at Splunk and download, e.g., the free version
- Download splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

Install the RPM; the target directory will be /opt/splunk:

    rpm -ivh splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

Start Splunk, accept the EULA and enter a username and password:

    /opt/splunk/bin/splunk start

Create an SSH port forward to access the web UI. NOTE: If /etc/hosts is configured properly and name resolution is working, no port forwarding is needed.

    ssh root@HOST-IP -L8000:localhost:8000

Log in with the username and password you provided during the installation:

    https://localhost:8000

Set up an index to store data. From the web UI go to:
- Settings > Indexes > New Index
- Name: remotelogs
- Type: Events
- Max Size: e.g. 20GB
- Save

Create a new listener to receive data. From the web UI go to:
- Settings > Forwarding and receiving > Configure receiving > Add new
- New Receiving Port: 9700
- Save

Restart Splunk:

    /opt/splunk/bin/splunk restart

Client installation (Splunk Universal Forwarder)

Download splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm and install it via rpm; the target directory will be /opt/splunkforwarder:

    rpm -ivh splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

Start the Splunk forwarder, accept the EULA and enter the same username and password as for the Splunk server:

    /opt/splunkforwarder/bin/splunk start

Set up the forward server and monitor. Add the Splunk server as a server to receive forwarded log files (same username and password as before):

    /opt/splunkforwarder/bin/splunk add forward-server HOST-IP:9700 -auth USER:PASSWORD

Add a log file, e.g. audit.log from auditd. This requires the log file location, the type of logs, and the index we created before:

    /opt/splunkforwarder/bin/splunk add monitor /var/log/audit/audit.log -sourcetype linux_logs -index remotelogs

Check if the forward server and log files have been enabled; restart the forwarder if nothing happens:

    /opt/splunkforwarder/bin/splunk list monitor
    Splunk username: admin
    Password:
    Monitored Directories:
        $SPLUNK_HOME/var/log/splunk
            /opt/splunkforwarder/var/log/splunk/audit.log
            /opt/splunkforwarder/var/log/splunk/btool.log
            /opt/splunkforwarder/var/log/splunk/conf.log
            /opt/splunkforwarder/var/log/splunk/first_install.log
            /opt/splunkforwarder/var/log/splunk/health.log
            /opt/splunkforwarder/var/log/splunk/license_usage.log
            /opt/splunkforwarder/var/log/splunk/mongod.log
            /opt/splunkforwarder/var/log/splunk/remote_searches.log
            /opt/splunkforwarder/var/log/splunk/scheduler.log
            /opt/splunkforwarder/var/log/splunk/searchhistory.log
            /opt/splunkforwarder/var/log/splunk/splunkd-utility.log
            /opt/splunkforwarder/var/log/splunk/splunkd_access.log
            /opt/splunkforwarder/var/log/splunk/splunkd_stderr.log
            /opt/splunkforwarder/var/log/splunk/splunkd_stdout.log
            /opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log
        $SPLUNK_HOME/var/log/splunk/license_usage_summary.log
            /opt/splunkforwarder/var/log/splunk/license_usage_summary.log
        $SPLUNK_HOME/var/log/splunk/metrics.log
            /opt/splunkforwarder/var/log/splunk/metrics.log
        $SPLUNK_HOME/var/log/splunk/splunkd.log
            /opt/splunkforwarder/var/log/splunk/splunkd.log
        $SPLUNK_HOME/var/log/watchdog/watchdog.log*
            /opt/splunkforwarder/var/log/watchdog/watchdog.log
        $SPLUNK_HOME/var/run/splunk/search_telemetry/*search_telemetry.json
        $SPLUNK_HOME/var/spool/splunk/...stash_new
    Monitored Files:
        $SPLUNK_HOME/etc/splunk.version
        /var/log/all.log
        /var/log/audit/audit.log

Check if the Splunk server is available:

    /opt/splunkforwarder/bin/splunk list forward-server
    Active forwards:
        10.70.0.186:9700
    Configured but inactive forwards:
        None

You can now search the logs in the web UI.

Collecting Metrics

Download the Splunk Unix Add-on splunk-add-on-for-unix-and-linux_602.tgz, unpack it, and copy it to the splunkforwarder app folder:

    tar xf splunk-add-on-for-unix-and-linux_602.tgz
    mv Splunk_TA_nix /opt/splunkforwarder/etc/apps/

Enable the metrics you want to receive by setting disabled = 0 for the corresponding stanzas:

    vim /opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/inputs.conf

Restart the Splunk forwarder:

    /opt/splunkforwarder/bin/splunk stop
    /opt/splunkforwarder/bin/splunk start
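The same add monitor mechanism works for Exasol's own logs. For example, to ship exainit.log from a Docker-based node into the index created above (the path is the one used earlier in this section; adjust it to your deployment):

    /opt/splunkforwarder/bin/splunk add monitor /exa/logs/cored/exainit.log -sourcetype linux_logs -index remotelogs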
Background
Installation of FSC Linux agents via XML-RPC

Prerequisites
Ask at service@exasol.com for the FSC monitoring plugin.

How to Install FSC Linux Agents via XML-RPC

1. Upload "Plugin.Administration.FSC-7.31-16.pkg" to EXAoperation:
- Log in to EXAoperation (user privilege Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

2. Connect to EXAoperation via XML-RPC (this example uses Python):

    >>> import xmlrpclib, pprint
    >>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show plugin functions:

    >>> pprint.pprint(s.showPluginFunctions('Administration.FSC-7.31-16'))
    {'INSTALL_AND_START': 'Install and start plugin.',
     'UNINSTALL': 'Uninstall plugin.',
     'START': 'Start FSC and SNMP services.',
     'STOP': 'Stop FSC and SNMP services.',
     'RESTART': 'Restart FSC and SNMP services.',
     'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
     'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
     'STATUS': 'Show status of plugin (not installed, started, stopped).'}

4. Install FSC and check the return code:

    >>> sts, ret = s.callPlugin('Administration.FSC-7.31-16','n10','INSTALL_AND_START')
    >>> ret
    0

5. Upload snmpd.conf (an example is attached to this article):

    >>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Start FSC and check the status:

    >>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'RESTART')
    >>> ret
    >>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'STATUS')
    >>> ret
    [0, 'started']

7. Repeat steps 4-6 for each node.

Additional Notes
For monitoring the FSC agents, go to http://support.ts.fujitsu.com/content/QuicksearchResult.asp and search for "ServerView Integration Pack for NAGIOS".
This article explains how to set up a new BucketFS bucket.
Certified Hardware List
The hardware certified by Exasol can be found at the link below:

Certified Hardware List

If your preferred hardware is not certified, refer to our Certification Process for more information on this process.