Environment Management
Manage the environment around the database, such as Cloud, Monitoring, EXAoperation, and scalability
Background

Deploy a single-node Exasol database as a Docker image for testing purposes

Blog snapshot

This blog will show you how to deploy a single-node Exasol database as a Docker image for testing purposes. Before we go into the step-by-step guide, please read through the following prerequisites and recommendations to make sure that you're prepared.

Prerequisites

Host OS: Currently, Exasol only supports Docker on Linux. It's not possible to use Docker for Windows to deploy the Exasol database, because Exasol requires O_DIRECT access.

Docker-installed Linux machine: In this article, I'm going to use a CentOS 7.6 virtual machine with the latest version of Docker (currently version 19.03).

Privileged mode: Docker privileged mode is required for permissions management, UDF support, and environment configuration and validation (sysctl, hugepages, block devices, etc.).

Memory requirements for the host environment: Each database instance needs at least 2 GiB RAM. Exasol recommends that the host reserves at least 4 GiB RAM for each running Exasol container. Since I'm going to deploy a single-node container in this article, I will use 6 GiB RAM for the VM.

Service requirements for the host environment: NTP should be configured on the host OS. Also, the RNG daemon must be running to provide enough entropy for the Exasol services in the container.

Recommendations

Performance optimization: Exasol strongly recommends setting the CPU governor on the host to performance, to avoid serious performance problems. You can use the cpupower utility or the loop below to set it.

Using the cpupower utility:

$ sudo cpupower -c all frequency-set -g performance

Or change the content of the scaling_governor files:

$ for F in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance >$F; done

Hugepages: Exasol recommends enabling hugepages for hosts with at least 64 GB RAM.
To do so, we have to set the Hugepages option in EXAConf to either auto, host, or the number of hugepages per container. If we set it to auto, the number of hugepages will be determined automatically, depending on the DB settings. When setting it to host, the number of hugepages from the host system will be used (i.e. /proc/sys/vm/nr_hugepages will not be changed). However, /proc/sys/vm/hugetlb_shm_group will always be set to an internal value!

Resource limitation: It's possible to limit the resources of the Exasol container with the following docker run options:

$ docker run --cpuset-cpus="1,2,3,4" --memory=20g --memory-swap=20g --memory-reservation=10g exasol/docker-db:<version>

This is especially recommended if we need multiple Exasol containers (or other services) on the same host. In that case, we should evenly distribute the available CPUs and memory throughout the Exasol containers. You can find more detailed information at https://docs.docker.com/config/containers/resource_constraints/

How to deploy a single-node Exasol database as a Docker image

Step 1 Create a directory to store data from the container persistently

To store all persistent data from the container, I'm going to create a directory. I will name it "container_exa" and create it in the home folder of the Linux user.

$ mkdir $HOME/container_exa/

Set the CONTAINER_EXA variable to the folder:

$ echo 'export CONTAINER_EXA="$HOME/container_exa/"' >> ~/.bashrc && source ~/.bashrc

Step 2 Create a configuration file for the Exasol database and Docker container

The command for creating a configuration file is:

$ docker run -v "$CONTAINER_EXA":/exa --rm -i exasol/docker-db:<version> init-sc --template --num-nodes 1

Since I'm going to use the latest version of Exasol (currently 6.2.6), I will use the latest tag. --num-nodes is the number of containers; we need to change this value if we want to deploy a cluster.
$ docker run -v "$CONTAINER_EXA":/exa --rm -i exasol/docker-db:latest init-sc --template --num-nodes 1

NOTE: You need to add the --privileged option because the host directory belongs to root.

After the command has finished, the directory $CONTAINER_EXA contains all subdirectories as well as an EXAConf template (in /etc).

Step 3 Complete the configuration file

The configuration has to be completed before the Exasol DB container can be started. The configuration file is EXAConf and it's stored in the "$CONTAINER_EXA/etc" folder. To be able to start a container, these options have to be configured:

- A private network for all nodes (a public network is not mandatory in the Docker version of Exasol DB)
- EXAStorage device(s)
- EXAVolume configuration
- Network port numbers
- Nameservers

Many other options can be configured in the EXAConf file; I will post articles about most of them.

1) A private network for the node

$ vim $CONTAINER_EXA/etc/EXAConf

[Node : 11]
PrivateNet = 10.10.10.11/24 # <-- replace with the real network

In this case, the IP address of the Linux virtual machine is 10.1.2.4/24.

2) EXAStorage device configuration

Use the dev.1 file as an EXAStorage device for Exasol DB and mount the LVM disk to it.

3) EXAVolume configuration

Configure the volume size for Exasol DB before starting the container. There are 3 types of volumes available for Exasol, each serving a different purpose. You can find detailed information at https://docs.exasol.com/administration/on-premise/manage_storage/volumes.htm?Highlight=volumes

Since it's recommended to use less disk space than the size of the LVM disk (because Exasol will create a temporary volume and there should be free disk space for it), I'd recommend using 20 GiB of space for the volume. The actual size of the volume increases or decreases depending on the data stored.
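Edits like the PrivateNet change in 1) can also be scripted instead of made in vim. A minimal sketch, run here against a mock copy of EXAConf so it is safe to try anywhere (GNU sed assumed; for the real file, point EXACONF at $CONTAINER_EXA/etc/EXAConf):

```shell
#!/bin/sh
# Scripted version of the PrivateNet edit, against a mock EXAConf fragment.
EXACONF=/tmp/EXAConf.mock
cat > "$EXACONF" <<'EOF'
[Node : 11]
    PrivateNet = 10.10.10.11/24
EOF
# Replace the template address with the VM's real one (10.1.2.4/24 here).
sed -i 's|PrivateNet = .*|PrivateNet = 10.1.2.4/24|' "$EXACONF"
cat "$EXACONF"
```

The same pattern works for any single-line EXAConf setting; just review the file afterwards before starting the container.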
4) Network port numbers

Since you should use the host network mode (see "Start the cluster" below), you have to adjust the port numbers used by the Exasol services. The one that's most likely to collide is the SSH daemon, which uses the well-known port 22. I'm going to change it to 2222 in the EXAConf file.

The other Exasol services (e.g. Cored, BucketFS, and the DB itself) use port numbers above 1024. However, you can change them all by editing EXAConf. In this example, I'm going to use the default ports:

Port 22 – SSH connection
Port 443 – XMLRPC
Port 8888 – port of the database
Port 6583 – port for BucketFS

5) Nameservers

We can define a comma-separated list of nameservers for this cluster in EXAConf under the [Global] section. Use the Google DNS address 8.8.8.8.

Set the checksum within EXAConf to 'COMMIT'. This is the EXAConf integrity check (introduced in version 6.0.7-d1) that protects EXAConf from accidental changes and detects file corruption. It can be found in the 'Global' section, near the top of the file. Please also adjust the Timezone depending on your requirements.

Step 4 Create the EXAStorage device files

EXAStorage is a distributed storage engine: all data is stored inside volumes, and it also provides a failover mechanism. I'd recommend using a 32 GB LVM disk for EXAStorage:

$ lsblk

IMPORTANT: Each device should be slightly bigger (~1%) than the required space for the volume(s), because a part of it will be reserved for metadata and checksums.

Step 5 Start the cluster

The cluster is started by creating all containers individually and passing each of them its ID from the EXAConf. Since we're deploying a single-node Exasol DB, the node ID will be n11 and the command is:

$ docker run --name exasol-db --detach --network=host --privileged -v $CONTAINER_EXA:/exa -v /dev/mapper/db-storage:/exa/data/storage/dev.1 exasol/docker-db:latest init-sc --node-id 11

NOTE: This example uses the host network stack, i.e.
the containers directly access a host interface to connect. There is no need to expose ports in this mode: they are all accessible on the host.

Let's use the "docker logs" command to check the log files:

$ docker logs -f exasol-db

We can see 5 different stages in the logs. Stage 5 is the last one; if we can see that the node is online and the stage is finished, the container and database started successfully.

$ docker container ls

Let's get a bash shell in the container and check the status of the database and volumes:

$ docker exec -it exasol-db bash

Inside the container, you can run some Exasol-specific commands to manage the database and services. You can find some of these commands below:

$ dwad_client shortlist: Lists the names of the databases.
$ dwad_client list: Shows the current status of the databases.

As we can see, the name of the database is DB1 (this can be configured in EXAConf) and the state is running. "Connection state: up" means we can connect to the database via port 8888.

$ csinfo -D: Print HDD info.
$ csinfo -v: Print information about one (or all) volume(s).

As we can see, the size of the data volume is 20.00 GiB. You can also find information about the temporary volume in the output of the csinfo -v command.

Since the database is running and the connection state is up, let's try to connect and run some example SQL queries. You can use any SQL client or the EXAplus CLI to connect. I'm going to use DBeaver in this article; you can find more detailed information at https://docs.exasol.com/connect_exasol/sql_clients/dbeaver.htm

I'm using the public IP address of the virtual machine and port 8888, which is configured as the database port in EXAConf.

By default, the password of the sys user is "exasol".
Let's run an example query:

SELECT * FROM EXA_SYSCAT;

Conclusion

In this article, we deployed a single-node Exasol database in a Docker container and went through the EXAConf file. In the future, I will be sharing new articles about running Exasol on Docker and will analyze the EXAConf file and Exasol services in depth.

Additional References

https://github.com/EXASOL/docker-db
https://docs.docker.com/config/containers/resource_constraints/
https://docs.exasol.com/administration/on-premise/manage_storage/volumes.htm?Highlight=volumes
Background

Deploying a 2+1 Exasol Cluster on Amazon Web Services (AWS)

Post snapshot

This post will show you how to deploy a 2+1 Exasol cluster on Amazon Web Services (AWS). Before we go into the step-by-step guide, please read through the following prerequisites and recommendations to make sure that you're prepared.

Prerequisites

AWS Account: Make sure you have an AWS account with the relevant permissions. If you do not have an AWS account, you can create one from the Amazon Console.

AWS Key Pair: You have a Key Pair created. AWS uses public-key cryptography to secure the log-in information for your instance. For more information on how to create a Key Pair, see Amazon EC2 Key Pairs in the AWS documentation.

Subscription on AWS Marketplace: You must have subscribed to one of the following Exasol subscriptions on AWS Marketplace:

- Exasol Analytic Database (Single Node / Cluster, Bring-Your-Own-License)
- Exasol Analytic Database (Single Node / Cluster, Pay-As-You-Go)

How to deploy a 2+1 Exasol Cluster

Step 1

Open https://cloudtools.exasol.com/ in your browser to access the cloud deployment wizard and choose your cloud provider. In this case, the cloud provider should be Amazon Web Services. Select your region from the drop-down list; I'm going to deploy our cluster in Frankfurt.

Step 2

On the Configuration screen, you see the Basic Configuration page by default. You can choose one of the existing configurations made by Exasol:

Basic Configuration: Shows a minimum specification for your data size.
Balanced Configuration: Shows an average specification for your data size for good performance.
High-Performance Configuration: Shows the best possible specification for your data size for high performance.

In this case, I'm going to choose the Advanced Configuration option. If you are going to deploy a cluster for production purposes, we recommend discussing sizing options with the Exasol support team or using one of the existing configurations made by Exasol.
RAW Data Size (in TB): You can add the required raw data size on your own; otherwise, it will be calculated automatically after setting the instance type and node count.

License Model:

Pay as you go (PAYG): The PAYG license model is a flexible and scalable license model for Exasol's deployment on a cloud platform. In this model, you pay for your cloud resources and the Exasol software through the cloud platform's billing cycle. You can always change your setup later to scale your system up or down, and the billing changes accordingly.

Bring your own license (BYOL): The BYOL license model lets you choose a static license for the Exasol software and dynamic billing for the cloud resources. In this model, you need to purchase a license from Exasol and add it to your cloud instance. This way, you pay only for the cloud resources through the cloud platform's billing cycle and there is no billing for the software. You can always change your setup later to scale your system up or down, and the billing changes accordingly. However, there is a limit for the maximum scaling based on your license type (DB RAM or raw data size). You can find detailed information about licensing at https://docs.exasol.com/administration/aws/licenses.htm

System Type: You can choose between the Exasol Single Node and Enterprise Cluster options. I'm going to choose the Enterprise Cluster option.

Instance Family: You can choose one of the instance types of the AWS EC2 service to deploy virtual machines for the Exasol nodes. You can find detailed information about AWS EC2 instance types at https://aws.amazon.com/ec2/instance-types/

Number of DB Nodes: We need to determine the total number of active data nodes in this section.

After finishing the configuration, we can see the RAW data size calculated automatically for us. On the left side of the screen, we can see the details of our setup on AWS.
If you have a license from Exasol, choose the BYOL option in License Model; this will decrease the Estimated Costs.

Step 3

After clicking Continue to proceed with the deployment, we can see the Summary page. We can review the cluster configuration and choose a deployment option. We have the option to create a new VPC or use an existing VPC for the CloudFormation stack:

Create New VPC will create a new VPC and provision all resources within it.
Use Existing VPC will provision Exasol into an existing VPC subnet of your choice. For more information on VPC, see Amazon Virtual Private Cloud.

Based on this VPC selection, the parameters on the stack creation page on AWS will change when you launch the stack. For more information on the stack parameters, see Template Parameters. If you want to download the configuration files and upload them later to your AWS stack through the CloudFormation Console, you can click the CloudFormation Templates option on the left side.

Click Launch Stack. You will be redirected to the Quick create stack page on AWS.

Step 4

After being redirected to the Quick create stack page on AWS, I'm going to fill in the required stack parameters: Stack Name, Key Pair, SYS User Password, and ADMIN User Password.

In the VPC/Network/Security section, Public IPs is set to false by default. I'm going to set this to true. If you want to keep the Public IP address set to false, then you need to enable a VPN or other methods to be able to access your instance.

(Optional) License is applicable if your subscription model is Bring-your-own-license. Paste the entire content of the license file you have in the space provided.

Click Create Stack to continue deploying Exasol in the CloudFormation Console. You can view the stack you created under AWS CloudFormation > Stacks, with the status CREATE_IN_PROGRESS. Once the stack is created successfully, the status changes to CREATE_COMPLETE.
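If you prefer scripted deployments, the same stack parameters can be kept in a parameter file and passed to the AWS CLI's create-stack call instead of typing them into the Quick create page. A hypothetical fragment — the exact ParameterKey names come from the Template Parameters table for your template, so verify them before use:

```json
[
  { "ParameterKey": "KeyName",       "ParameterValue": "my-key-pair" },
  { "ParameterKey": "SYSPassword",   "ParameterValue": "********" },
  { "ParameterKey": "ADMINPassword", "ParameterValue": "********" },
  { "ParameterKey": "PublicIPs",     "ParameterValue": "true" }
]
```

This keeps deployments reproducible and reviewable alongside the downloaded CloudFormation template.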
Additionally, you can monitor the progress in the Events tab for the stack. For more information about the stack parameters, please check the table at https://docs.exasol.com/cloud_platforms/aws/installation_cf_template.htm?Highlight=Template%20Parameters

Step 5 Determine the Public IP Address

We need the public IP or DNS name displayed in the EC2 Console to connect to the database server. To find the public IP or DNS name:

1. Open the EC2 Dashboard from the AWS Management Console.
2. Click on Running Instances. The Instances page is displayed with all the running instances.
3. Select the name of the instance you created (in this case, exasol-cluster-management_node). We need the IP address of the management node.
4. In the Description section, the IP address displayed for Public DNS (IPv4) is the IP address of the database server.

If the Public IPs parameter for your stack is set to false, you need to enable a VPN or other methods to connect to the database server via the private IP address of the instances.

Step 6 Access the Initialization page

Copy and paste this IP address prefixed with https into a browser. In the case of an Exasol cluster deployment, I need to copy the IP address or DNS name of the management node. After confirming the digital certificate, the following screen is displayed.

Once the installation is complete, I will be redirected to the EXAoperation screen. It may take up to 45 minutes for EXAoperation to be online after deployment. You can log in with the admin user name and password provided while creating your stack.
Step 7 Connect to the database

In this case (a 2+1 cluster deployment), I need to use the public IP address of a data node along with the admin user name and password to connect from an SQL client. I can also connect to all the data nodes by entering the public IP addresses of all the nodes separated by commas.

Additional Notes

Connect to Exasol: After installing Exasol on AWS, you can do the following:

- Install drivers required to connect to other tools.
- Connect SQL clients to Exasol.
- Connect Business Intelligence (BI) tools to Exasol.
- Connect Data Integration (ETL) tools to Exasol.
- Connect Data Warehouse Automation tools to Exasol.

Load Data: After you have connected your choice of tool to Exasol, you can load your data into Exasol and process it further. To learn more about loading data into Exasol, see Loading Data.

Conclusion

In this article, we deployed a 2+1 Exasol cluster on AWS. In the future, I will be sharing new articles about managing the Exasol cluster on AWS, using Lambda functions to schedule the start/stop of a cluster, etc.

Additional References

https://cloudtools.exasol.com
https://docs.exasol.com/administration/aws.htm
Background

Installation of Protegrity via XML-RPC

Prerequisites

Ask at service@exasol.com for the Protegrity plugin.

How to Install Protegrity via XML-RPC

1. Upload "Plugin.Security.Protegrity-6.6.4.19.pkg" to EXAoperation

- Log in to EXAoperation (user privilege Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

2. Connect to EXAoperation via XML-RPC (this example uses Python 2)

The following code block is ready to copy and paste into a Python shell:

from xmlrpclib import Server as xmlrpc
from ssl import _create_unverified_context as ssl_context
from pprint import pprint as pp
from base64 import b64encode
import getpass

server = raw_input('Please enter IP or hostname of the license server: ')
user = raw_input('Enter your user login: ')
password = getpass.getpass(prompt='Please enter login password: ')
server = xmlrpc('https://%s:%s@%s/cluster1/' % (user, password, server), context=ssl_context())

3. Show installed plugins

>>> pp(server.showPluginList())
['Security.Protegrity-6.6.4.19']

4. Show plugin functions

>>> pp(server.showPluginFunctions('Security.Protegrity-6.6.4.19'))
{'INSTALL': 'Install plugin.',
 'UPLOAD_DATA': 'Upload data directory.',
 'UNINSTALL': 'Uninstall plugin.',
 'START': 'Start pepserver.',
 'STOP': 'Stop pepserver.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).'}

5. For further usage, we store the plugin name and the node list in variables:

>>> pname = 'Security.Protegrity-6.6.4.19'
>>> nlist = server.getNodeList()

6. Install the plugin

>>> pp([[node] + server.callPlugin(pname, node, 'INSTALL', '') for node in nlist])
[['n0011', 0, ''], ['n0012', 0, ''], ['n0013', 0, ''], ['n0014', 0, '']]

7. Get the plugin status on each node:

>>> pp([[node] + server.callPlugin(pname, node, 'STATUS', '') for node in nlist])
[['n0011', 0, 'stopped'], ['n0012', 0, 'stopped'], ['n0013', 0, 'stopped'], ['n0014', 0, 'stopped']]

8.
Start the plugin on each node:

>>> pp([[node] + server.callPlugin(pname, node, 'START', '') for node in nlist])
[['n0011', 0, 'started'], ['n0012', 0, 'started'], ['n0013', 0, 'started'], ['n0014', 0, 'started']]

9. Push the ESA config to the nodes (a server-side task). The client port (pepserver) is listening on TCP 15700.
Docker is a PaaS ("Platform as a Service") product that uses OS-level virtualization technology to deploy software in relatively small packages called containers, which are completely isolated and have their own software, libraries, and even network. Exasol supports Docker as a platform, and you can easily obtain our image via GitHub or Docker Hub. The tutorial below will show you how to install Docker on Ubuntu and other Debian-based systems (however, the repository to install will vary).

NOTE: This method was tested on Ubuntu Server 18.04 (Bionic Beaver) and 20.04 (Focal Fossa)

1. Update your package list:

$ sudo apt update

2. Install the necessary packages:

$ sudo apt install apt-transport-https ca-certificates curl software-properties-common

3. Add the GPG key for the Docker repository:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

4. Add the official Docker repository:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable" (for 18.04)
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" (for 20.04)

5. Update the package list again (you should see the Docker package list being downloaded):

$ sudo apt update

6. Install Docker Community Edition:

$ sudo apt install docker-ce -y

7. Check if Docker is running:

$ sudo systemctl status docker

7.1. If it is not running, run the following commands:

$ sudo systemctl start docker
$ sudo systemctl enable docker

8. Run the "Hello World" container to verify:

$ docker run hello-world
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from Docker Hub. (amd64)
3. The Docker daemon created a new container from that image, which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

9. Download other images via:

$ docker image pull <image_name>

After you finish the steps above, you are ready to continue installing your Exasol system. You can do so by following the instructions at How to Deploy a Single-Node Exasol Database as a Docker Image for Testing Purposes.
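One more tip on step 4 above: it hard-codes the release codename (bionic/focal). You can derive the repository line from the codename instead. A sketch, shown against a mock os-release file so it runs anywhere; on a real host, source /etc/os-release:

```shell
#!/bin/sh
# Build the Docker repository line from the release codename instead of
# hard-coding bionic/focal. A mock os-release file stands in for
# /etc/os-release so this snippet is safe to run anywhere.
OSREL=/tmp/os-release.mock
printf 'NAME="Ubuntu"\nVERSION_CODENAME=focal\n' > "$OSREL"
. "$OSREL"
echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $VERSION_CODENAME stable"
```

Pass the echoed line to add-apt-repository and the same step works unchanged on both 18.04 and 20.04.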
WHAT WE'LL LEARN?

In this article you will learn how to update a Docker-based Exasol system.

HOW-TO

1. Ensure that your Docker container is running with persistent storage. This means that your docker run command should contain a -v statement, like the example below:

$ docker run --detach --network=host --privileged --name <container_name> -v $CONTAINER_EXA:/exa exasol/docker-db:6.2.8-d1 init-sc --node-id <node_id>

2. Log in to your Docker container's bash environment:

$ docker exec -it <container_name> /bin/bash

3. Stop the database and storage services, then exit the container:

$ dwad_client stop-wait <database_instance>
$ csctrl -d
$ exit

4. Stop the container:

$ docker stop <container_name>

5. Rename the existing container. Append "old" to the name so you know this is the container you won't be using anymore:

$ docker rename <container_name> <container_name_old>

6. Create a new tag for the old container image:

$ docker tag exasol/docker-db:latest exasol/docker-db:older_image

7. Remove the "latest" tag from the "older_image":

$ docker rmi exasol/docker-db:latest

8. Pull the latest Docker-based Exasol image:

$ docker image pull exasol/docker-db:latest

8.1. Or pull the specific version you want. You can view the available versions and pull one of them with the commands below:

$ wget -q https://registry.hub.docker.com/v1/repositories/exasol/docker-db/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}'
...
6.2.3-d1
6.2.4-d1
6.2.5-d1
...
$ docker image pull exasol/docker-db:<image_version>

9. Run the following command to execute the update:

$ docker run --privileged --rm -v $CONTAINER_EXA:/exa -v <all_other_volumes> exasol/docker-db:latest update-sc

or

$ docker run --privileged --rm -v $CONTAINER_EXA:/exa -v <all_other_volumes> exasol/docker-db:<image_version> update-sc

The output should be similar to this:

Updating EXAConf '/exa/etc/EXAConf' from version '6.1.5' to '6.2.0'
Container has been successfully updated!
- Image ver. : 6.1.5-d1 --> 6.2.0-d1
- DB ver.    : 6.1.5 --> 6.2.0
- OS ver.    : 6.1.5 --> 6.2.0
- RE ver.    : 6.1.5 --> 6.2.0
- EXAConf    : 6.1.5 --> 6.2.0

10. Run the container(s) the same way as you did before. Example:

$ docker run --detach --network=host --privileged --name <container_name> -v $CONTAINER_EXA:/exa exasol/docker-db:latest init-sc --node-id <node_id>

11. You can check the status of your booting container (optional):

$ docker logs <container_name> -f

12. You can remove the old container (optional):

$ docker rm <container_name_old>
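The sed/tr/awk pipeline in step 8.1 simply pulls every "name" field out of the registry's JSON tag list. You can convince yourself of that offline by feeding it a canned sample of the response (the sample content is illustrative):

```shell
#!/bin/sh
# Run the tag-extraction pipeline from step 8.1 against a canned sample
# of the registry response, so no network access is needed.
SAMPLE='[{"layer": "", "name": "6.2.3-d1"}, {"layer": "", "name": "6.2.4-d1"}]'
echo "$SAMPLE" | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}'
```

The sed calls strip brackets, quotes, and spaces; tr splits the objects onto separate lines; awk prints the third colon-separated field, which is the tag name.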
Docker is a PaaS ("Platform as a Service") product that uses OS-level virtualization technology to deploy software in relatively small packages called containers, which are completely isolated and have their own software, libraries, and even network. Exasol supports Docker as a platform, and you can easily obtain our image via GitHub or Docker Hub. The tutorial below will show you how to install Docker on CentOS and other RHEL-based systems (however, the repository to install will vary).

NOTE: This method was tested on CentOS 7.7

1. Update your packages:

$ sudo yum update

2. Install the necessary packages:

$ sudo yum install -y yum-utils device-mapper-persistent-data lvm2

3. Add the official Docker repository:

$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo (for CentOS)
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo (for RHEL)
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo (for Fedora)

4. Update the package list again (you should see the Docker package list being downloaded):

$ sudo yum update

5. Install Docker Community Edition:

$ sudo yum install docker-ce -y

6. Check if Docker is running:

$ sudo systemctl status docker

6.1. If it is not running, run the following commands:

$ sudo systemctl start docker
$ sudo systemctl enable docker

7. Run the "Hello World" container to verify:

$ docker run hello-world
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from Docker Hub. (amd64)
3. The Docker daemon created a new container from that image, which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

8. Download other images via:

$ docker image pull <image_name>

After you finish the steps above, you are ready to continue installing your Exasol system. You can do so by following the instructions at How to Deploy a Single-Node Exasol Database as a Docker Image for Testing Purposes.
Background

Installation of FSC Linux Agents via XML-RPC

Prerequisites

Ask at service@exasol.com for the FSC Monitoring plugin.

How to Install FSC Linux Agents via XML-RPC

1. Upload "Plugin.Administration.FSC-7.31-16.pkg" to EXAoperation

- Log in to EXAoperation (user privilege Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

2. Connect to EXAoperation via XML-RPC (this example uses Python)

>>> import xmlrpclib, pprint
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show plugin functions

>>> pprint.pprint(s.showPluginFunctions('Administration.FSC-7.31-16'))
{'INSTALL_AND_START': 'Install and start plugin.',
 'UNINSTALL': 'Uninstall plugin.',
 'START': 'Start FSC and SNMP services.',
 'STOP': 'Stop FSC and SNMP services.',
 'RESTART': 'Restart FSC and SNMP services.',
 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
 'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).'}

4. Install FSC and check the return code

>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'INSTALL_AND_START')
>>> ret
0

5. Upload snmpd.conf (example attached to this article)

>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Start FSC and check the status

>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'RESTART')
>>> ret
>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'STATUS')
>>> ret
[0, 'started']

7. Repeat steps 4-6 for each node.

Additional Notes

For monitoring the FSC agents, go to http://support.ts.fujitsu.com/content/QuicksearchResult.asp and search for "ServerView Integration Pack for NAGIOS".
This article describes how the physical hardware must be configured prior to an Exasol installation. There are two categories of settings that must be applied: one for the data nodes and one for the management node.

Data node settings:

BIOS:
- Disable EFI
- Disable C-states (Maximum Performance)
- Enable PXE on "CICN" Ethernet interfaces
- Enable Hyperthreading

Boot order:
- Boot from PXE, 1st NIC (VLAN CICN)

RAID:
- Controller RW cache only enabled with BBU
- All disks are configured as RAID-1 mirrors (best practice; for a different setup, ask EXASOL Support)
- Keep the default strip size
- Keep the default R/W cache ratio

LOM interface:
- Enable SOL (Serial over LAN console)

Management node settings:

BIOS:
- Disable EFI
- Disable C-states (Maximum Performance)
- Enable Hyperthreading

Boot order:
- Boot from disk

RAID:
- Controller RW cache only enabled with BBU
- All disks are configured as RAID-1 mirrors (best practice; for a different setup, ask EXASOL Support)
- Keep the default strip size
- Keep the default R/W cache ratio

LOM interface:
- Enable SOL (Serial over LAN console)
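Once a node is up, some of these BIOS settings can be cross-checked from Linux. A sketch — the procfs/sysfs paths are standard but not guaranteed on every platform, so treat a missing file as "check the BIOS directly":

```shell
#!/bin/sh
# Cross-check BIOS settings from a running Linux node.
# Logical CPU count: with Hyperthreading enabled it is typically twice
# the physical core count.
grep -c '^processor' /proc/cpuinfo
# C-state driver: reports e.g. intel_idle; "none" or an absent file
# suggests C-states are unavailable or disabled on this platform.
cat /sys/devices/system/cpu/cpuidle/current_driver 2>/dev/null || echo "cpuidle info not available"
```

Compare the logical CPU count against the vendor's core count for the node to confirm Hyperthreading took effect.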
This article describes the process of installing HP Service Pack for ProLiant solution using XML-RPC. 1. Upload "Plugin.Administration.HP-SPP-2014.09.0-0.pkg" to EXAoperation Login to EXAoperation (User privilege Administrator) Upload pkg   Configuration>Software>Versions>Browse>Submit 2. Connect to EXAoperation via XML-RPC (this example uses Python) >>> import xmlrpclib, pprint >>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1") 3. Show plugin functions >>> pprint.pprint(s.showPluginFunctions('Administration.HP-SPP-2014.09.0-0')) {'GET_CERTIFICATE': 'Get content of specified certificate.', 'GET_SNMP_CONFIG': 'Download current SNMP configuration.', 'INSTALL_AND_START': 'Install and start plugin.', 'PUT_CERTIFICATE': 'Upload new certificate.', 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.', 'REMOVE_CERTIFICATE': 'Remote a specific certificate.', 'RESTART': 'Restart HP and SNMP services.', 'START': 'Start HP and SNMP services.', 'STATUS': 'Show status of plugin (not installed, started, stopped).', 'STOP': 'Stop HP and SNMP services.', 'UNINSTALL': 'Uninstall plugin.'} 4. Install HP SPP and check for return code >>> sts, ret = s.callPlugin('Administration.HP-SPP-2014.09.0-0','n10','INSTALL_AND_START') >>> ret 0 5. Upload snmpd.conf (Example attached to this article) >>> sts, ret = s.callPlugin('Administration.HP-SPP-2014.09.0-0', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read()) 6. Start HP SPP and check status. 
>>> ret = s.callPlugin('Administration.HP-SPP-2014.09.0-0', 'n10', 'RESTART')
>>> ret
[256, '\nStopping hpsmhd: [ OK ]\n \n Shutting down NIC Agent Daemon (cmanicd): [ OK ]\n \n Shutting down Storage Event Logger (cmaeventd): [ OK ] \n Shutting down FCA agent (cmafcad): [ OK ] \n Shutting down SAS agent (cmasasd): [ OK ] \n Shutting down IDA agent (cmaidad): [ OK ] \n Shutting down IDE agent (cmaided): [ OK ] \n Shutting down SCSI agent (cmascsid): [ OK ] \n Shutting down Health agent (cmahealthd): [ OK ] \n Shutting down Standard Equipment agent (cmastdeqd): [ OK ] \n Shutting down Host agent (cmahostd): [ OK ] \n Shutting down Threshold agent (cmathreshd): [ OK ] \n Shutting down RIB agent (cmasm2d): [ OK ] \n Shutting down Performance agent (cmaperfd): [ OK ] \n Shutting down SNMP Peer (cmapeerd): [ OK ] \nStopping snmpd: [FAILED]\n Using Proliant Standard\n \tIPMI based System Health Monitor\n Shutting down Proliant Standard\n \tIPMI based System Health Monitor (hpasmlited): [ OK ] \n\nStarting hpsmhd: [ OK ]\nStarting snmpd: [FAILED]\nCould not start SNMP daemon.\nCould not restart HP services.']
>>> ret = s.callPlugin('Administration.HP-SPP-2014.09.0-0', 'n10', 'STATUS')
>>> ret
[0, 'started']

7. Repeat steps 4-6 for each node.

8. To monitor the HP SPP, see https://labs.consol.de/de/nagios/check_hpasm/index.html
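Since steps 4-6 have to be repeated for every node, they can be wrapped in a small loop. The sketch below assumes Python 3, where the module shown above as xmlrpclib is named xmlrpc.client; the node names and connection URL are placeholders, and the function is an illustration, not part of the official plugin interface:

```python
from xmlrpc.client import ServerProxy  # Python 2 equivalent: import xmlrpclib

PLUGIN = 'Administration.HP-SPP-2014.09.0-0'

def install_on_nodes(server, nodes, snmp_conf):
    """Run steps 4-6 (install, upload snmpd.conf, restart, status) on each node."""
    results = {}
    for node in nodes:
        # Step 4: install the plugin and check the return code
        sts, ret = server.callPlugin(PLUGIN, node, 'INSTALL_AND_START')
        if ret != 0:
            results[node] = 'install failed (rc=%s)' % ret
            continue
        # Step 5: upload the SNMP configuration
        server.callPlugin(PLUGIN, node, 'PUT_SNMP_CONFIG', snmp_conf)
        # Step 6: restart the services and record the reported status
        server.callPlugin(PLUGIN, node, 'RESTART')
        sts, state = server.callPlugin(PLUGIN, node, 'STATUS')
        results[node] = state  # e.g. 'started'
    return results

# Example usage (URL, path, and node names are placeholders):
# s = ServerProxy("http://user:password@license-server/cluster1")
# conf = open('/home/user/snmpd.conf').read()
# print(install_on_nodes(s, ['n10', 'n11', 'n12'], conf))
```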
This article describes how to install Dell's OpenManage Server Administrator (OMSA) solution via XML-RPC.

1. Upload "Plugin.Administration.DELL-OpenManage-8.1.0.pkg" to EXAoperation:
- Log in to EXAoperation (user privilege required: Administrator)
- Upload the pkg via Configuration > Software > Versions > Browse > Submit

2. Connect to EXAoperation via XML-RPC (this example uses Python):

>>> import xmlrpclib, pprint
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show the plugin functions:

>>> pprint.pprint(s.showPluginFunctions('Administration.DELL-OpenManage-8.1.0'))
{'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
 'INSTALL_AND_START': 'Install and start plugin.',
 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
 'RESTART': 'Restart HP and SNMP services.',
 'START': 'Start HP and SNMP services.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).',
 'STOP': 'Stop HP and SNMP services.',
 'UNINSTALL': 'Uninstall plugin.'}

4. Install Dell OMSA and check the return code:

>>> sts, ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0','n10','INSTALL_AND_START')
>>> ret
0

5. Upload snmpd.conf (an example is attached to this article):

>>> sts, ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Restart OMSA and check the status.
>>> ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0', 'n10', 'RESTART')
>>> ret
[0, '\nShutting down DSM SA Shared Services: [ OK ]\n\n\nShutting down DSM SA Connection Service: [ OK ]\n\n\nStopping Systems Management Data Engine:\nStopping dsm_sa_snmpd: [ OK ]\nStopping dsm_sa_eventmgrd: [ OK ]\nStopping dsm_sa_datamgrd: [ OK ]\nStopping Systems Management Device Drivers:\nStopping dell_rbu:[ OK ]\nStarting Systems Management Device Drivers:\nStarting dell_rbu:[ OK ]\nStarting ipmi driver: \nAlready started[ OK ]\nStarting Systems Management Data Engine:\nStarting dsm_sa_datamgrd: [ OK ]\nStarting dsm_sa_eventmgrd: [ OK ]\nStarting dsm_sa_snmpd: [ OK ]\nStarting DSM SA Shared Services: [ OK ]\n\ntput: No value for $TERM and no -T specified\nStarting DSM SA Connection Service: [ OK ]\n']
>>> ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0', 'n10', 'STATUS')
>>> ret
[0, 'dell_rbu (module) is running\nipmi driver is running\ndsm_sa_datamgrd (pid 760 363) is running\ndsm_sa_eventmgrd (pid 732) is running\ndsm_sa_snmpd (pid 755) is running\ndsm_om_shrsvcd (pid 804) is running\ndsm_om_connsvcd (pid 850 845) is running']

7. Repeat steps 4-6 for each node.

8. To monitor Dell OMSA, see http://folk.uio.no/trondham/software/check_openmanage.html
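The STATUS call above returns one service per line, each ending in "is running" when healthy. A small helper can scan that text and flag any service that does not report as running; this is an illustrative sketch, not part of the plugin API:

```python
def omsa_all_running(status_text):
    """Check a STATUS output string; return (ok, failed_lines).

    ok is True when every non-empty line contains 'is running';
    failed_lines lists the lines that do not.
    """
    failed = [line for line in status_text.splitlines()
              if line and 'is running' not in line]
    return (not failed, failed)

# Applied to a fragment of the STATUS output from step 6:
status = ('dell_rbu (module) is running\n'
          'ipmi driver is running\n'
          'dsm_sa_snmpd (pid 755) is running')
ok, failed = omsa_all_running(status)  # ok is True, failed is []
```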
This article describes how a firewall should be configured in preparation for an Exasol installation, and afterwards for operating the cluster.

Installation - ALLOW:
- SSH access to the license node (TCP ports 20 and 22)
- LOM access to the license node (KVM, with the Exasol installation ISO mounted)
- LOM access to the data nodes (KVM)
- HTTP/S access to all cluster nodes (EXAoperation web UI, TCP 80/443). The web UI runs as a cluster service and can be accessed from any cluster node.

Operating - ALLOW:
- The database port clients use to connect to the database (default TCP 8563)
- HTTP/S access to all cluster nodes (EXAoperation web UI, TCP 80/443)
- SSH access to all cluster members (TCP ports 20 and 22)
- To get the most out of the web UI, each cluster node should be able to access the LOM of every other node (ipmitool is used to provide basic hardware vitality information)
- NTP (TCP/UDP 123)
- DNS (TCP/UDP 53)
- Optional: LDAP (TCP/UDP 389)
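On a host managed with firewalld, the "Operating" port list above could be expressed roughly as follows. This is a sketch only: the zone name is an assumption, the rules open the ports cluster-wide rather than per source network, and you should adapt them to your own security policy:

```shell
# Database port for clients (default 8563/tcp)
firewall-cmd --permanent --zone=public --add-port=8563/tcp
# EXAoperation web UI
firewall-cmd --permanent --zone=public --add-port=80/tcp --add-port=443/tcp
# SSH access to cluster members
firewall-cmd --permanent --zone=public --add-port=20/tcp --add-port=22/tcp
# NTP and DNS
firewall-cmd --permanent --zone=public --add-port=123/tcp --add-port=123/udp
firewall-cmd --permanent --zone=public --add-port=53/tcp --add-port=53/udp
# Optional: LDAP
firewall-cmd --permanent --zone=public --add-port=389/tcp --add-port=389/udp
# Apply the permanent rules
firewall-cmd --reload
```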
Certified Hardware List
The hardware certified by Exasol can be found in the link below:

Certified Hardware List

If your preferred hardware is not certified, refer to our Certification Process for more information.