Environment Management
Manage the environment around the database, such as cloud, monitoring, EXAoperation, and scalability
Background

This article describes how to calculate the optimal (maximum) DB RAM on:
- a 4+1 system with one database (dedicated environment)
- a 4+1 system with two databases (shared environment)

The calculation of the OS memory per node is the same for both environments. Shared environments are not recommended for production systems.

Example Setup

The 4+1 cluster contains four active data nodes and one standby node. Each node has 384 GiB of main memory.

How to calculate Database RAM

OS Memory per Node

It is vital for the database that enough memory remains allocatable through the OS. We recommend reserving at least 10% of the main memory on each node. This prevents the nodes from swapping under high load (many sessions).

Main Memory per Node * 0.1 = OS Memory per Node
384 GiB * 0.1 = 38.4 GiB -> 38 GiB

In order to set this value, the database needs to be shut down. The setting is found in EXAoperation under 'Configuration > Network' - "OS Memory/Node (GiB)".

Maximum DB RAM (dedicated environment)

(Main Memory per Node - OS Memory per Node) * Number of active Nodes = Maximum DB RAM

Example: 4 data nodes with 384 GiB main memory and 38 GiB OS memory per node:

(384 GiB - 38 GiB) * 4 = 1384 GiB

Maximum DB RAM (shared environment)

Example:
- Database "one" (exa_db1) on four data nodes
- Database "two" (exa_db2) on two data nodes

As before, the maximum DB RAM is 1384 GiB. With two databases sharing the maximum DB RAM, we need to recalculate and redistribute it:

Maximum DB RAM / Number of Databases = Maximum DB RAM per database
1384 GiB / 2 = 692 GiB

Database "one" (exa_db1) runs on all four nodes, so it can be configured with the full 692 GiB DB RAM. The smaller database "two" (exa_db2) runs on only two of the four nodes, so its share must be scaled down by the fraction of data nodes it is running on:

Maximum DB RAM per database * Nodes used / Number of active Nodes = Maximum DB RAM for exa_db2
692 GiB * 2 / 4 = 346 GiB

Additional References
Sizing Considerations
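The arithmetic above is easy to script. Below is a minimal sketch (not an official tool); the values are the example figures from this article:

def max_db_ram(main_mem_gib, active_nodes, num_databases=1):
    """Maximum DB RAM in GiB, reserving 10% of each node's memory for the OS."""
    os_mem = int(main_mem_gib * 0.1)                   # 384 -> 38 GiB
    return (main_mem_gib - os_mem) * active_nodes / num_databases

# Dedicated environment: one database on 4 x 384 GiB data nodes
print(max_db_ram(384, 4))                              # 1384 GiB

# Shared environment: two databases; exa_db2 runs on 2 of the 4 nodes
per_db = max_db_ram(384, 4, num_databases=2)           # 692 GiB
print(per_db * 2 / 4)                                  # 346 GiB for exa_db2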
This article describes the process Exasol goes through when certifying hardware. 
Background

This article describes how to calculate the available database disk space.

Prerequisites

To calculate the available database disk space, we need some information first:
- the available disk size on each node
- how many volumes exist
- the sizes of these volumes
- the nodes used by these volumes
- the redundancy of these volumes

Explanation

Let's explain the calculation with an example.

Available disk space on the "d03_storage" partition on all nodes:

Node    Available Disk Size (GiB)
n0011   1786
n0012   1786
n0013   1786
n0014   1786

Existing volumes, their sizes and redundancies (the node assignments follow from the per-node usage below):

Volume  Type       Size (GiB)  Redundancy  Nodes
v0000   Archive    1024        2           n0012, n0013, n0014
v0001   Data       320         2           n0011, n0012, n0013
v0002   Data, tmp  60          1           n0011, n0012, n0013
v0003   Data       120         2           n0013, n0014
v0004   Data, tmp  15          1           n0013, n0014

Calculation of the free disk space

The first step is to divide the size of a volume by the number of nodes it uses to get the segment size (example for v0000):

Size / Number of Nodes = Segment Size
1024 GiB / 3 Nodes = 341.3 GiB/Node

The next step is to multiply the segment size by the redundancy of the volume:

Segment Size * Redundancy = Used Disk Space per Node
341.3 GiB/Node * 2 = 682.6 GiB/Node

This has to be done for every volume. After that, we can fill a table with the used disk space per node (GiB, rounded):

Volume  n0011  n0012  n0013  n0014
v0000   -      683    683    683
v0001   213    213    213    -
v0002   20     20     20     -
v0003   -      -      120    120
v0004   -      -      7      7

Now we can simply subtract the used sizes from the available disk size per node:

n0011: 1786 GiB - 213 GiB - 20 GiB = 1553 GiB
n0012: 1786 GiB - 683 GiB - 213 GiB - 20 GiB = 870 GiB
n0013: 1786 GiB - 683 GiB - 213 GiB - 20 GiB - 120 GiB - 7 GiB = 743 GiB
n0014: 1786 GiB - 683 GiB - 120 GiB - 7 GiB = 976 GiB

The minimum value over all nodes gives us the free available space: 743 GiB with a redundancy of 1. The reason for taking the minimum is that all segments of a volume need to have the same size.

Calculation of the available space from the point of view of the database instance

The database instance is able to control the size of its own data volume: data volumes can grow and be shrunk. Shrinking a data volume is an expensive operation and creates a high amount of disk and network usage. To limit this, the process only shrinks a few blocks after a defined number of COMMIT statements. That is why data volumes won't shrink immediately when data in the database has been deleted. As a result, the data volumes are usually not used completely by the database, and there is some free space inside them.

Database volumes v0001 + v0002:

Volume  Used     Unused   Redundancy  Free on disk            Used on disk
v0001   200 GiB  120 GiB  2           2 * 120 GiB = 240 GiB   2 * 200 GiB = 400 GiB
v0002   30 GiB   20 GiB   1           1 * 20 GiB = 20 GiB     1 * 30 GiB = 30 GiB
Total                                 260 GiB                 430 GiB

Now we can calculate the available space for the database, which is using the volumes v0001 and v0002:

Free = available space for volumes + available space inside the DB volumes
Free = 743 GiB + 260 GiB = 1003 GiB (with a redundancy of 1)

Usage = (1 - (free space / (free space + used space))) * 100%
Usage = (1 - (1003 GiB / (1003 GiB + 430 GiB))) * 100% = 30%

How to get the necessary data for monitoring the free space

To monitor the free space of an EXASolution database instance, we need the following information:
- the available disk space of the storage partition
- all EXAStorage volumes
- the sizes of these volumes
- the redundancy of each volume
- the data volumes used by the database instance we want to check (data + temp)
- the usage of those data volumes

All of this data is provided by the EXAoperation XMLRPC interface since EXASuite 4.2.
You can use the following functions:
- node.getDiskStates(): information about the available space of the storage partition
- database.getDatabaseInfo(): the volumes used by the database and their usage
- storage.getVolumeInfo(volume): volume sizes and redundancies

Please check the EXAoperation user manual for a full description of how to use these functions. You can find the manual on our user portal: https://www.exasol.com/portal/display/DOWNLOAD/6.0
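As a rough illustration, the free-space arithmetic from the example above can be scripted as follows. This is a minimal sketch with the example's figures hardcoded; in practice you would populate the disk and volume data from the XMLRPC functions listed above (their exact return format is described in the EXAoperation manual):

# Available space on the storage partition, GiB per node
disks = {'n0011': 1786, 'n0012': 1786, 'n0013': 1786, 'n0014': 1786}

# (size in GiB, redundancy, nodes) per volume
volumes = [
    (1024, 2, ['n0012', 'n0013', 'n0014']),  # v0000
    (320,  2, ['n0011', 'n0012', 'n0013']),  # v0001
    (60,   1, ['n0011', 'n0012', 'n0013']),  # v0002
    (120,  2, ['n0013', 'n0014']),           # v0003
    (15,   1, ['n0013', 'n0014']),           # v0004
]

free = dict(disks)
for size, redundancy, nodes in volumes:
    used_per_node = size / float(len(nodes)) * redundancy  # segment size * redundancy
    for node in nodes:
        free[node] -= used_per_node

# All segments of a volume must have the same size, so the minimum counts.
disk_free = min(free.values())                 # ~743 GiB with redundancy 1
db_free = disk_free + 260                      # + unused space inside v0001/v0002
usage = (1 - db_free / (db_free + 430)) * 100  # ~30%
print(disk_free, db_free, usage)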
Prerequisites

The datadog-agent has a single dependency, '/bin/sh'. It is safe to simply install it, also with regard to future updates of Exasol.

Installation

For CentOS 7.x, just run the following on each machine (as user root):

DD_API_KEY=<Your-API-Key> bash -c "$(curl -L https://raw.githubusercontent.com/DataDog/datadog-agent/master/cmd/agent/install_script.sh)"

Changing hostnames

The hostname can be changed in '/etc/datadog-agent/datadog.yaml'. Afterward, restart the agent as user root with 'systemctl restart datadog-agent'.
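If you need to set the hostname on many nodes, the edit can be scripted. The following is a rough sketch, assuming the default configuration path; the hostname value written here is only an example:

import subprocess

path = '/etc/datadog-agent/datadog.yaml'
with open(path) as f:
    lines = f.readlines()

with open(path, 'w') as f:
    for line in lines:
        # Replace the (possibly commented-out) hostname setting;
        # 'exasol-n11' is a placeholder, use your node's name.
        if line.startswith('hostname:') or line.startswith('# hostname:'):
            line = 'hostname: exasol-n11\n'
        f.write(line)

# Restart the agent so the new hostname takes effect
subprocess.check_call(['systemctl', 'restart', 'datadog-agent'])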
EXASOL offers a fully preconfigured mobile test system in the scope of a proof of concept to showcase Exasol's capabilities. To ensure a smooth start and execution of the proof of concept, we kindly ask you to take note of the following technical requirements and prerequisites for the operation of a mobile trial cluster.

Chassis
- Dimensions (w/h/d): flight case on wheels, ca. 70 cm / 100 cm / 110 cm
- Weight: ca. 250 kg (when fully equipped)

Power Supply
- 2x 230 V European "Schuko" plug
- Total power consumption around 3-3.5 kW for smaller configurations, or 2x 3-3.5 kW for larger mobile test systems
- 16 A fuse protection per relevant electric circuit

Cooling
Please ensure that the generated heat can be compensated (room size, air conditioning). If the mobile trial cluster is not located in a data center, please also consider the noise generated by the machines.

Network
- Network bandwidth: 10 Gigabit Ethernet recommended; 1 Gigabit and fiber optics also work. Please contact us in the latter case, as an adapter is required.
- IP addresses: Depending on the number of servers the test system contains, please send us 6 or 11 IP addresses of a coherent network in advance, ideally consecutive (example for a system with 6 servers: 192.168.1.10..15/24). If a gateway is required, please also send its address. Optionally, internal addresses of NTP and DNS servers.
- Firewall: Please make sure that the Exasol servers can be reached from the workstations and the relevant servers (ETL, BI, ...) via port 8563 (TCP). For the administration console, access via port 80 and/or 443 is needed. The necessary drivers to access the database must be installed on the client machines.

Data Migration
If data is to be loaded from another database, please make sure that you have the connection parameters and that the database can be reached by Exasol.

Remote Maintenance
Optionally, a remote connection from Exasol to the test system via VPN or SSH tunnel might be helpful for providing remote support during the proof of concept.

Transport
- Delivery: The test system will be delivered by a forwarding agent. Please make sure that the consignment can be accepted on the appointed date and that the installation location is at ground level or can be reached via a sufficiently sized elevator. Please name a contact person for the forwarding agent.
- Pick up: The test system will be picked up by a forwarding agent. Please make sure that the system is ready for pickup on the appointed date.
Background

This article describes how to improve the speed of your SMB share by disabling the policy "Microsoft network server: Digitally sign communications (always)".

Symptoms
- Creating backups takes unusually long
- Performance of the remote archive volume is poor (only a few MiB/s)
- The remote share is on a Microsoft Windows server
- No performance problems when using "smbclient" on other Linux clients

Explanation

Open the "Local Group Policy Editor" on your Windows server and go to "Windows Settings > Security Settings > Local Policies > Security Options". To improve the speed of your share, disable the policy "Microsoft network server: Digitally sign communications (always)". After changing the policy, you should be able to read and write at normal speed again.

Additional References
https://docs.exasol.com/administration/on-premise/manage_storage/create_remote_archive_volume.htm?Highlight=SMB
Background

This article explains how to activate a new license.

Scenario: License upgrade with DB RAM expansion

Prerequisites:
- The valid license file (XML)
- A short downtime to stop and start the database
- EXAoperation user with privilege level "Master"

Explanation

Step 1: Upload the license file to EXAoperation
- In EXAoperation, navigate to "Software"
- On the software page, click on the "License" tab
- Click on the "Browse" button to open a file upload dialog
- Select the new license file and confirm by clicking the "Upload" button
- Refresh the "License" page and review the new license information

Step 2: Stop all databases
- Click on "EXASolution" in the left navigation pane
- Select the checkboxes of all listed database instances
- Click on the "Shutdown" button and wait for all database instances to shut down (Monitoring > Logservice)

Step 3: Adjust DB RAM (optional)
- Click on the DB name
- Click on "Edit"
- Adjust "DB RAM (GiB)" according to your license and click "Apply"

Step 4: Start all databases
- Click on "EXASolution" in the left navigation pane
- Select the checkboxes of all listed database instances
- Start all databases and wait for all instances to be up and running (Monitoring > Logservice)

Additional References
https://docs.exasol.com/administration/on-premise/manage_software/activate_license.htm?Highlight=license
Background

Installation of FSC Linux agents via XML-RPC.

Prerequisites

Ask service@exasol.com for the FSC monitoring plugin.

How to install FSC Linux agents via XML-RPC

1. Upload "Plugin.Administration.FSC-7.31-16.pkg" to EXAoperation
- Log in to EXAoperation (user privilege Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

2. Connect to EXAoperation via XML-RPC (this example uses Python)

>>> import xmlrpclib, pprint
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show the plugin functions

>>> pprint.pprint(s.showPluginFunctions('Administration.FSC-7.31-16'))
{'INSTALL_AND_START': 'Install and start plugin.',
 'UNINSTALL': 'Uninstall plugin.',
 'START': 'Start FSC and SNMP services.',
 'STOP': 'Stop FSC and SNMP services.',
 'RESTART': 'Restart FSC and SNMP services.',
 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
 'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).'}

4. Install FSC and check the return code

>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'INSTALL_AND_START')
>>> ret
0

5. Upload snmpd.conf (an example file is attached to this article)

>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Restart FSC and check its status

>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'RESTART')
>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'STATUS')
>>> ret
[0, 'started']

7. Repeat steps 4-6 for each node (a scripted loop is sketched below).

Additional Notes

For monitoring the FSC agents, go to http://support.ts.fujitsu.com/content/QuicksearchResult.asp and search for "ServerView Integration Pack for NAGIOS".
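Rather than repeating steps 4-6 by hand on every node, the calls can be wrapped in a loop. This is a minimal sketch; the node names are an assumption, so adapt the list to your cluster:

import xmlrpclib

s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")
plugin = 'Administration.FSC-7.31-16'
snmp_conf = file('/home/user/snmpd.conf').read()

for node in ['n10', 'n11', 'n12', 'n13']:  # assumed node names
    sts, ret = s.callPlugin(plugin, node, 'INSTALL_AND_START')
    if ret != 0:
        raise RuntimeError('install failed on %s: %r' % (node, ret))
    s.callPlugin(plugin, node, 'PUT_SNMP_CONFIG', snmp_conf)
    s.callPlugin(plugin, node, 'RESTART')
    print node, s.callPlugin(plugin, node, 'STATUS')[1]  # expect 'started'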
Server installation
- Minimal CentOS 7
- Register at Splunk and download e.g. the free version: splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm
- Install the RPM; the target directory will be /opt/splunk:

rpm -ivh splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

- Start Splunk, accept the EULA, and enter a username and password:

/opt/splunk/bin/splunk start

- Create an SSH port forward to access the web UI. NOTE: if /etc/hosts is configured properly and name resolution is working, no port forwarding is needed.

ssh root@HOST-IP -L8000:localhost:8000

- Log in at https://localhost:8000 with the username and password you provided during the installation.

Set up an index to store data
- From the web UI, go to Settings > Indexes > New Index
- Name: remotelogs
- Type: Events
- Max Size: e.g. 20 GB
- Save

Create a new listener to receive data
- From the web UI, go to Settings > Forwarding and receiving > Configure receiving > Add new
- New Receiving Port: 9700
- Save
- Restart Splunk:

/opt/splunk/bin/splunk restart

Client installation (Splunk Universal Forwarder)
- Download splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm
- Install via rpm; the target directory will be /opt/splunkforwarder:

rpm -ivh splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

- Start the Splunk forwarder, accept the EULA, and enter the same username and password as for the Splunk server:

/opt/splunkforwarder/bin/splunk start

Set up forward-server and monitor
- Add the Splunk server as a server to receive forwarded log files (same username and password as before):

/opt/splunkforwarder/bin/splunk add forward-server HOST-IP:9700 -auth USER:PASSWORD

- Add a log file, e.g. audit.log from auditd. This requires the log file location, the type of logs, and the index created before:

/opt/splunkforwarder/bin/splunk add monitor /var/log/audit/audit.log -sourcetype linux_logs -index remotelogs

- Check whether the forward server and log files have been enabled; restart the splunkforwarder if nothing happens:

/opt/splunkforwarder/bin/splunk list monitor
Splunk username: admin
Password:
Monitored Directories:
    $SPLUNK_HOME/var/log/splunk
        /opt/splunkforwarder/var/log/splunk/audit.log
        /opt/splunkforwarder/var/log/splunk/btool.log
        /opt/splunkforwarder/var/log/splunk/conf.log
        /opt/splunkforwarder/var/log/splunk/first_install.log
        /opt/splunkforwarder/var/log/splunk/health.log
        /opt/splunkforwarder/var/log/splunk/license_usage.log
        /opt/splunkforwarder/var/log/splunk/mongod.log
        /opt/splunkforwarder/var/log/splunk/remote_searches.log
        /opt/splunkforwarder/var/log/splunk/scheduler.log
        /opt/splunkforwarder/var/log/splunk/searchhistory.log
        /opt/splunkforwarder/var/log/splunk/splunkd-utility.log
        /opt/splunkforwarder/var/log/splunk/splunkd_access.log
        /opt/splunkforwarder/var/log/splunk/splunkd_stderr.log
        /opt/splunkforwarder/var/log/splunk/splunkd_stdout.log
        /opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log
    $SPLUNK_HOME/var/log/splunk/license_usage_summary.log
        /opt/splunkforwarder/var/log/splunk/license_usage_summary.log
    $SPLUNK_HOME/var/log/splunk/metrics.log
        /opt/splunkforwarder/var/log/splunk/metrics.log
    $SPLUNK_HOME/var/log/splunk/splunkd.log
        /opt/splunkforwarder/var/log/splunk/splunkd.log
    $SPLUNK_HOME/var/log/watchdog/watchdog.log*
        /opt/splunkforwarder/var/log/watchdog/watchdog.log
    $SPLUNK_HOME/var/run/splunk/search_telemetry/*search_telemetry.json
    $SPLUNK_HOME/var/spool/splunk/...stash_new
Monitored Files:
    $SPLUNK_HOME/etc/splunk.version
    /var/log/all.log
    /var/log/audit/audit.log

Check if the Splunk server is available:

/opt/splunkforwarder/bin/splunk list forward-server
Active forwards:
    10.70.0.186:9700
Configured but inactive forwards:
    None

Search the logs in the web UI.

Collecting Metrics
- Download the Splunk Unix add-on splunk-add-on-for-unix-and-linux_602.tgz
- Unpack it and copy it to the splunkforwarder app folder:

tar xf splunk-add-on-for-unix-and-linux_602.tgz
mv Splunk_TA_nix /opt/splunkforwarder/etc/apps/

- Enable the metrics you want to receive by setting disabled = 0 for the corresponding inputs:

vim /opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/inputs.conf

- Stop and start the splunk forwarder:

/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk start
How to install SuperDoctor for SuperMicro servers via XML-RPC

Step 1: Upload "Plugin.Administration.SuperDoctor-5.5.0-1.0.2-2018-08-21.pkg" to EXAoperation
- Log in to EXAoperation (user privilege Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

Step 2: Connect to EXAoperation via XML-RPC (this example uses Python)

>>> import xmlrpclib, pprint, base64
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

Step 3: Show the current plugin version and plugin functions

>>> pprint.pprint(s.showPluginList())
['Administration.SuperDoctor-5.5.0-1.0.2']
>>> pprint.pprint(s.showPluginFunctions('Administration.SuperDoctor-5.5.0-1.0.2'))
{'ACTIVATE': 'Activate this plugin.',
 'DEACTIVATE': 'Deactivate this plugin.',
 'GET_SNMP_CONFIG': 'Get snmp conf',
 'INSTALL': 'Install this plugin.',
 'PUT_SNMP_CONFIG': 'Put snmp conf',
 'STATUS': 'Check service status.',
 'UNINSTALL': 'Uninstall this plugin.'}

Step 4a: Install the plugin

>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','INSTALL')
>>> ret
'Archive: /usr/opt/EXAplugins/Administration.SuperDoctor-5.5.0-1.0.2/packages/SD5_5.5.0_build.784_linux.zip\n inflating: /tmp/SuperMicro/ReleaseNote.txt \n inflating: /tmp/SuperMicro/SSM_MIB.zip \n inflating: /tmp/SuperMicro/SuperDoctor5Installer_5.5.0_build.784_linux_x64_20170511162151.bin \n inflating: /tmp/SuperMicro/SuperDoctor5_UserGuide.pdf \n inflating: /tmp/SuperMicro/crc32.txt \n inflating: /tmp/SuperMicro/installer_agent.properties '

Step 4b: Alternatively, install with a prepared installer_agent.properties

>>> config = base64.b64encode(open('/path/to/installer_agent.properties').read())
>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','INSTALL', config)
>>> ret

Step 5: Activate the plugin

>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','ACTIVATE')
>>> ret
'Stopping snmpd: [ OK ]\nStarting snmpd: [ OK ]'

Step 6: Deactivate the plugin (if needed). The line "pass .1.3.6.1.4.1.10876 /opt/Supermicro/SuperDoctor5/libs/native/snmpagent" will be removed from /etc/snmp/snmpd.conf and snmpd will be restarted.

>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','DEACTIVATE')
>>> ret
'Stopping snmpd: [ OK ]\nStarting snmpd: [ OK ]\nDeactived'

Step 7: Download the current SNMP configuration

>>> f = open('/path/to/snmpd.conf','w')
>>> f.write(s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17', 'GET_SNMP_CONFIG')[1])
>>> f.close()

Step 8: Upload a new SNMP configuration

>>> upload = base64.b64encode(open('/path/to/snmpd.conf').read())
>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2', 'n17', 'PUT_SNMP_CONFIG', upload)
>>> ret
'Reloading snmpd: [ OK ]'

Step 9: Activate the plugin again

>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','ACTIVATE')
>>> ret
'Stopping snmpd: [ OK ]\nStarting snmpd: [ OK ]'

Step 10: Check the service status

>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','STATUS')
>>> ret
'snmpd status: snmpd (pid 3711) is running...\nsuperdoctor 5 status: SuperDoctor 5 is running (45943).'

Step 11: Uninstall the plugin (if needed)

>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','UNINSTALL')
>>> ret
'Uninstalled'
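The steps above address a single node ('n17'). If SuperDoctor is needed on every node, the same calls can be scripted; this is a rough sketch, and the node list is an assumption to adapt to your cluster:

import xmlrpclib, base64

s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")
plugin = 'Administration.SuperDoctor-5.5.0-1.0.2'
config = base64.b64encode(open('/path/to/installer_agent.properties').read())

for node in ['n17', 'n18', 'n19']:  # assumed node names
    s.callPlugin(plugin, node, 'INSTALL', config)   # install with properties file
    s.callPlugin(plugin, node, 'ACTIVATE')          # registers the SNMP agent
    print node, s.callPlugin(plugin, node, 'STATUS')[1]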
This article describes how the physical hardware must be configured prior to an Exasol installation. There are two categories of settings that must be applied: one for the data nodes and one for the management node.

Data node settings:

BIOS:
- Disable EFI
- Disable C-states (maximum performance)
- Enable PXE on "CICN" Ethernet interfaces
- Enable Hyperthreading

Boot order:
- Boot from PXE, 1st NIC (VLAN CICN)

RAID:
- Controller RW cache only enabled with BBU
- All disks are configured as RAID-1 mirrors (best practice; for a different setup, ask EXASOL Support)
- Keep the default strip size
- Keep the default R/W cache ratio

LOM interface:
- Enable SOL (Serial over LAN console)

Management node settings:

BIOS:
- Disable EFI
- Disable C-states (maximum performance)
- Enable Hyperthreading

Boot order:
- Boot from disk

RAID:
- Controller RW cache only enabled with BBU
- All disks are configured as RAID-1 mirrors (best practice; for a different setup, ask EXASOL Support)
- Keep the default strip size
- Keep the default R/W cache ratio

LOM interface:
- Enable SOL (Serial over LAN console)
Background

min.io is an S3-compatible storage service (see https://min.io) that can be used as a backup destination (remote archive volume) for on-premise Exasol setups. Unfortunately, as of Exasol 6.2, there remains a minor incompatibility that requires patching the min.io server in order for Exasol to correctly recognize it. The steps below walk you through making your min.io service Exasol-compatible.

Prerequisites
- min.io is installed in a location that Exasol has access to
- Ability to reconfigure, recompile, and redeploy min.io
- Ability to add DNS aliases
- A min.io bucket and access/secret keys are set up

How to use min.io as an Exasol remote archive volume

1. Enable SSL in min.io and have it listen on port 443 (Exasol will ignore any other port specified).
2. Assuming that your min.io server is minio.yourdomain.com and it has a bucket named backups, create a DNS alias backups.minio.yourdomain.com which resolves to the same IP as minio.yourdomain.com.
3. In the min.io startup script, set the MINIO_DOMAIN ENV variable to minio.yourdomain.com. This causes min.io to extract the bucket name from the virtual host passed to it instead of from the URL path (the default).
4. In the min.io startup script, set the MINIO_REGION_NAME ENV variable to us-east-1 (or another region of your choice). This causes min.io to include that region in all HTTP response headers.
5. Check out the min.io source code from https://github.com/minio/minio.git and apply the patch below. See the repository's Dockerfile for how to rebuild it:

--- /cmd/api-headers.go
+++ /cmd/api-headers.go
@@ -51,7 +51,8 @@ func setCommonHeaders(w http.ResponseWriter) {
     // Set `x-amz-bucket-region` only if region is set on the server
     // by default minio uses an empty region.
     if region := globalServerRegion; region != "" {
-        w.Header().Set(xhttp.AmzBucketRegion, region)
+        h := strings.ToLower(xhttp.AmzBucketRegion)
+        w.Header()[h] = append(w.Header()[h], region)
     }
     w.Header().Set(xhttp.AcceptRanges, "bytes")

6. Redeploy your min.io server with the patch.
7. In the Exasol EXAoperation interface for adding a remote archive volume:
- Set the Archive URL to https://backups.minio.yourdomain.com
- Specify the access and secret keys in the username/password fields
- Specify an option of s3 (and any other applicable options)

Additional References
https://min.io
https://github.com/minio
https://docs.exasol.com/6.2/administration/aws/manage_storage/create_remote_archive_volume.htm
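After redeploying, you can sanity-check that the patched server now emits the region header under its lowercase name. This is an informal check, not part of the official procedure; the host name is the example alias from above, and it assumes a TLS certificate your Python installation trusts (the common response headers are sent even for unauthenticated requests):

import http.client

conn = http.client.HTTPSConnection("backups.minio.yourdomain.com")
conn.request("HEAD", "/")
resp = conn.getresponse()
# getheaders() returns header names with the exact casing the server sent
for name, value in resp.getheaders():
    if name.lower() == "x-amz-bucket-region":
        print(name, "=", value)  # expect the all-lowercase name and your region
conn.close()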
Certified Hardware List
The hardware certified by Exasol can be found via the link below:

Certified Hardware List

If your preferred hardware is not certified, refer to our Certification Process for more information.