Environment Management
Manage the environment around the database, such as cloud, monitoring, EXAoperation, and scalability.
Background

This article describes the calculation of the optimal (maximum) DB RAM on:

- a 4+1 system with one database (dedicated environment)
- a 4+1 system with two databases (shared environment)

The calculation of the OS memory per node is the same for both environments. Shared environments are not recommended for production systems.

Example setup: the 4+1 cluster contains four active data nodes and one standby node. Each node has 384 GiB of main memory.

How to calculate Database RAM

OS Memory per Node

It is vital for the database that enough memory remains allocatable by the OS. We recommend reserving at least 10% of the main memory on each node. This prevents the nodes from swapping under high load (many sessions).

Main Memory per Node * 0.1 = OS Memory per Node
384 * 0.1 = 38.4 -> 38 GiB

To set this value, the database needs to be shut down. In EXAoperation, go to 'Configuration > Network' - "OS Memory/Node (GiB)".

Maximum DB RAM (dedicated environment)

(Main Memory per Node - OS Memory per Node) * Number of active Nodes = Maximum DB RAM

Example: 4 data nodes with 384 GiB main memory per node and 38 GiB OS memory per node:

(384 GiB - 38 GiB) * 4 = 1384 GiB

Maximum DB RAM (shared environment)

Example:
- Database "one" on four data nodes (exa_db1)
- Database "two" on two data nodes (exa_db2)

As before, the maximum DB RAM is 1384 GiB. With two databases sharing the maximum DB RAM, we need to recalculate and redistribute it:

Maximum DB RAM / Number of Databases = Maximum DB RAM per database
1384 GiB / 2 = 692 GiB

Database "one" (exa_db1) runs on all four nodes, so it can be configured with 692 GiB DB RAM. The smaller database "two" (exa_db2) runs on only two nodes, so its share must additionally be divided by the number of data nodes it runs on (2):

Maximum DB RAM per database / Number of active Nodes = Maximum DB RAM for exa_db2
692 GiB / 2 = 346 GiB

Additional References

Sizing Considerations
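The sizing rules above can be sketched in a few lines of Python. This is a worked example under the article's assumptions (a 10% OS reserve rounded down to whole GiB); note that the formula yields 1384 GiB for four 384 GiB nodes with a 38 GiB OS reserve:

```python
def os_memory_per_node(main_memory_gib: int) -> int:
    """Reserve at least 10% of main memory for the OS, rounded down to whole GiB."""
    return main_memory_gib // 10

def max_db_ram(main_memory_gib: int, active_nodes: int) -> int:
    """Maximum DB RAM = (main memory - OS reserve) * number of active nodes."""
    usable = main_memory_gib - os_memory_per_node(main_memory_gib)
    return usable * active_nodes

# Dedicated environment: 4 active data nodes with 384 GiB each
total = max_db_ram(384, 4)   # (384 - 38) * 4 = 1384 GiB

# Shared environment: two databases split the maximum DB RAM
per_db = total // 2          # 692 GiB per database
exa_db2 = per_db // 2        # database running on 2 of 4 nodes: 346 GiB
```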
With this article, you will learn how to add and change database parameters and their values.

1. Log in to your Exasol container:

$ docker exec -it <container_name> /bin/bash

2. Inside the container, go to the /exa/etc/ folder and open the EXAConf file with a text editor of your choice:

$ cd /exa/etc
$ vim EXAConf

3. In the DB section, right above the [[JDBC]] sub-section, add a line that says Params, followed by the necessary parameters:

[DB : DB1]
Version = 6.1.5
MemSize = 6 GiB
Port = 8563
Owner = 500 : 500
Nodes = 11,12,13
NumActiveNodes = 3
DataVolume = DataVolume1
Params = -useIndexWrapper=0 -disableIndexIteratorScan=1
[[JDBC]]
BucketFS = bfsdefault
Bucket = default
Dir = drivers/jdbc
[[Oracle]]
BucketFS = bfsdefault
Bucket = default
Dir = drivers/oracle

4. Change the value of Checksum in EXAConf:

$ sed -i '/Checksum =/c\ Checksum = COMMIT' /exa/etc/EXAConf

5. Commit the changes:

$ exaconf commit

6. At this point you have two options:

6.1. Restart the container:

$ dwad_client stop-wait <database_instance>    # Stop the database instance (inside the container)
$ csctrl -d                                    # Stop the storage service (inside the container)
$ exit                                         # Exit the container
$ docker restart <container_name>              # Restart the container
$ docker exec -it <container_name> /bin/bash   # Log in to the container's BASH environment
$ dwad_client setup-print <database_instance>  # See the database parameters

...
PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -useIndexWrapper=0 -disableIndexIteratorScan=1
...

As you can see from the output above, the parameters have been added. However, rebooting the cluster can cause some downtime. To shorten the duration of your downtime, you can try the method below.
6.2. Use a configuration file to change the parameters by rebooting just the database, not the container:

$ dwad_client setup-print <database_instance> > db1.cfg  # Save the database parameters to a file
$ vim db1.cfg                                            # Edit the configuration file

When you open the file, find the line starting with PARAMS and add the parameters you need, like:

PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -useIndexWrapper=0 -disableIndexIteratorScan=1

After adding the parameters, save the file and execute the following commands (inside the container):

$ dwad_client stop-wait <database_instance>      # Stop the database instance
$ dwad_client setup <database_instance> db1.cfg  # Set up the database with the db1.cfg configuration file
$ dwad_client start-wait <database_instance>     # Start the database instance

This adds the database parameters, but the change does not persist across reboots. Adding the parameters this way shortens your downtime, but the changes aren't permanent. We therefore recommend also performing method 6.1, in case you decide to reboot sometime in the future.

7. Verify the parameters:

7.1. With dwad_client list

7.2. With dwad_client print-setup <database_instance>
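The edit in step 6.2 is a one-line change to the PARAMS line of the saved configuration file. As an illustration (the helper name and sample values are hypothetical, not part of dwad_client), the same edit could be scripted like this:

```python
def add_params(cfg_text: str, extra: str) -> str:
    """Append extra flags to the PARAMS: line of a dwad_client setup file."""
    out = []
    for line in cfg_text.splitlines():
        if line.startswith("PARAMS:"):
            # Keep the existing flags and append the new ones at the end
            line = line.rstrip() + " " + extra
        out.append(line)
    return "\n".join(out)

example = "NAME: DB1\nPARAMS: -netmask= -lockslb=1"
print(add_params(example, "-useIndexWrapper=0 -disableIndexIteratorScan=1"))
# -> NAME: DB1
#    PARAMS: -netmask= -lockslb=1 -useIndexWrapper=0 -disableIndexIteratorScan=1
```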
This article shows you how to allow internet access for the Community Edition running on VMware.
Background

Installation of Protegrity via XML-RPC.

Prerequisites

Ask at service@exasol.com for the Protegrity plugin.

How to Install Protegrity via XML-RPC

1. Upload "Plugin.Security.Protegrity-6.6.4.19.pkg" to EXAoperation:

- Log in to EXAoperation (user privilege Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

2. Connect to EXAoperation via XML-RPC (this example uses Python 2). The following code block is ready to copy and paste into a Python shell:

from xmlrpclib import Server as xmlrpc
from ssl import _create_unverified_context as ssl_context
from pprint import pprint as pp
from base64 import b64encode
import getpass

server = raw_input('Please enter IP or Hostname of License Server: ')
user = raw_input('Enter your User login: ')
password = getpass.getpass(prompt='Please enter Login Password: ')
server = xmlrpc('https://%s:%s@%s/cluster1/' % (user, password, server), context=ssl_context())

3. Show the installed plugins:

>>> pp(server.showPluginList())
['Security.Protegrity-6.6.4.19']

4. Show the plugin functions:

>>> pp(server.showPluginFunctions('Security.Protegrity-6.6.4.19'))
{'INSTALL': 'Install plugin.',
 'UPLOAD_DATA': 'Upload data directory.',
 'UNINSTALL': 'Uninstall plugin.',
 'START': 'Start pepserver.',
 'STOP': 'Stop pepserver.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).'}

5. For further usage, store the plugin name and the node list in variables:

>>> pname = 'Security.Protegrity-6.6.4.19'
>>> nlist = server.getNodeList()

6. Install the plugin:

>>> pp([[node] + server.callPlugin(pname, node, 'INSTALL', '') for node in nlist])
[['n0011', 0, ''], ['n0012', 0, ''], ['n0013', 0, ''], ['n0014', 0, '']]

7. Get the plugin status on each node:

>>> pp([[node] + server.callPlugin(pname, node, 'STATUS', '') for node in nlist])
[['n0011', 0, 'stopped'], ['n0012', 0, 'stopped'], ['n0013', 0, 'stopped'], ['n0014', 0, 'stopped']]
8. Start the plugin on each node:

>>> pp([[node] + server.callPlugin(pname, node, 'START', '') for node in nlist])
[['n0011', 0, 'started'], ['n0012', 0, 'started'], ['n0013', 0, 'started'], ['n0014', 0, 'started']]

9. Push the ESA configuration to the nodes (a server-side task). The client port (pepserver) listens on TCP 15700.
Portal registration tips.
WHAT WE'LL LEARN

This article will show you how to change your license file in your Docker Exasol environment.

HOW-TO

NOTE: $CONTAINER_EXA is a variable set before deploying an Exasol database container with persistent storage. For more information, please check our GitHub repo.

1. Ensure that your Docker container is running with persistent storage. This means that your docker run command should contain a -v statement, like the example below:

$ docker run --detach --network=host --privileged --name <container_name> -v $CONTAINER_EXA:/exa exasol/docker-db:6.1.5-d1 init-sc --node-id <node_id>

2. Copy the new license file to the $CONTAINER_EXA/etc/ folder:

$ cp /home/user/Downloads/new_license.xml $CONTAINER_EXA/etc/new_license.xml

3. Log in to your Docker container's BASH environment:

$ docker exec -it <container_name> /bin/bash

4. Go to the /exa/etc folder and rename the old license.xml file:

$ cd /exa/etc/
$ mv license.xml license.xml.old

5. Rename the new license file:

$ mv new_license.xml license.xml

6. Double-check the contents of the directory to ensure that the newer file is named license.xml:

$ ls -l
<other files>
-rw-r--r-- 1 root root 2275 Jul 15 10:13 license.xml.old
-rw-r--r-- 1 root root 1208 Jul 21 07:38 license.xml
<other files>

7. Sync the files across all nodes if you are using a multi-node cluster:

$ cos_sync_files /exa/etc/license.xml
$ cos_sync_files /exa/etc/license.xml.old

8. Stop the database and storage services:

$ dwad_client stop-wait <database_instance>
$ csctrl -d

9. Restart the container:

$ docker restart <container_name>

10.
Log in to the container and check if the proper license is installed:

$ docker exec -it <container_name> /bin/bash
$ awk '/SHLVL/ {for(i=1; i<=6; i++) {getline; print}}' /exa/logs/cored/exainit.log | tail -6

You should get an output similar to this:

[2020-07-21 09:43:50] stage0: You have following license limits:
[2020-07-21 09:43:50] stage0: >>> Database memory (GiB): 50 Main memory (RAM) usable by databases
[2020-07-21 09:43:50] stage0: >>> Database raw size (GiB): unlimited Raw Size of Databases (see Value RAW_OBJECT_SIZE in System Tables)
[2020-07-21 09:43:50] stage0: >>> Database mem size (GiB): unlimited Compressed Size of Databases (see Value MEM_OBJECT_SIZE in System Tables)
[2020-07-21 09:43:50] stage0: >>> Cluster nodes: unlimited Number of usable cluster nodes
[2020-07-21 09:43:50] stage0: >>> Expiration date: unlimited Date of license expiration

Check the values and see if they correspond to your requested license parameters.
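If you need to check several containers, the limit lines above are easy to pick apart programmatically. A minimal sketch, assuming the log format shown in the sample output (the helper name and sample string are illustrative):

```python
import re

def parse_limits(log_text: str) -> dict:
    """Collect '<name>: <value>' pairs from the '>>>' license-limit lines."""
    limits = {}
    # Each limit line looks like: "... >>> Database memory (GiB): 50 <description>"
    for m in re.finditer(r">>>\s+([^:]+):\s+(\S+)", log_text):
        limits[m.group(1).strip()] = m.group(2)
    return limits

sample = (
    "[2020-07-21 09:43:50] stage0: >>> Database memory (GiB): 50 Main memory (RAM) usable by databases\n"
    "[2020-07-21 09:43:50] stage0: >>> Expiration date: unlimited Date of license expiration\n"
)
print(parse_limits(sample))
# -> {'Database memory (GiB)': '50', 'Expiration date': 'unlimited'}
```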
This article describes the Exasol database backup process.
Background

With versions prior to 5.0.15, EXASOL cluster deployments only supported CIDR block 27.1.0.0/16 and subnet 27.1.0.0/16. It is now possible to use custom CIDR blocks, but with some restrictions, because the CIDR block is automatically managed by our cluster operating system:

- The VPC CIDR block netmask must be between /16 (255.255.0.0) and /24 (255.255.255.0).
- The first ten IP addresses of the cluster's subnet are reserved and cannot be used.

Explanation

Getting the right VPC / subnet configuration: the subnet used for installation of the EXASOL cluster is calculated from the VPC CIDR range.

1. For VPCs with 16 to 23 bit netmasks, the subnet will have a 24 bit mask. For a 24 bit VPC, the subnet will have a 26 bit mask.

VPC CIDR RANGE  -> Subnet mask
192.168.20.0/16 -> .../24
192.168.20.0/17 -> .../24
...             -> .../24
192.168.20.0/22 -> .../24
192.168.20.0/23    FORBIDDEN
192.168.20.0/24 -> .../26
192.168.20.0/25    FORBIDDEN

2. For the EXASOL subnet, the VPC's second available subnet is automatically used. The tool sipcalc (http://sipcalc.tools.uebi.net/) is helpful here.

Example 1: The VPC is 192.168.20.0/22 (255.255.252.0) -> a .../24 subnet is used (255.255.255.0). `sipcalc 192.168.20.0/24` calculates a network range of 192.168.20.0 - 192.168.20.255, which is the VPC's first subnet. => EXASOL uses the subsequent subnet, which is 192.168.21.0/24.

Example 2: The VPC is 192.168.20.0/24 (255.255.255.0) -> a .../26 subnet is used (255.255.255.192). `sipcalc 192.168.20.0/26` calculates a network range of 192.168.20.0 - 192.168.20.63, which is the VPC's first subnet. => EXASOL uses the subsequent subnet, which is 192.168.20.64/26.

3. The first 10 IP addresses of the subnet are reserved. The license server therefore gets the subnet base + 10; the other nodes follow.
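The derivation above can also be checked without sipcalc, using Python's standard ipaddress module. This is a sketch of the stated rules only (second subnet of the derived size, base + 10 for the license server; /23 and /25 VPCs rejected as forbidden), not an Exasol tool:

```python
import ipaddress

def exasol_subnet(vpc_cidr: str) -> ipaddress.IPv4Network:
    """Return the cluster subnet: the VPC's *second* subnet of the derived size."""
    vpc = ipaddress.ip_network(vpc_cidr)
    if 16 <= vpc.prefixlen <= 22:
        new_prefix = 24          # /16../22 VPC -> /24 subnet
    elif vpc.prefixlen == 24:
        new_prefix = 26          # /24 VPC -> /26 subnet
    else:
        raise ValueError("forbidden VPC netmask: /%d" % vpc.prefixlen)
    return list(vpc.subnets(new_prefix=new_prefix))[1]

def license_server_ip(vpc_cidr: str) -> ipaddress.IPv4Address:
    """The first 10 addresses are reserved; the license server gets base + 10."""
    return exasol_subnet(vpc_cidr).network_address + 10

print(exasol_subnet("192.168.20.0/22"))     # -> 192.168.21.0/24  (Example 1)
print(exasol_subnet("192.168.20.0/24"))     # -> 192.168.20.64/26 (Example 2)
print(license_server_ip("10.0.0.0/16"))     # -> 10.0.1.10
```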
This table shows some example configurations:

VPC CIDR block    Public Subnet     Gateway        License Server IP  IPMI network hosts  First additional VLAN
10.0.0.0/16       10.0.1.0/24       10.0.1.1       10.0.1.10          10.0.128.0          10.0.65.0/16
192.168.0.0/24    192.168.1.0/24    192.168.1.1    192.168.1.10       192.168.1.128       192.168.64.0/24
192.168.128.0/24  192.168.129.0/24  192.168.128.1  192.168.129.10     192.168.129.128     192.168.32.0/24
192.168.20.0/22   192.168.21.0/24   192.168.21.1   192.168.21.10      192.168.21.128      -
192.168.16.0/24   192.168.16.64/26  192.168.16.65  192.168.16.74      192.168.16.96       192.168.128.0/26

Additional References

https://docs.exasol.com/administration/aws.htm
With this article, you will learn how to add an LDAP server for your database.

1. Log in to your Exasol container:

$ docker exec -it <container_name> /bin/bash

2. Inside the container, go to the /exa/etc/ folder and open the EXAConf file with a text editor of your choice:

$ cd /exa/etc
$ vim EXAConf

3. In the DB section, right above the [[JDBC]] sub-section, add a line that says Params with the value shown below:

[DB : DB1]
Version = 6.1.5
MemSize = 6 GiB
Port = 8563
Owner = 500 : 500
Nodes = 11,12,13
NumActiveNodes = 3
DataVolume = DataVolume1
Params = -LDAPServer="ldap://<your_ldap_server.your_domain>"
[[JDBC]]
BucketFS = bfsdefault
Bucket = default
Dir = drivers/jdbc
[[Oracle]]
BucketFS = bfsdefault
Bucket = default
Dir = drivers/oracle

NOTE: You can also use ldaps instead of ldap.

4. Change the value of Checksum in EXAConf:

$ sed -i '/Checksum =/c\ Checksum = COMMIT' /exa/etc/EXAConf

5. Commit the changes:

$ exaconf commit

6. At this point you have two options:

6.1. Restart the container:

$ dwad_client stop-wait <database_instance>    # Stop the database instance (inside the container)
$ csctrl -d                                    # Stop the storage service (inside the container)
$ exit                                         # Exit the container
$ docker restart <container_name>              # Restart the container
$ docker exec -it <container_name> /bin/bash   # Log in to the container's BASH environment
$ dwad_client setup-print <database_instance>  # See the database parameters

...
PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -LDAPServer="ldap://your_ldap_server.your_domain"
...

As you can see from the output above, the parameters have been added. However, rebooting the cluster can cause some downtime.
To shorten the duration of your downtime, you can try the method below.

6.2. Use a configuration file to change the parameters by rebooting just the database, not the container:

$ dwad_client setup-print <database_instance> > db1.cfg  # Save the database parameters to a file
$ vim db1.cfg                                            # Edit the configuration file

When you open the file, find the line starting with PARAMS and add the parameter you need, like:

PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -LDAPServer="ldap://your_ldap_server.your_domain"

After adding the parameters, save the file and execute the following commands (inside the container):

$ dwad_client stop-wait <database_instance>      # Stop the database instance
$ dwad_client setup <database_instance> db1.cfg  # Set up the database with the db1.cfg configuration file
$ dwad_client start-wait <database_instance>     # Start the database instance

This adds the database parameters, but the change does not persist across reboots. Adding the parameters this way shortens your downtime, but the changes aren't permanent. We therefore recommend also performing method 6.1, in case you decide to reboot sometime in the future.

7. Verify the parameters:

7.1. With dwad_client list

7.2. With dwad_client print-setup <database_instance>
Server installation

1. Install a minimal CentOS 7.

2. Register at Splunk and download, e.g., the free version: splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

3. Install the RPM; the target directory will be /opt/splunk:

rpm -ivh splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

4. Start Splunk, accept the EULA, and enter a username and password:

/opt/splunk/bin/splunk start

5. Create an SSH port forward to access the web UI. NOTE: if /etc/hosts is configured properly and name resolution is working, no port forwarding is needed.

ssh root@HOST-IP -L8000:localhost:8000

6. Log in at https://localhost:8000 with the username and password you provided during the installation.

7. Set up an index to store data. From the web UI go to Settings > Indexes > New Index:

Name: remotelogs
Type: Events
Max Size: e.g. 20GB

Save.

8. Create a new listener to receive data. From the web UI go to Settings > Forwarding and receiving > Configure receiving > Add new:

New Receiving Port: 9700

Save, then restart Splunk:

/opt/splunk/bin/splunk restart

Client installation (Splunk Universal Forwarder)

1. Download splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm and install it via rpm; the target directory will be /opt/splunkforwarder:

rpm -ivh splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

2. Start the Splunk forwarder, accept the EULA, and enter the same username and password as for the Splunk server:

/opt/splunkforwarder/bin/splunk start

3. Set up the forward-server and monitor. Add the Splunk server as a server to receive forwarded log files (same username and password as before):

/opt/splunkforwarder/bin/splunk add forward-server HOST-IP:9700 -auth USER:PASSWORD

4. Add a log file, e.g. audit.log from auditd. This requires the log file location, the type of logs, and the index we created before:

/opt/splunkforwarder/bin/splunk add monitor /var/log/audit/audit.log -sourcetype linux_logs -index remotelogs

5. Check whether the forward server and log files have been enabled; restart the forwarder if nothing happens:

/opt/splunkforwarder/bin/splunk list monitor
Your session is invalid. Please login.
Splunk username: admin
Password:
Monitored Directories:
    $SPLUNK_HOME/var/log/splunk
        /opt/splunkforwarder/var/log/splunk/audit.log
        /opt/splunkforwarder/var/log/splunk/btool.log
        /opt/splunkforwarder/var/log/splunk/conf.log
        /opt/splunkforwarder/var/log/splunk/first_install.log
        /opt/splunkforwarder/var/log/splunk/health.log
        /opt/splunkforwarder/var/log/splunk/license_usage.log
        /opt/splunkforwarder/var/log/splunk/mongod.log
        /opt/splunkforwarder/var/log/splunk/remote_searches.log
        /opt/splunkforwarder/var/log/splunk/scheduler.log
        /opt/splunkforwarder/var/log/splunk/searchhistory.log
        /opt/splunkforwarder/var/log/splunk/splunkd-utility.log
        /opt/splunkforwarder/var/log/splunk/splunkd_access.log
        /opt/splunkforwarder/var/log/splunk/splunkd_stderr.log
        /opt/splunkforwarder/var/log/splunk/splunkd_stdout.log
        /opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log
    $SPLUNK_HOME/var/log/splunk/license_usage_summary.log
        /opt/splunkforwarder/var/log/splunk/license_usage_summary.log
    $SPLUNK_HOME/var/log/splunk/metrics.log
        /opt/splunkforwarder/var/log/splunk/metrics.log
    $SPLUNK_HOME/var/log/splunk/splunkd.log
        /opt/splunkforwarder/var/log/splunk/splunkd.log
    $SPLUNK_HOME/var/log/watchdog/watchdog.log*
        /opt/splunkforwarder/var/log/watchdog/watchdog.log
    $SPLUNK_HOME/var/run/splunk/search_telemetry/*search_telemetry.json
    $SPLUNK_HOME/var/spool/splunk/...stash_new
Monitored Files:
    $SPLUNK_HOME/etc/splunk.version
    /var/log/all.log
    /var/log/audit/audit.log

6. Check if the Splunk server is available:

/opt/splunkforwarder/bin/splunk list forward-server
Active forwards:
    10.70.0.186:9700
Configured but inactive forwards:
    None

7. Search the logs in the web UI.

Collecting Metrics

1. Download the Splunk Unix add-on splunk-add-on-for-unix-and-linux_602.tgz, unpack it, and copy it to the splunkforwarder app folder:

tar xf splunk-add-on-for-unix-and-linux_602.tgz
mv Splunk_TA_nix /opt/splunkforwarder/etc/apps/

2. Enable the metrics you want to receive by setting disabled = 0 for each metric in:

vim /opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/inputs.conf

3. Restart the Splunk forwarder:

/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk start
Background

Installation of FSC Linux agents via XML-RPC.

Prerequisites

Ask at service@exasol.com for the FSC Monitoring plugin.

How to Install FSC Linux Agents via XML-RPC

1. Upload "Plugin.Administration.FSC-7.31-16.pkg" to EXAoperation:

- Log in to EXAoperation (user privilege Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

2. Connect to EXAoperation via XML-RPC (this example uses Python 2):

>>> import xmlrpclib, pprint
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show the plugin functions:

>>> pprint.pprint(s.showPluginFunctions('Administration.FSC-7.31-16'))
{'INSTALL_AND_START': 'Install and start plugin.',
 'UNINSTALL': 'Uninstall plugin.',
 'START': 'Start FSC and SNMP services.',
 'STOP': 'Stop FSC and SNMP services.',
 'RESTART': 'Restart FSC and SNMP services.',
 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
 'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).'}

4. Install FSC and check the return code:

>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16','n10','INSTALL_AND_START')
>>> ret
0

5. Upload snmpd.conf (an example is attached to this article):

>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Start FSC and check the status:

>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'RESTART')
>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'STATUS')
>>> ret
[0, 'started']

7. Repeat steps 4-6 for each node.

Additional Notes

For monitoring the FSC agents, go to http://support.ts.fujitsu.com/content/QuicksearchResult.asp and search for "ServerView Integration Pack for NAGIOS".
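The examples above use Python 2 (xmlrpclib, raw_input). On current systems, the same connection can be made with Python 3's xmlrpc.client; a sketch with placeholder host and credentials (the cluster_url helper is illustrative, not part of EXAoperation):

```python
import ssl
import xmlrpc.client
from urllib.parse import quote

def cluster_url(user: str, password: str, host: str) -> str:
    # URL-encode the credentials so special characters survive in the URL
    return "https://%s:%s@%s/cluster1" % (quote(user, safe=""), quote(password, safe=""), host)

url = cluster_url("user", "password", "license-server")
# An unverified SSL context, as in the Protegrity example above
server = xmlrpc.client.ServerProxy(url, context=ssl._create_unverified_context())
# No network connection is made until a method such as
# server.showPluginFunctions('Administration.FSC-7.31-16') is called.
```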
For version 6.0 and newer, you can add new disks without reinstalling the nodes by adding them to a new partition. This article describes how to accomplish this task.

1. Shut down all databases. Navigate to "EXASolution", select the database, and click "Shutdown". Make sure that no backup or restore process is running when you shut down the databases.

2. Shut down EXAStorage. Navigate to "EXAStorage" in the menu and click "Shutdown Storage Service".

3. Hot-plug the disk device(s). This has to be done using your virtualization software. If you are using physical hardware, add the new disks, boot the node, and wait until the boot process finishes (this is necessary to continue).

4. Open the disk overview for a node in EXAoperation. If the "Add Storage disk" button does not show up, the node has not been activated yet and still remains in the "To install" state. If the node has been installed, set the "active" flag on the node.

5. Add the disk devices to the new EXAStorage partition. Press the "Add" button and choose the newly hot-plugged disk device from the list of currently unused devices. When adding multiple disk devices, repeat this procedure for each device. Please note that multiple disk devices will always be assembled as RAID-0 in this process. Press the "Add" button again afterward.

6. Reboot the cluster node using EXAoperation and wait until the boot process is finished.

7. Start EXAStorage and use the newly added devices as a new partition (e.g. EXAStorage -> n0011 -> select unused disk devices -> "Add devices"). Please note that already existing volumes cannot use this disk; however, the disk can be used for new data/archive volumes.
This article explains how to set up a new BucketFS bucket.
Certified Hardware List
The hardware certified by Exasol can be found in the link below:

Certified Hardware List

If your preferred hardware is not certified, refer to our Certification Process for more information.