Environment Management
Manage the environment around the database, such as Cloud, Monitoring, EXAoperation, and scalability.
This article describes the process Exasol goes through when certifying hardware. 
View full article
EXASOL offers a fully preconfigured mobile test system in the scope of a proof of concept to showcase Exasol's capabilities. To ensure a smooth start and execution of the proof of concept, please take note of the following technical requirements and prerequisites for operating a mobile trial cluster.

Chassis
- Dimensions (w/h/d): flight case on wheels, approx. 70 cm / 100 cm / 110 cm
- Weight: approx. 250 kg (when fully equipped)

Power Supply
- 2x 230 V European "Schuko" plugs
- Total power consumption of around 3 kW to 3.5 kW for smaller configurations, or 2x 3 kW to 3.5 kW for larger mobile test systems
- 16 A fuse protection per relevant electric circuit

Cooling
- Please ensure that the generated heat can be compensated for (room size, air conditioning).
- If the mobile trial cluster is not located in a data center, please consider the noise generated by the machines.

Network
- Bandwidth: 10 Gigabit Ethernet is recommended; 1 Gigabit and fiber optics also work. Please contact us in the latter case, as an adapter is required.
- IP addresses: Depending on the number of servers in the test system, please send us 6 or 11 IP addresses of a coherent network in advance, ideally consecutive addresses (example for a system with 6 servers: 192.168.1.10..15/24). If a gateway is required, please also send its address. Optionally, include the internal addresses of NTP and DNS servers.
- Firewall: Please make sure that the Exasol servers can be reached from the workstations and the relevant servers (ETL, BI, …) via port 8563 (TCP). For the administration console, access via port 80 and/or 443 is needed. The drivers required to access the database must be installed on the client machines. (A minimal connectivity check is sketched after this list.)

Data Migration
- If data is to be loaded from another database, please make sure that you have the connection parameters and that the source database can be reached by Exasol.

Remote Maintenance
- Optionally, a remote connection from Exasol to the test system via VPN or SSH tunnel can be helpful for providing remote support during the proof of concept.

Transport
- Delivery: The test system will be delivered by a forwarding agent. Please make sure that the consignment can be accepted on the appointed date and that the destination is at ground level or can be reached via a sufficiently sized elevator. Please name a contact person for the forwarding agent.
- Pick-up: The test system will be picked up by a forwarding agent. Please make sure that the system is ready for pick-up on the appointed date.
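The firewall requirements above can be verified from a client workstation before the proof of concept starts. The following is a minimal sketch in Python; the node addresses are placeholders and must be replaced with the IP addresses assigned to the test system, and the ports are the ones listed above (8563 for the database, 80/443 for the administration console).

import socket

# Placeholder addresses: replace with the IP addresses assigned to the mobile test system.
nodes = ["192.168.1.10", "192.168.1.11"]
ports = [8563, 443, 80]

for host in nodes:
    for port in ports:
        try:
            # Try to open a TCP connection with a short timeout.
            sock = socket.create_connection((host, port), timeout=5)
            sock.close()
            print("%s:%d reachable" % (host, port))
        except (socket.timeout, socket.error) as exc:
            print("%s:%d NOT reachable (%s)" % (host, port, exc))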
View full article
For version 6.0 and newer, you can add new disks without reinstalling the nodes by adding them to a new partition. This article describes how to accomplish this task.

1. Shut down all databases: Navigate to "EXASolution", select the database, and click "Shutdown". Please make sure that no backup or restore process is running when you shut down the databases.

2. Shut down EXAStorage: Navigate to "EXAStorage" in the menu and click "Shutdown Storage Service".

3. Hot-plug the disk device(s): This has to be done using your virtualization software. If you are using physical hardware, add the new disks, boot the node, and wait until the boot process finishes (this is necessary in order to continue).

4. Open the disk overview for a node in EXAoperation: If the "Add Storage disk" button does not show up, the node has not been activated yet and still remains in the "To install" state. If the node has been installed, set the "active" flag on the node.

5. Add disk devices to the new EXAStorage partition: Press the "Add" button and choose the newly hot-plugged disk device from the list of currently unused devices. When adding multiple disk devices, repeat this procedure for each device. Please note that multiple disk devices will always be assembled as RAID 0 in this process. Press the "Add" button again afterward.

6. Reboot the cluster node using EXAoperation: Reboot the cluster node and wait until the boot process has finished.

7. Start EXAStorage and use the newly added devices as a new partition (e.g. EXAStorage -> n0011 -> Select unused disk devices -> "Add devices"). Please note that already existing volumes cannot use this disk; however, the disk can be used for new data/archive volumes.
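The shutdown steps (1 and 2) can also be scripted against the EXAoperation XML-RPC interface used elsewhere in this section. The sketch below only illustrates the idea: the URL and credentials are placeholders, and the method names getDatabaseList() and stopDatabase() are assumptions about the XML-RPC interface that should be verified against your EXAoperation version before use.

import xmlrpclib

# Placeholder URL and credentials; adjust to your cluster.
s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

# Assumption: getDatabaseList() returns the names of the databases known to EXAoperation.
for db_name in s.getDatabaseList():
    # Assumption: each database is addressed as its own XML-RPC object under "/db_<name>"
    # and offers a stopDatabase() call; verify both against your EXAoperation version.
    db = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1/db_" + db_name)
    db.stopDatabase()
    print("%s stopped" % db_name)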
View full article
How to install SuperDoctor for SuperMicro Server via XML-RPC

Step 1: Upload "Plugin.Administration.SuperDoctor-5.5.0-1.0.2-2018-08-21.pkg" to EXAoperation
- Log in to EXAoperation (user privilege: Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

Step 2: Connect to EXAoperation via XML-RPC (this example uses Python)
>>> import xmlrpclib, pprint, base64
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

Step 3: Show the current plugin version and the plugin functions
>>> pprint.pprint(s.showPluginList())
['Administration.SuperDoctor-5.5.0-1.0.2']
>>> pprint.pprint(s.showPluginFunctions('Administration.SuperDoctor-5.5.0-1.0.2'))
{'ACTIVATE': 'Activate this plugin.',
 'DEACTIVATE': 'Deactivate this plugin.',
 'GET_SNMP_CONFIG': 'Get snmp conf',
 'INSTALL': 'Install this plugin.',
 'PUT_SNMP_CONFIG': 'Put snmp conf',
 'STATUS': 'Check service status.',
 'UNINSTALL': 'Uninstall this plugin.'}

Step 4a: Install the plugin on a node (here: n17) with the default configuration
>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','INSTALL')
>>> ret
'Archive: /usr/opt/EXAplugins/Administration.SuperDoctor-5.5.0-1.0.2/packages/SD5_5.5.0_build.784_linux.zip\n inflating: /tmp/SuperMicro/ReleaseNote.txt \n inflating: /tmp/SuperMicro/SSM_MIB.zip \n inflating: /tmp/SuperMicro/SuperDoctor5Installer_5.5.0_build.784_linux_x64_20170511162151.bin \n inflating: /tmp/SuperMicro/SuperDoctor5_UserGuide.pdf \n inflating: /tmp/SuperMicro/crc32.txt \n inflating: /tmp/SuperMicro/installer_agent.properties '

Step 4b: Alternatively, install the plugin with a custom installer_agent.properties
>>> config = base64.b64encode(open('/path/to/installer_agent.properties').read())
>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','INSTALL', config)
>>> ret

Step 5: Activate the plugin
>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','ACTIVATE')
>>> ret
'Stopping snmpd: [ OK ]\nStarting snmpd: [ OK ]'

Step 6: Deactivate the plugin (if needed). The line "pass .1.3.6.1.4.1.10876 /opt/Supermicro/SuperDoctor5/libs/native/snmpagent" will be removed from /etc/snmp/snmpd.conf and snmpd will be restarted.
>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','DEACTIVATE')
>>> ret
'Stopping snmpd: [ OK ]\nStarting snmpd: [ OK ]\nDeactived'

Step 7: Download the current SNMP configuration
>>> f = open('/path/to/snmpd.conf','w')
>>> f.write(s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17', 'GET_SNMP_CONFIG')[1])
>>> f.close()

Step 8: Upload a modified SNMP configuration
>>> upload = base64.b64encode(open('/path/to/snmpd.conf').read())
>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2', 'n17', 'PUT_SNMP_CONFIG', upload)
>>> ret
'Reloading snmpd: [ OK ]'

Step 9: Activate the plugin again
>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','ACTIVATE')
>>> ret
'Stopping snmpd: [ OK ]\nStarting snmpd: [ OK ]'

Step 10: Check the service status
>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','STATUS')
>>> ret
'snmpd status: snmpd (pid 3711) is running...\nsuperdoctor 5 status: SuperDoctor 5 is running (45943).'

Step 11: Uninstall the plugin
>>> sts, ret = s.callPlugin('Administration.SuperDoctor-5.5.0-1.0.2','n17','UNINSTALL')
>>> ret
'Uninstalled'
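To roll the plugin out to more than one node, the calls shown in steps 4a, 5, and 10 can be wrapped in a small loop. This is a minimal sketch; the URL, credentials, and node list are placeholders for your environment.

import xmlrpclib

PLUGIN = 'Administration.SuperDoctor-5.5.0-1.0.2'
# Placeholder URL and credentials; adjust to your cluster.
s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

# Placeholder node list; adjust to the data nodes of your cluster.
for node in ['n17', 'n18', 'n19']:
    # Install, activate, and check the plugin on each node in turn.
    sts, ret = s.callPlugin(PLUGIN, node, 'INSTALL')
    print("%s INSTALL -> %s" % (node, sts))
    sts, ret = s.callPlugin(PLUGIN, node, 'ACTIVATE')
    print("%s ACTIVATE -> %s" % (node, sts))
    sts, ret = s.callPlugin(PLUGIN, node, 'STATUS')
    print("%s STATUS -> %s" % (node, ret))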
View full article
This article describes how the physical hardware must be configured prior to an Exasol installation. There are two categories of settings: those for the data nodes and those for the management node.

Data node settings:

BIOS:
- Disable EFI
- Disable C-states (maximum performance)
- Enable PXE on the "CICN" Ethernet interfaces
- Enable Hyperthreading

Boot order:
- Boot from PXE, 1st NIC (VLAN CICN)

RAID:
- Enable the controller R/W cache only with a BBU
- Configure all disks as RAID-1 mirrors (best practice; for a different setup, ask EXASOL Support)
- Keep the default strip size
- Keep the default R/W cache ratio

LOM interface:
- Enable SOL (Serial over LAN console)

Management node settings:

BIOS:
- Disable EFI
- Disable C-states (maximum performance)
- Enable Hyperthreading

Boot order:
- Boot from disk

RAID:
- Enable the controller R/W cache only with a BBU
- Configure all disks as RAID-1 mirrors (best practice; for a different setup, ask EXASOL Support)
- Keep the default strip size
- Keep the default R/W cache ratio

LOM interface:
- Enable SOL (Serial over LAN console)
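The SOL (Serial over LAN) console enabled in the LOM settings above can be checked from an administration workstation once the BMCs are reachable. The following is a minimal sketch in Python that calls ipmitool for each node; the BMC addresses, user, and password are placeholders for your environment.

import subprocess

# Placeholder BMC addresses and credentials; adjust to your environment.
bmc_hosts = ["10.0.0.101", "10.0.0.102"]
user = "ADMIN"
password = "changeme"

for bmc in bmc_hosts:
    # "ipmitool sol info" prints the current Serial-over-LAN configuration of the BMC.
    cmd = ["ipmitool", "-I", "lanplus", "-H", bmc, "-U", user, "-P", password, "sol", "info"]
    print("== %s ==" % bmc)
    subprocess.call(cmd)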
View full article
This article describes how to install Dell's OpenManage Server Administrator (OMSA) via XML-RPC.

1. Upload "Plugin.Administration.DELL-OpenManage-8.1.0.pkg" to EXAoperation
- Log in to EXAoperation (user privilege required: Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

2. Connect to EXAoperation via XML-RPC (this example uses Python)
>>> import xmlrpclib, pprint
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show the plugin functions
>>> pprint.pprint(s.showPluginFunctions('Administration.DELL-OpenManage-8.1.0'))
{'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
 'INSTALL_AND_START': 'Install and start plugin.',
 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
 'RESTART': 'Restart HP and SNMP services.',
 'START': 'Start HP and SNMP services.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).',
 'STOP': 'Stop HP and SNMP services.',
 'UNINSTALL': 'Uninstall plugin.'}

4. Install Dell OMSA and check the return code
>>> sts, ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0','n10','INSTALL_AND_START')
>>> ret
0

5. Upload snmpd.conf (an example is attached to this article)
>>> sts, ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Restart OMSA and check the status
>>> ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0', 'n10', 'RESTART')
>>> ret
[0, '\nShutting down DSM SA Shared Services: [ OK ]\n\n\nShutting down DSM SA Connection Service: [ OK ]\n\n\nStopping Systems Management Data Engine:\nStopping dsm_sa_snmpd: [ OK ]\nStopping dsm_sa_eventmgrd: [ OK ]\nStopping dsm_sa_datamgrd: [ OK ]\nStopping Systems Management Device Drivers:\nStopping dell_rbu:[ OK ]\nStarting Systems Management Device Drivers:\nStarting dell_rbu:[ OK ]\nStarting ipmi driver: \nAlready started[ OK ]\nStarting Systems Management Data Engine:\nStarting dsm_sa_datamgrd: [ OK ]\nStarting dsm_sa_eventmgrd: [ OK ]\nStarting dsm_sa_snmpd: [ OK ]\nStarting DSM SA Shared Services: [ OK ]\n\ntput: No value for $TERM and no -T specified\nStarting DSM SA Connection Service: [ OK ]\n']
>>> ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0', 'n10', 'STATUS')
>>> ret
[0, 'dell_rbu (module) is running\nipmi driver is running\ndsm_sa_datamgrd (pid 760 363) is running\ndsm_sa_eventmgrd (pid 732) is running\ndsm_sa_snmpd (pid 755) is running\ndsm_om_shrsvcd (pid 804) is running\ndsm_om_connsvcd (pid 850 845) is running']

7. Repeat steps 4-6 for each node (a loop sketch is shown below).

8. For monitoring Dell OMSA, please review http://folk.uio.no/trondham/software/check_openmanage.html
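Step 7 can be automated with a small loop over the data nodes; the following is a minimal sketch based on the calls shown in steps 4 to 6 (the URL, credentials, node list, and the path to snmpd.conf are placeholders for your environment).

import xmlrpclib

PLUGIN = 'Administration.DELL-OpenManage-8.1.0'
# Placeholder URL and credentials; adjust to your cluster.
s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

# Placeholder path to the SNMP configuration that should be rolled out.
snmp_conf = open('/home/user/snmpd.conf').read()

# Placeholder node list; adjust to the data nodes of your cluster.
for node in ['n10', 'n11', 'n12']:
    sts, ret = s.callPlugin(PLUGIN, node, 'INSTALL_AND_START')
    print("%s INSTALL_AND_START -> %s" % (node, ret))
    s.callPlugin(PLUGIN, node, 'PUT_SNMP_CONFIG', snmp_conf)
    s.callPlugin(PLUGIN, node, 'RESTART')
    sts, ret = s.callPlugin(PLUGIN, node, 'STATUS')
    print("%s STATUS -> %s" % (node, ret))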
View full article