Environment Management
Manage the environment around the database, such as cloud platforms, monitoring, EXAoperation, and scalability.
This article explains how to create a VPN between your AWS cluster and the Exasol Support infrastructure.
This article explains how to create a VPN between your GCP cluster and the Exasol Support infrastructure.
Server installation
Use a minimal CentOS 7 installation. Register at Splunk and download, for example, the free version: splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

Install the RPM; the target directory will be /opt/splunk:
rpm -ivh splunk-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

Start Splunk, accept the EULA, and enter a username and password:
/opt/splunk/bin/splunk start

Create an SSH port forward to access the web UI. NOTE: if /etc/hosts is configured properly and name resolution is working, no port forwarding is needed.
ssh root@HOST-IP -L8000:localhost:8000

Log in at https://localhost:8000 with the username and password you provided during the installation.

Set up an index to store data: from the web UI go to Settings > Indexes > New Index, set Name: remotelogs, Type: Events, Max Size: e.g. 20 GB, then Save.

Create a new listener to receive data: from the web UI go to Settings > Forwarding and receiving > Configure receiving > Add new, set New Receiving Port: 9700, then Save.

Restart Splunk:
/opt/splunk/bin/splunk restart

Client installation (Splunk Universal Forwarder)
Download splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm and install it via rpm; the target directory will be /opt/splunkforwarder:
rpm -ivh splunkforwarder-7.3.1-bd63e13aa157-linux-2.6-x86_64.rpm

Start the Splunk Forwarder, accept the EULA, and enter the same username and password as for the Splunk server:
/opt/splunkforwarder/bin/splunk start

Set up the forward-server and monitor
Add the Splunk server as a server to receive forwarded log files (same username and password as before):
/opt/splunkforwarder/bin/splunk add forward-server HOST-IP:9700 -auth USER:PASSWORD

Add a log file, e.g. audit.log from auditd. This requires the log file location, the type of logs, and the index created before:
/opt/splunkforwarder/bin/splunk add monitor /var/log/audit/audit.log -sourcetype linux_logs -index remotelogs

Check whether the forward-server and log files have been enabled; restart the forwarder if nothing happens:
/opt/splunkforwarder/bin/splunk list monitor
Splunk username: admin
Password:
Monitored Directories:
    $SPLUNK_HOME/var/log/splunk
        /opt/splunkforwarder/var/log/splunk/audit.log
        /opt/splunkforwarder/var/log/splunk/btool.log
        /opt/splunkforwarder/var/log/splunk/conf.log
        /opt/splunkforwarder/var/log/splunk/first_install.log
        /opt/splunkforwarder/var/log/splunk/health.log
        /opt/splunkforwarder/var/log/splunk/license_usage.log
        /opt/splunkforwarder/var/log/splunk/mongod.log
        /opt/splunkforwarder/var/log/splunk/remote_searches.log
        /opt/splunkforwarder/var/log/splunk/scheduler.log
        /opt/splunkforwarder/var/log/splunk/searchhistory.log
        /opt/splunkforwarder/var/log/splunk/splunkd-utility.log
        /opt/splunkforwarder/var/log/splunk/splunkd_access.log
        /opt/splunkforwarder/var/log/splunk/splunkd_stderr.log
        /opt/splunkforwarder/var/log/splunk/splunkd_stdout.log
        /opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log
    $SPLUNK_HOME/var/log/splunk/license_usage_summary.log
        /opt/splunkforwarder/var/log/splunk/license_usage_summary.log
    $SPLUNK_HOME/var/log/splunk/metrics.log
        /opt/splunkforwarder/var/log/splunk/metrics.log
    $SPLUNK_HOME/var/log/splunk/splunkd.log
        /opt/splunkforwarder/var/log/splunk/splunkd.log
    $SPLUNK_HOME/var/log/watchdog/watchdog.log*
        /opt/splunkforwarder/var/log/watchdog/watchdog.log
    $SPLUNK_HOME/var/run/splunk/search_telemetry/*search_telemetry.json
    $SPLUNK_HOME/var/spool/splunk/...stash_new
Monitored Files:
    $SPLUNK_HOME/etc/splunk.version
    /var/log/all.log
    /var/log/audit/audit.log

Check whether the Splunk server is available:
/opt/splunkforwarder/bin/splunk list forward-server
Active forwards:
    10.70.0.186:9700
Configured but inactive forwards:
    None

Search the logs in the web UI.

Collecting metrics
Download the Splunk Unix Add-on (splunk-add-on-for-unix-and-linux_602.tgz), unpack it, and copy it to the splunkforwarder app folder:
tar xf splunk-add-on-for-unix-and-linux_602.tgz
mv Splunk_TA_nix /opt/splunkforwarder/etc/apps/

Enable the metrics you want to receive by setting disabled = 0 for each metric (an example stanza is shown below):
vim /opt/splunkforwarder/etc/apps/Splunk_TA_nix/default/inputs.conf

Stop and start the Splunk forwarder:
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk start
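For illustration, the add-on's inputs.conf contains one stanza per metric script. The following is a minimal sketch of an enabled stanza; the exact script name, interval, and sourcetype are assumptions and depend on the add-on version you downloaded:

[script://./bin/vmstat.sh]
# interval in seconds between runs of the metric script (assumed value)
interval = 60
sourcetype = vmstat
source = vmstat
# 0 enables the input, 1 disables it
disabled = 0

Setting disabled = 0 enables that metric input; the forwarder picks up the change after the stop/start shown above.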
Background
Installation of FSC Linux Agents via XML-RPC.

Prerequisites
Ask at service@exasol.com for the FSC Monitoring plugin.

How to Install FSC Linux Agents via XML-RPC

1. Upload "Plugin.Administration.FSC-7.31-16.pkg" to EXAoperation
Log in to EXAoperation (user privilege: Administrator) and upload the pkg via Configuration > Software > Versions > Browse > Submit.

2. Connect to EXAoperation via XML-RPC (this example uses Python)
>>> import xmlrpclib, pprint
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show plugin functions
>>> pprint.pprint(s.showPluginFunctions('Administration.FSC-7.31-16'))
{'INSTALL_AND_START': 'Install and start plugin.',
 'UNINSTALL': 'Uninstall plugin.',
 'START': 'Start FSC and SNMP services.',
 'STOP': 'Stop FSC and SNMP services.',
 'RESTART': 'Restart FSC and SNMP services.',
 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
 'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).'}

4. Install FSC and check the return code
>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16','n10','INSTALL_AND_START')
>>> ret
0

5. Upload snmpd.conf (example attached to this article)
>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Start FSC and check the status
>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'RESTART')
>>> ret
>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'STATUS')
>>> ret
[0, 'started']

7. Repeat steps 4-6 for each node (a scripted sketch follows at the end of this article).

Additional Notes
For monitoring the FSC agents, go to http://support.ts.fujitsu.com/content/QuicksearchResult.asp and search for "ServerView Integration Pack for NAGIOS".
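Step 7 can also be scripted instead of being run by hand. The following is a minimal sketch only, reusing the callPlugin calls from steps 4-6; the node list ('n10', 'n11', 'n12') is an assumption and must be replaced with the node IDs of your cluster:

>>> for node in ['n10', 'n11', 'n12']:
...     # install and start the plugin on this node
...     sts, ret = s.callPlugin('Administration.FSC-7.31-16', node, 'INSTALL_AND_START')
...     # upload the SNMP configuration and restart the services
...     sts, ret = s.callPlugin('Administration.FSC-7.31-16', node, 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())
...     sts, ret = s.callPlugin('Administration.FSC-7.31-16', node, 'RESTART')
...     # print the per-node status, e.g. [0, 'started']
...     print node, s.callPlugin('Administration.FSC-7.31-16', node, 'STATUS')
...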
This article describes the process of installing the HP Service Pack for ProLiant (HP SPP) solution using XML-RPC.

1. Upload "Plugin.Administration.HP-SPP-2014.09.0-0.pkg" to EXAoperation
Log in to EXAoperation (user privilege: Administrator) and upload the pkg via Configuration > Software > Versions > Browse > Submit.

2. Connect to EXAoperation via XML-RPC (this example uses Python)
>>> import xmlrpclib, pprint
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show plugin functions
>>> pprint.pprint(s.showPluginFunctions('Administration.HP-SPP-2014.09.0-0'))
{'GET_CERTIFICATE': 'Get content of specified certificate.',
 'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
 'INSTALL_AND_START': 'Install and start plugin.',
 'PUT_CERTIFICATE': 'Upload new certificate.',
 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
 'REMOVE_CERTIFICATE': 'Remote a specific certificate.',
 'RESTART': 'Restart HP and SNMP services.',
 'START': 'Start HP and SNMP services.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).',
 'STOP': 'Stop HP and SNMP services.',
 'UNINSTALL': 'Uninstall plugin.'}

4. Install HP SPP and check the return code
>>> sts, ret = s.callPlugin('Administration.HP-SPP-2014.09.0-0','n10','INSTALL_AND_START')
>>> ret
0

5. Upload snmpd.conf (example attached to this article)
>>> sts, ret = s.callPlugin('Administration.HP-SPP-2014.09.0-0', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Start HP SPP and check the status
>>> ret = s.callPlugin('Administration.HP-SPP-2014.09.0-0', 'n10', 'RESTART')
>>> ret
[256, '\nStopping hpsmhd: [ OK ]\n \n Shutting down NIC Agent Daemon (cmanicd): [ OK ]\n \n Shutting down Storage Event Logger (cmaeventd): [ OK ] \n Shutting down FCA agent (cmafcad): [ OK ] \n Shutting down SAS agent (cmasasd): [ OK ] \n Shutting down IDA agent (cmaidad): [ OK ] \n Shutting down IDE agent (cmaided): [ OK ] \n Shutting down SCSI agent (cmascsid): [ OK ] \n Shutting down Health agent (cmahealthd): [ OK ] \n Shutting down Standard Equipment agent (cmastdeqd): [ OK ] \n Shutting down Host agent (cmahostd): [ OK ] \n Shutting down Threshold agent (cmathreshd): [ OK ] \n Shutting down RIB agent (cmasm2d): [ OK ] \n Shutting down Performance agent (cmaperfd): [ OK ] \n Shutting down SNMP Peer (cmapeerd): [ OK ] \nStopping snmpd: [FAILED]\n Using Proliant Standard\n \tIPMI based System Health Monitor\n Shutting down Proliant Standard\n \tIPMI based System Health Monitor (hpasmlited): [ OK ] \n\nStarting hpsmhd: [ OK ]\nStarting snmpd: [FAILED]\nCould not start SNMP daemon.\nCould not restart HP services.']
>>> ret = s.callPlugin('Administration.HP-SPP-2014.09.0-0', 'n10', 'STATUS')
>>> ret
[0, 'started']

7. Repeat steps 4-6 for each node (a status-check sketch follows below).

8. For monitoring HP SPP, please review https://labs.consol.de/de/nagios/check_hpasm/index.html
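If RESTART reports that the SNMP daemon could not be started (as in the step 6 output above), checking the plugin status on every node and restarting where necessary can be scripted. The following is a minimal sketch, assuming node names 'n10' through 'n12' (replace them with your cluster's node IDs):

>>> for node in ['n10', 'n11', 'n12']:
...     # STATUS returns a pair such as [0, 'started']
...     sts, state = s.callPlugin('Administration.HP-SPP-2014.09.0-0', node, 'STATUS')
...     print node, state
...     # trigger another restart on nodes that are not running yet
...     if state != 'started':
...         s.callPlugin('Administration.HP-SPP-2014.09.0-0', node, 'RESTART')
...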
This article describes how to install Dell's OpenManage Server Administrator (OMSA) solution via XML-RPC.

1. Upload "Plugin.Administration.DELL-OpenManage-8.1.0.pkg" to EXAoperation
Log in to EXAoperation (user privilege required: Administrator) and upload the pkg via Configuration > Software > Versions > Browse > Submit.

2. Connect to EXAoperation via XML-RPC (this example uses Python)
>>> import xmlrpclib, pprint
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show plugin functions
>>> pprint.pprint(s.showPluginFunctions('Administration.DELL-OpenManage-8.1.0'))
{'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
 'INSTALL_AND_START': 'Install and start plugin.',
 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
 'RESTART': 'Restart HP and SNMP services.',
 'START': 'Start HP and SNMP services.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).',
 'STOP': 'Stop HP and SNMP services.',
 'UNINSTALL': 'Uninstall plugin.'}

4. Install DELL OMSA and check the return code
>>> sts, ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0','n10','INSTALL_AND_START')
>>> ret
0

5. Upload snmpd.conf (example attached to this article; a read-back verification sketch follows below)
>>> sts, ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Restart OMSA and check the status
>>> ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0', 'n10', 'RESTART')
>>> ret
[0, '\nShutting down DSM SA Shared Services: [ OK ]\n\n\nShutting down DSM SA Connection Service: [ OK ]\n\n\nStopping Systems Management Data Engine:\nStopping dsm_sa_snmpd: [ OK ]\nStopping dsm_sa_eventmgrd: [ OK ]\nStopping dsm_sa_datamgrd: [ OK ]\nStopping Systems Management Device Drivers:\nStopping dell_rbu:[ OK ]\nStarting Systems Management Device Drivers:\nStarting dell_rbu:[ OK ]\nStarting ipmi driver: \nAlready started[ OK ]\nStarting Systems Management Data Engine:\nStarting dsm_sa_datamgrd: [ OK ]\nStarting dsm_sa_eventmgrd: [ OK ]\nStarting dsm_sa_snmpd: [ OK ]\nStarting DSM SA Shared Services: [ OK ]\n\ntput: No value for $TERM and no -T specified\nStarting DSM SA Connection Service: [ OK ]\n']
>>> ret = s.callPlugin('Administration.DELL-OpenManage-8.1.0', 'n10', 'STATUS')
>>> ret
[0, 'dell_rbu (module) is running\nipmi driver is running\ndsm_sa_datamgrd (pid 760 363) is running\ndsm_sa_eventmgrd (pid 732) is running\ndsm_sa_snmpd (pid 755) is running\ndsm_om_shrsvcd (pid 804) is running\ndsm_om_connsvcd (pid 850 845) is running']

7. Repeat steps 4-6 for each node.

8. For monitoring DELL OMSA, please review http://folk.uio.no/trondham/software/check_openmanage.html
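To confirm that the snmpd.conf uploaded in step 5 is actually in place on a node, the current configuration can be read back with the GET_SNMP_CONFIG function listed in step 3. A minimal sketch (node 'n10' as in the examples above; the two-element return value mirrors the other calls and is an assumption):

>>> # fetch the SNMP configuration currently active on the node
>>> sts, snmp_conf = s.callPlugin('Administration.DELL-OpenManage-8.1.0', 'n10', 'GET_SNMP_CONFIG')
>>> print snmp_conf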
Certified Hardware List
The hardware certified by Exasol can be found via the link below:

Certified Hardware List

If your preferred hardware is not certified, refer to our Certification Process for more information.