Environment Management
Manage the environment around the database, such as cloud platforms, monitoring, EXAoperation, and scalability
With this article, you will learn how to add and change database parameters and their values.

1. Log in to your Exasol container:

$ docker exec -it <container_name> /bin/bash

2. Inside the container, go to the /exa/etc/ folder and open the EXAConf file with a text editor of your choice:

$ cd /exa/etc
$ vim EXAConf

3. Under the DB section, right above the [[JDBC]] sub-section, add a line that says Params with the necessary parameters:

[DB : DB1]
Version = 6.1.5
MemSize = 6 GiB
Port = 8563
Owner = 500 : 500
Nodes = 11,12,13
NumActiveNodes = 3
DataVolume = DataVolume1
Params = -useIndexWrapper=0 -disableIndexIteratorScan=1
[[JDBC]]
BucketFS = bfsdefault
Bucket = default
Dir = drivers/jdbc
[[Oracle]]
BucketFS = bfsdefault
Bucket = default
Dir = drivers/oracle

4. Change the value of Checksum in EXAConf:

$ sed -i '/Checksum =/c\ Checksum = COMMIT' /exa/etc/EXAConf

5. Commit the changes:

$ exaconf commit

6. At this point you have two options:

6.1. Restart the container:

$ dwad_client stop-wait <database_instance>    # Stop the database instance (inside the container)
$ csctrl -d                                    # Stop the storage service (inside the container)
$ exit                                         # Exit the container
$ docker restart <container_name>              # Restart the container
$ docker exec -it <container_name> /bin/bash   # Log in to the container's BASH environment
$ dwad_client setup-print <database_instance>  # See the database parameters

...
PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -useIndexWrapper=0 -disableIndexIteratorScan=1
...

As you can see from the output above, the parameters have been added. However, restarting the container causes some downtime. To shorten the downtime, you can use the method below instead.

6.2. Use a configuration file to change the parameters by restarting only the database, not the container:

$ dwad_client setup-print <database_instance> > db1.cfg  # Dump the database parameters to a file
$ vim db1.cfg                                            # Edit the configuration file

When you open the file, find the line starting with PARAMS and add the parameters you need, like:

PARAMS: -netmask= -auditing_enabled=0 -lockslb=1 -sandboxPath=/usr/opt/mountjail -cosLogErrors=0 -bucketFSConfigPath=/exa/etc/bucketfs_db.cfg -sysTZ=Europe/Berlin -etlJdbcConfigDir=/exa/data/bucketfs/bfsdefault/.dest/default/drivers/jdbc:/usr/opt/EXASuite-6/3rd-party/JDBC/@JDBCVERSION@:/usr/opt/EXASuite-6/EXASolution-6.1.5/jdbc -useIndexWrapper=0 -disableIndexIteratorScan=1

After adding the parameters, save the file and execute the following commands:

$ dwad_client stop-wait <database_instance>      # Stop the database instance (inside the container)
$ dwad_client setup <database_instance> db1.cfg  # Set up the database with the db1.cfg configuration file (inside the container)
$ dwad_client start-wait <database_instance>     # Start the database instance (inside the container)

This adds the database parameters, but the change is not persistent across reboots. Adding the parameters this way shortens your downtime, but the changes aren't permanent. After doing this, we recommend also performing method 6.1, in case you decide to reboot sometime in the future.

7. Verify the parameters:

7.1. With dwad_client list
7.2. With dwad_client print-setup <database_instance>
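If you apply method 6.2 often, the PARAMS edit can be scripted instead of done by hand in vim. The sketch below is our own illustration, not an Exasol tool: it appends any parameters that are missing from the PARAMS: line of a setup-print dump.

```python
def add_db_params(cfg_text, extra_params):
    """Append missing parameters to the PARAMS: line of a 'dwad_client setup-print' dump."""
    lines = []
    for line in cfg_text.splitlines():
        if line.startswith("PARAMS:"):
            # Only add parameters that are not already present as whole tokens
            missing = [p for p in extra_params if p not in line.split()]
            if missing:
                line = line + " " + " ".join(missing)
        lines.append(line)
    return "\n".join(lines)

cfg = "DB_NAME: DB1\nPARAMS: -netmask= -lockslb=1"
print(add_db_params(cfg, ["-useIndexWrapper=0", "-disableIndexIteratorScan=1"]))
```

You would write the result back to db1.cfg and feed it to dwad_client setup as shown in step 6.2; running it twice is harmless because existing parameters are skipped.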
Background

ConfD is the Exasol configuration and administration daemon that runs on all nodes of an Exasol cluster. It provides an interface for cluster administration and synchronizes the configuration across all nodes. In this article, you can find examples of managing an Exasol Docker cluster using XML-RPC.

Prerequisites and Notes

Please note that this interface is still under development and is not officially supported by Exasol. We will try to help you as much as possible, but can't guarantee anything.

Note: All SSL checks are disabled in these examples in order to avoid exceptions with self-signed certificates.

Note: If you get an error message like xmlrpclib.ProtocolError: <ProtocolError for root:testing@IPADDRESS:443/: 401 Unauthorized>, please log in to the cluster and reset the root password via the exaconf passwd-user command.

Note: All of the examples were tested with Exasol version 6.2.7 and Python 2.7.

Explanation & Examples

We need to create a connection and get the master IP before running any ConfD job via XML-RPC. You can find out how to do it below.

Import the required modules:

>>> import xmlrpclib, requests, urllib3, ssl
>>> urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

Get the current master IP (you can use any valid IP in the cluster for this request; in this case, 11.10.10.11 is the IP address of one of the cluster nodes):

>>> master_ip = requests.get("https://11.10.10.11:443/master", verify=False).content

Create the connection:

Note: We assume you've set the root password to "testing".
You can set a password via the exaconf passwd-user command.

>>> connection_string = "https://root:testing@%s:443/" % master_ip
>>> sslcontext = ssl._create_unverified_context()
>>> conn = xmlrpclib.ServerProxy(connection_string, context=sslcontext, allow_none=True)

The list of examples:

Example 1-2: Database jobs
Example 3: Working with archive volumes
Example 4: Cluster node jobs
Example 5: EXAStorage volume jobs
Example 6: Working with backups

Example 1: Database jobs

How to use ConfD jobs to get the database status and information about a database.

Run a job to check the status of the database:

Note: In this example we assume the database name is "DB1". Please adjust the database name as needed.

>>> conn.job_exec('db_state', {'params': {'db_name': 'DB1'}})

Output:

{'result_name': 'OK', 'result_output': 'running', 'result_desc': 'Success', 'result_jobid': '12.2', 'result_code': 0}

As you can see in the output, 'result_output' is 'running' and 'result_desc' is 'Success'. This means the database is up and running.

Note: If you want to format the JSON output, you can use the pprint module.

Run a job to get information about the database:

>>> import pprint
>>> pprint.pprint(conn.job_exec('db_info', {'params': {'db_name': 'DB1'}}))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.89',
 'result_name': 'OK',
 'result_output': {'connectible': 'Yes',
                   'connection string': '192.168.31.171:8888',
                   'info': '',
                   'name': 'DB1',
                   'nodes': {'active': ['n11'], 'failed': [], 'reserve': []},
                   'operation': 'None',
                   'persistent volume': 'DataVolume1',
                   'quota': 0,
                   'state': 'running',
                   'temporary volume': 'v0001',
                   'usage persistent': [{'host': 'n11', 'size': '10 GiB', 'used': '6.7109 MiB', 'volume id': '0'}],
                   'usage temporary': [{'host': 'n11', 'size': '1 GiB', 'used': '0 B', 'volume id': '1'}]}}

Example 2: Database jobs.
How to list, start and stop databases.

Run a job to list the databases in the cluster:

>>> pprint.pprint(conn.job_exec('db_list'))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.91',
 'result_name': 'OK',
 'result_output': ['DB1']}

Stop the DB1 database:

Run a job to stop database DB1 in the cluster:

>>> conn.job_exec('db_stop', {'params': {'db_name': 'DB1'}})
{'result_name': 'OK', 'result_desc': 'Success', 'result_jobid': '12.11', 'result_code': 0}

Run a job to confirm the state of the database DB1:

>>> conn.job_exec('db_state', {'params': {'db_name': 'DB1'}})
{'result_name': 'OK', 'result_output': 'setup', 'result_desc': 'Success', 'result_jobid': '12.12', 'result_code': 0}

Note: 'result_output': 'setup' means the database is stopped and in the "setup" state.

Run a job to start database DB1 in the cluster:

>>> conn.job_exec('db_start', {'params': {'db_name': 'DB1'}})
{'result_name': 'OK', 'result_desc': 'Success', 'result_jobid': '12.13', 'result_code': 0}

Run a job to verify that database DB1 is up and running:

>>> conn.job_exec('db_state', {'params': {'db_name': 'DB1'}})
{'result_name': 'OK', 'result_output': 'running', 'result_desc': 'Success', 'result_jobid': '12.14', 'result_code': 0}

Example 3: Working with archive volumes

Example 3.1: Add a remote archive volume to the cluster

Name: remote_volume_add
Description: Add a remote volume
Parameters: vol_type, url; optional: remote_volume_name, username, password, labels, options, owner, allowed_users; substitutes: remote_volume_id
Allowed groups: root, exaadm, exastoradm
Notes:
* 'ID' is assigned automatically if omitted (10000 + next free ID); 'ID' must be >= 10000 if specified
* 'name' may be empty (for backwards compatibility) and is generated from 'ID' in that case ("r%04i" % ('ID' - 10000))
* if 'owner' is omitted, the requesting user becomes the owner

>>> conn.job_exec('remote_volume_add', {'params': {'vol_type': 's3', 'url': 'http://bucketname.s3.amazonaws.com', 'username': 'ACCESS-KEY', 'password': 'BASE64-ENCODED-SECRET-KEY'}})
{'result_revision': 18, 'result_jobid': '11.3', 'result_output': [['r0001', 'root', '/exa/etc/remote_volumes/root.0.conf']], 'result_name': 'OK', 'result_desc': 'Success', 'result_code': 0}

Example 3.2: List all existing remote volume names

Name: remote_volume_list
Description: List all existing remote volumes
Parameters: none
Returns: a list containing all remote volume names

>>> pprint.pprint(conn.job_exec('remote_volume_list'))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.94',
 'result_name': 'OK',
 'result_output': ['RemoteVolume1']}

Example 3.3: Connection state of a given remote volume

Name: remote_volume_state
Description: Return the connection state of the given remote volume (Online / Unmounted / Connection problem)
Parameters: remote_volume_name; substitutes: remote_volume_id
Returns: a list with the connection state of the given remote volume on all nodes

>>> conn.job_exec('remote_volume_state', {'params': {'remote_volume_name': 'r0001'}})
{'result_name': 'OK', 'result_output': ['Online'], 'result_desc': 'Success', 'result_jobid': '11.10', 'result_code': 0}

Example 4: Manage cluster nodes

Example 4.1: Get the node list

Name: node_list
Description: List all cluster nodes (from EXAConf)
Parameters: none
Returns: a dict containing all cluster nodes
>>> pprint.pprint(conn.job_exec('node_list'))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.95',
 'result_name': 'OK',
 'result_output': {'11': {'disks': {'disk1': {'component': 'exastorage',
                                              'devices': ['dev.1'],
                                              'direct_io': True,
                                              'ephemeral': False,
                                              'name': 'disk1'}},
                          'docker_volume': '/exa/etc/n11',
                          'exposed_ports': [[8888, 8899], [6583, 6594]],
                          'id': '11',
                          'name': 'n11',
                          'private_ip': '192.168.31.171',
                          'private_net': '192.168.31.171/24',
                          'uuid': 'C5ED84F591574F97A337B2EC9357B68EF0EC4EDE'}}}

Example 4.2: Get the node state

Name: node_state
Description: State of all nodes (online, offline, deactivated)
Parameters: none
Returns: a list containing a string representing the current node state

>>> pprint.pprint(conn.job_exec('node_state'))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.96',
 'result_name': 'OK',
 'result_output': {'11': 'online', 'booted': {'11': 'Tue Jul 7 14:14:07 2020'}}}

Other available node jobs:

node_add: Add a node to the cluster. Parameters: priv_net; optional: id, name, pub_net, space_warn_threshold, bg_rec_limit; allowed groups: root, exaadm. Returns: int node_id
node_remove: Remove a node from the cluster. Parameters: node_id; optional: force; allowed groups: root, exaadm. Returns: none
node_info: Single node info with extended information (cored, platform, load, state). Parameters: none. Returns: see the output of cosnodeinfo
node_suspend: Suspend a node, i.e. mark it as "permanently offline". Parameters: node_id; allowed groups: root, exaadm. Marks one node as suspended
node_resume: Manually resume a suspended node. Parameters: node_id; allowed groups: root, exaadm. Unmarks one suspended node

Example 5: EXAStorage volume jobs

Example 5.1: List EXAStorage volumes

Name: st_volume_list
Description: List all existing volumes in the cluster
Parameters: none
Returns: a list of dicts

>>> pprint.pprint(conn.job_exec('st_volume_list'))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.97',
 'result_name': 'OK',
 'result_output': [{'app_io_enabled': True,
                    'block_distribution': 'vertical',
                    'block_size': 4096,
                    'bytes_per_block': 4096,
                    'group': 500,
                    'hdd_type': 'disk1',
                    'hdds_per_node': 1,
                    'id': '0',
                    'int_io_enabled': True,
                    'labels': ['#Name#DataVolume1', 'pub:DB1_persistent'],
                    'name': 'DataVolume1',
                    'nodes_list': [{'id': 11, 'unrecovered_segments': 0}],
                    'num_master_nodes': 1,
                    'owner': 500,
                    'permissions': 'rwx------',
                    'priority': 10,
                    'redundancy': 1,
                    'segments': [{'end_block': '2621439',
                                  'index': '0',
                                  'nid': 0,
                                  'partitions': [],
                                  'phys_nid': 11,
                                  'sid': '0',
                                  'start_block': '0',
                                  'state': 'ONLINE',
                                  'type': 'MASTER',
                                  'vid': '0'}],
                    'shared': True,
                    'size': '10 GiB',
                    'snapshots': [],
                    'state': 'ONLINE',
                    'stripe_size': 262144,
                    'type': 'MASTER',
                    'unlock_conditions': [],
                    'use_crc': True,
                    'users': [[30, False]],
                    'volume_nodes': [11]},
                   {'app_io_enabled': True,
                    'block_distribution': 'vertical',
                    'block_size': 4096,
                    'bytes_per_block': 4096,
                    'group': 500,
                    'hdd_type': 'disk1',
                    'hdds_per_node': 1,
                    'id': '1',
                    'int_io_enabled': True,
                    'labels': ['temporary', 'pub:DB1_temporary'],
                    'name': 'v0001',
                    'nodes_list': [{'id': 11, 'unrecovered_segments': 0}],
                    'num_master_nodes': 1,
                    'owner': 500,
                    'permissions': 'rwx------',
                    'priority': 10,
                    'redundancy': 1,
                    'segments': [{'end_block': '262143',
                                  'index': '0',
                                  'nid': 0,
                                  'partitions': [],
                                  'phys_nid': 11,
                                  'sid': '0',
                                  'start_block': '0',
                                  'state': 'ONLINE',
                                  'type': 'MASTER',
                                  'vid': '1'}],
                    'shared': True,
                    'size': '1 GiB',
                    'snapshots': [],
                    'state': 'ONLINE',
                    'stripe_size': 262144,
                    'type': 'MASTER',
                    'unlock_conditions': [],
                    'use_crc': True,
                    'users': [[30, False]],
                    'volume_nodes': [11]}]}

Example 5.2: Get information about the volume with id "vid"

Name: st_volume_info
Description: Return information about the volume with id vid
Parameters: vid

>>> pprint.pprint(conn.job_exec('st_volume_info', {'params': {'vid': 0}}))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.98',
 'result_name': 'OK',
 'result_output': {'app_io_enabled': True,
                   'block_distribution': 'vertical',
                   'block_size': '4 KiB',
                   'bytes_per_block': 4096,
                   'group': 500,
                   'hdd_type': 'disk1',
                   'hdds_per_node': 1,
                   'id': '0',
                   'int_io_enabled': True,
                   'labels': ['#Name#DataVolume1', 'pub:DB1_persistent'],
                   'name': 'DataVolume1',
                   'nodes_list': [{'id': 11, 'unrecovered_segments': 0}],
                   'num_master_nodes': 1,
                   'owner': 500,
                   'permissions': 'rwx------',
                   'priority': 10,
                   'redundancy': 1,
                   'segments': [{'end_block': '2621439',
                                 'index': '0',
                                 'nid': 0,
                                 'partitions': [],
                                 'phys_nid': 11,
                                 'sid': '0',
                                 'start_block': '0',
                                 'state': 'ONLINE',
                                 'type': 'MASTER',
                                 'vid': '0'}],
                   'shared': True,
                   'size': '10 GiB',
                   'snapshots': [],
                   'state': 'ONLINE',
                   'stripe_size': '256 KiB',
                   'type': 'MASTER',
                   'unlock_conditions': [],
                   'use_crc': True,
                   'users': [[30, False]],
                   'volume_nodes': [11]}}

Other EXAStorage volume jobs:

st_volume_info: Return information about the volume with id vid. Parameters: vid
st_volume_list: List all existing volumes in the cluster. Parameters: none
st_volume_set_io_status: Enable or disable application/internal I/O for a volume. Parameters: app_io, int_io, vid
st_volume_add_label: Add a label to the specified volume. Parameters: vid, label
st_volume_remove_label: Remove the given label from the specified volume. Parameters: vid, label
st_volume_enlarge: Enlarge a volume by blocks_per_node. Parameters: vid, blocks_per_node
st_volume_shrink: Shrink a volume by blocks_per_node. Parameters: vid, blocks_per_node
st_volume_append_node: Append nodes to a volume (storage.append_nodes(vid, node_num, node_ids) -> None). Parameters: vid, node_num, node_ids
st_volume_move_node: Move nodes of the specified volume. Parameters: vid, src_nodes, dst_nodes
st_volume_increase_redundancy: Increase volume redundancy by a delta value. Parameters: vid, delta, nodes
st_volume_decrease_redundancy: Decrease volume redundancy by a delta value. Parameters: vid, delta, nodes
st_volume_lock: Lock a volume. Parameters: vid; optional: vname
st_volume_unlock: Unlock a volume. Parameters: vid; optional: vname
st_volume_clear_data: Clear data on (a part of) the given volume. Parameters: vid, num_bytes, node_ids; optional: vname

Example 6: Working with backups

Example 6.1: Start a new backup

Name: db_backup_start
Description: Start a backup of the given database to the given volume
Parameters: db_name, backup_volume_id, level, expire_time; substitutes: backup_volume_name

>>> conn.job_exec('db_backup_start', {'params': {'db_name': 'DB1', 'backup_volume_name': 'RemoteVolume1', 'level': 0, 'expire_time': '10d'}})
{'result_name': 'OK', 'result_desc': 'Success', 'result_jobid': '11.77', 'result_code': 0}

Example 6.2: Abort a backup

Name: db_backup_abort
Description: Abort the running backup of the given database
Parameters: db_name

>>> conn.job_exec('db_backup_abort', {'params': {'db_name': 'DB1'}})
{'result_name': 'OK', 'result_desc': 'Success', 'result_jobid': '11.82', 'result_code': 0}

Example 6.3: List backups

Name: db_backup_list
Description: List available backups for the given database
Parameters: db_name

>>> pprint.pprint(conn.job_exec('db_backup_list', {'params': {'db_name': 'DB1'}}))
{'result_code': 0,
 'result_desc': 'Success',
 'result_jobid': '11.99',
 'result_name': 'OK',
 'result_output': [{'bid': 11,
                    'comment': '',
                    'dependencies': '-',
                    'expire': '',
                    'expire_alterable': '10001 DB1/id_11/level_0',
                    'expired': False,
                    'id': '10001 DB1/id_11/level_0/node_0/backup_202007071405 DB1',
                    'last_item': True,
                    'level': 0,
                    'path': 'DB1/id_11/level_0/node_0/backup_202007071405',
                    'system': 'DB1',
                    'timestamp': '2020-07-07 14:05',
                    'ts': '202007071405',
                    'usable': True,
                    'usage': '0.001 GiB',
                    'volume': 'RemoteVolume1'}]}

Other jobs to manage backups:

db_backups_delete: Delete given backups of a given database. Parameters: db_name, backup list (as returned by db_backup_list())
db_backup_change_expiration: Change the expiration time of the given backup files. Parameters: backup volume ID; backup_files (prefix of the backup files, like exa_db1/id_1/level_0); expire_time (timestamp in seconds since the Epoch on which the backup should expire)
db_backup_delete_unusable: Delete all unusable backups for a given database. Parameters: db_name
db_restore: Restore a given database from a given backup. Parameters: db_name, backup ID, restore type ('blocking' | 'nonblocking' | 'virtual access')
db_backup_add_schedule: Add a backup schedule to an existing database. Parameters: db_name, backup_name, volume, level, expire, minute, hour, day, month, weekday, enabled. Notes: 'level' must be an int; 'expire' is a string (use common/util.str2sec to convert); 'backup_name' is a string (unique within a DB)
db_backup_remove_schedule: Remove an existing backup schedule. Parameters: db_name, backup_name
db_backup_modify_schedule: Modify an existing backup schedule. Parameters: db_name, backup_name; optional: hour, minute, day, month, weekday, enabled

We will continue to add more examples and options to this article.

Additional References

https://github.com/EXASOL/docker-db
https://github.com/exasol/exaoperation-xmlrpc

You can find another article about deploying an Exasol database as a Docker image at https://community.exasol.com/t5/environment-management/how-to-deploy-a-single-node-exasol-database-as-a-docker-image/ta-p/921
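The examples in this article use Python 2's xmlrpclib. As a closing sketch, here are small Python 3 equivalents (the module is xmlrpc.client there). The wait_for_db_state and usable_backups helpers are our own convenience functions built on the job outputs shown above, not an official Exasol API; host, user, and password are the placeholder values from this article.

```python
import ssl
import time
from xmlrpc.client import ServerProxy

def confd_connection(user, password, host, port=443):
    # Same connection setup as above, Python 3 flavour; certificate
    # verification is skipped because the cluster uses a self-signed cert.
    url = "https://%s:%s@%s:%d/" % (user, password, host, port)
    ctx = ssl._create_unverified_context()
    return ServerProxy(url, context=ctx, allow_none=True)

def wait_for_db_state(conn, db_name, target, timeout=60, interval=1):
    # Poll the 'db_state' job until the database reports the target state.
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = conn.job_exec('db_state', {'params': {'db_name': db_name}})
        if result.get('result_output') == target:
            return True
        time.sleep(interval)
    return False

def usable_backups(backup_list_result):
    # Filter a 'db_backup_list' result down to usable, non-expired backup ids.
    return [b['id'] for b in backup_list_result['result_output']
            if b.get('usable') and not b.get('expired')]
```

For example, after conn.job_exec('db_start', ...), wait_for_db_state(conn, 'DB1', 'running') blocks until the database is up (no request is sent when the ServerProxy itself is created).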
Background

This article guides you through the steps required to configure automatic time synchronization via NTP servers on EXASolution clusters.

Prerequisites

If the time shift is greater than one hour, it is required to set the time manually first, as described in "Manually setting time via EXAoperation". No downtime is needed if the time shift is less than one hour.

⚠️ Please note that:

The cluster nodes constantly exchange configuration and vitality information and depend on proper time synchronization. While it is possible to manually set the time on EXASolution clusters, it is highly recommended to supply NTP servers for time synchronization.

The tasks performed in EXAoperation require a user with at least "Administrator" privileges.

Please ensure that the NTP servers provided to EXAoperation are connectible from the cluster (port filtering, firewalling) and can be addressed by name or IP address (i.e. hostnames must be resolvable through DNS).

Procedure

1.1 Check the gap between the currently configured time and the actual time

Open 'Services > Monitoring' and check the time value displayed there.
Open 'Configuration > Network' and check the value in the field "Time Zone".
Make a rough estimation of the mismatch.

⚠️ If the mismatch is greater than one hour, please set the time manually, as described in "Manually setting time via EXAoperation".

1.2 Configure the NTP server

Open 'Configuration > Network'
Click on "Edit"
Add the IP addresses of the NTP server(s)
Apply the new configuration

1.3 Synchronize the time on the cluster

Open 'Services > Monitoring'
Click on "Synchronize Time"

The cluster will now constantly synchronize its time with the configured NTP servers.
Background

Enlarge EXAStorage disk(s) after changing the disk size of the EC2 instances.

Prerequisites

To complete these steps, you need access to the AWS Management Console and the permissions to perform these actions in EXAoperation. Please ensure you have a valid backup before proceeding. The approach below works only with a cluster installation.

How to enlarge disk space in AWS

1. Stop all databases and stop EXAStorage in EXAoperation.
2. Stop your EC2 instances, except the license node (ensure they don't get terminated on shutdown; check the shutdown behavior: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html).
3. Modify the disk in the AWS console (Select Volume -> Actions -> Modify -> Enter the new size -> Click Modify).
4. Ensure the Storage disk size is set to "Rest" in the EXAoperation node settings. If d03_storage/d04_storage is not set to "Rest", set the INSTALL flag for all nodes, adjust the setting, and set the ACTIVE flag for all nodes again; otherwise the nodes will be reinstalled during boot (data loss)!
5. Start the instances.
6. Start EXAStorage.
7. Enlarge each node device using the "Enlarge" button in EXAoperation/EXAStorage/n00xx/h000x/.
8. Restart the database.

Additional References

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html
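Step 3 can also be performed with the AWS CLI instead of the console. The sketch below only assembles the command string (the volume ID and size are placeholder values, not from this article), so it can be reviewed before running:

```python
def modify_volume_cmd(volume_id, new_size_gib):
    # Builds the AWS CLI call that enlarges an EBS volume (step 3 above);
    # print and review it before executing it against your account.
    return "aws ec2 modify-volume --volume-id %s --size %d" % (volume_id, new_size_gib)

print(modify_volume_cmd("vol-0123456789abcdef0", 200))
```

Run the printed command only after the databases and EXAStorage are stopped, as described in steps 1 and 2.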
Synopsis

This article depicts the steps required to start a cluster when all nodes are powered off, and how to shut a cluster down, using the EXAoperation XML-RPC interface. The Python snippets are mere examples for the usage of the XML-RPC function calls and are provided as-is. Please refer to the EXAoperation manual for details and further information on XML-RPC.

Alphabetical list of referenced XML-RPC calls

callPlugin (Cluster): Execute a call to an EXAoperation plugin
getEXAoperationMaster (Cluster): Return the node of the current EXAoperation master node
getDatabaseConnectionState (Database instance): Get the connection state of an EXASolution instance
getDatabaseConnectionString (Database instance): Return the connection string of an EXASolution instance as used by EXAplus and the EXASolution drivers
getDatabaseList (Cluster): List all database instances defined on the cluster
getDatabaseOperation (Database instance): Get the current operation of an EXASolution instance
getDatabaseState (Database instance): Get the runtime state of an EXASolution instance
getHardwareInformation (Cluster): Report information about your system's hardware as provided by dmidecode
getNodeList (Cluster): List all defined cluster nodes except for license server(s)
getServiceState (Cluster): List the cluster services and their current runtime status
logEntries (Logservice): Fetch messages collected by a preconfigured EXAoperation logservice
shutdownNode (Cluster): Shut down (and power off) a cluster node
startDatabase (Database instance): Start an EXASolution instance
startEXAStorage (Storage service): Start the EXAStorage service
startupNode (Cluster): Cold-start a cluster node
stopDatabase (Database instance): Stop an EXASolution instance
stopEXAStorage (Storage service): Stop the EXAStorage service

Establishing the connection to EXAoperation

To send XML-RPC requests to EXAoperation, please connect to the EXAoperation HTTP or HTTPS listener and provide the base URL matching the context of a function call, as described in the EXAoperation manual (chapter "XML-RPC interface") and listed in the table above. The code examples in this article are written in Python (tested in versions 2.7 and 3.4).

import sys
if sys.version_info[0] > 2:
    # Importing the XML-RPC library in python 3
    from xmlrpc.client import ServerProxy
else:
    # Importing the XML-RPC library in python 2
    from xmlrpclib import ServerProxy

# define the EXAoperation url
cluster_url = "https://user:password@license-server/cluster1"
# create a handle to the XML-RPC interface
cluster = ServerProxy(cluster_url)

Startup of a cluster

1. Power on the license server and wait for EXAoperation to start

License servers are the only nodes able to boot from the local hard disk. All other (database/compute) nodes receive their boot images via PXE. Hence, you need to have at least one license server up and running to kick-start the rest of the cluster. Physically power on the license server and wait until the EXAoperation interfaces are connectible.

cluster_url = "https://user:password@license-server/cluster1"
while True:
    try:
        cluster = ServerProxy(cluster_url)
        if cluster.getNodeList():
            print("connected\n")
            break
    except:
        continue

2. Start the database/compute nodes

Please note that the option to power on the database nodes using startupNode() is only usable if the nodes are equipped with an out-of-band management interface (like HP iLO or Dell iDRAC) and if this interface is configured in EXAoperation. Virtualized environments (such as vSphere) provide means to automate the startup of servers on a sideband channel.

for node in cluster.getNodeList():
    cluster.startupNode(node)

The function getNodeList returns the list of database nodes currently configured in EXAoperation, but it does not provide information about their availability in the cluster. You may check if a node is online by querying the node's hardware inventory.
for node in cluster.getNodeList():
    if 'dmidecode' in cluster.getHardwareInformation(node):
        print("node {} is online\n".format(node))
    else:
        print("node {} is offline\n".format(node))

The boot process itself can be monitored by following the messages in an appropriate logservice. Look for messages like 'Boot process finished after XXX seconds' for every node.

logservice_url = "https://user:password@license-server/cluster1/logservice1"
logservice = ServerProxy(logservice_url)
logservice.logEntries()

It is vital that all cluster nodes are up and running before you proceed with the next steps.

3. Start the EXAStorage service

EXAStorage provides volumes as the persistence layer for EXASolution databases. This service does not start automatically on boot. The startEXAStorage function returns 'OK' on success or an exception in case of a failure.

cluster_url = "https://user:password@license-server/cluster1"
storage_url = "https://user:password@license-server/cluster1/storage"
cluster = ServerProxy(cluster_url)
storage = ServerProxy(storage_url)
# start the Storage service
storage.startEXAStorage()
# check the runtime state of all services
cluster.getServiceState()

The getServiceState call returns a list of all cluster services. Ensure that all of them indicate the runtime state 'OK' before you proceed.

[['Loggingd', 'OK'], ['Lockd', 'OK'], ['Storaged', 'OK'], ['DWAd', 'OK']]

4. Start the EXASolution instances

Iterate over the EXASolution instances and start them:

for db in cluster.getDatabaseList():
    instance_url = "https://user:password@license-server/cluster1/db_{}".format(db)
    instance = ServerProxy(instance_url)
    instance.startDatabase()
    while True:
        if 'Yes' == instance.getDatabaseConnectionState():
            print("database {} is accepting connections at {}\n".format(
                db, instance.getDatabaseConnectionString()))
            break

Again, you may monitor the database startup process by following an appropriate logservice.
Wait for messages indicating that the given database is accepting connections.

5. Start services from EXAoperation plugins

Some third-party plugins for EXAoperation may require further attention. This example shows how to conditionally start the VMware Tools daemon.

plugin = 'Administration.vmware-tools'
# Restart the service on the license server
# to bring it into the correct PID namespace
cluster.callPlugin(plugin, 'n0010', 'STOP')
cluster.callPlugin(plugin, 'n0010', 'START')
for node in cluster.getNodeList():
    if 'vmtoolsd is running' not in cluster.callPlugin(plugin, node, 'STATUS')[1]:
        cluster.callPlugin(plugin, node, 'START')

Shutdown of a cluster

The shutdown of a cluster includes all actions taken for the startup, in reverse order. To prevent unwanted effects and possible data loss, it's advisable to perform additional checks on running service operations.

Example:

license_server_id = "n0010"
exaoperation_master = cluster.getEXAoperationMaster()
if exaoperation_master != license_server_id:
    print("node {} is the current EXAoperation master but it should be {}\n".format(
        exaoperation_master, license_server_id))

If the license server is not the EXAoperation master node, please log into EXAoperation and move EXAoperation to the license server before you continue.

1. Shutdown of the EXASolution instances

Iterate over the EXASolution instances, review their operational state, and stop them.
for db in cluster.getDatabaseList():
    instance_url = "https://user:password@license-server/cluster1/db_{}".format(db)
    instance = ServerProxy(instance_url)
    state = instance.getDatabaseState()
    if 'running' == state:
        operation = instance.getDatabaseOperation()
        if 'None' == operation:
            instance.stopDatabase()
            while True:
                if 'setup' == instance.getDatabaseState():
                    print("database {} stopped\n".format(db))
                    break
        else:
            print("Database {} is currently in operation state {}\n".format(db, operation))
    else:
        print("Database {} is currently in runtime state {}\n".format(db, state))

2. Shutdown of the EXAStorage service

Please assure yourself that all databases are shut down properly before stopping EXAStorage!

cluster_url = "https://user:password@license-server/cluster1"
storage_url = "https://user:password@license-server/cluster1/storage"
cluster = ServerProxy(cluster_url)
storage = ServerProxy(storage_url)
storage.stopEXAStorage()
cluster.getServiceState()

The state of the Storaged service will switch to 'not running':

[['Loggingd', 'OK'], ['Lockd', 'OK'], ['Storaged', 'not running'], ['DWAd', 'OK']]

3. Shutdown of the cluster nodes, and of the license server(s) last

for node in cluster.getNodeList():
    cluster.shutdownNode(node)

license_servers = ['n0010',]
for ls in license_servers:
    cluster.shutdownNode(ls)

The last call triggers the shutdown of the license server(s) and therefore terminates all EXAoperation instances.
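The pre-shutdown checks described above can be wrapped in small helpers. This is a minimal sketch of our own convenience functions (not part of the EXAoperation API), based on the getServiceState and getDatabaseState return shapes shown in this article:

```python
def services_ok(service_states):
    # service_states is the list returned by getServiceState(),
    # e.g. [['Loggingd', 'OK'], ['Lockd', 'OK'], ['Storaged', 'OK'], ['DWAd', 'OK']]
    return all(state == 'OK' for _, state in service_states)

def safe_to_stop_storage(db_states):
    # db_states maps database name -> runtime state as returned by
    # getDatabaseState(); every database must be in 'setup' (stopped)
    # before EXAStorage may be stopped.
    return all(state == 'setup' for state in db_states.values())
```

For example, call safe_to_stop_storage({db: ServerProxy(url_for(db)).getDatabaseState() for db in cluster.getDatabaseList()}) before invoking stopEXAStorage().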
Prerequisites

The datadog-agent has one dependency, '/bin/sh'. It is safe to just install it, also with regard to future updates of Exasol.

Installation

For CentOS 7.x, just run on each machine (as user root):

DD_API_KEY=<Your-API-Key> bash -c "$(curl -L https://raw.githubusercontent.com/DataDog/datadog-agent/master/cmd/agent/install_script.sh)"

Changing hostnames

The hostname can be changed in '/etc/datadog-agent/datadog.yaml'; afterward, restart the agent as user root with 'systemctl restart datadog-agent'.
WHAT WE'LL LEARN

This article will show you how to change the license file in your Docker Exasol environment.

HOW-TO

NOTE: $CONTAINER_EXA is a variable set before deploying an Exasol database container with persistent storage. For more information, please check our GitHub repo.

1. Ensure that your Docker container is running with persistent storage. This means that your docker run command should contain a -v statement, like the example below:

$ docker run --detach --network=host --privileged --name <container_name> -v $CONTAINER_EXA:/exa exasol/docker-db:6.1.5-d1 init-sc --node-id <node_id>

2. Copy the new license file to the $CONTAINER_EXA/etc/ folder:

$ cp /home/user/Downloads/new_license.xml $CONTAINER_EXA/etc/new_license.xml

3. Log in to your Docker container's BASH environment:

$ docker exec -it <container_name> /bin/bash

4. Go to the /exa/etc folder and rename the old license.xml file:

$ cd /exa/etc/
$ mv license.xml license.xml.old

5. Rename the new license file:

$ mv new_license.xml license.xml

6. Double-check the contents of the directory to ensure that the new file is named license.xml:

$ ls -l
<other files>
-rw-r--r-- 1 root root 2275 Jul 15 10:13 license.xml.old
-rw-r--r-- 1 root root 1208 Jul 21 07:38 license.xml
<other files>

7. Sync the files across all nodes if you are using a multi-node cluster:

$ cos_sync_files /exa/etc/license.xml
$ cos_sync_files /exa/etc/license.xml.old

8. Stop the database and storage services:

$ dwad_client stop-wait <database_instance>
$ csctrl -d

9. Restart the container:

$ docker restart <container_name>

10.
Log in to the container and check that the proper license is installed:

$ docker exec -it <container_name> /bin/bash
$ awk '/SHLVL/ {for(i=1; i<=6; i++) {getline; print}}' /exa/logs/cored/exainit.log | tail -6

You should get an output similar to this:

[2020-07-21 09:43:50] stage0: You have following license limits:
[2020-07-21 09:43:50] stage0: >>> Database memory (GiB): 50 Main memory (RAM) usable by databases
[2020-07-21 09:43:50] stage0: >>> Database raw size (GiB): unlimited Raw Size of Databases (see Value RAW_OBJECT_SIZE in System Tables)
[2020-07-21 09:43:50] stage0: >>> Database mem size (GiB): unlimited Compressed Size of Databases (see Value MEM_OBJECT_SIZE in System Tables)
[2020-07-21 09:43:50] stage0: >>> Cluster nodes: unlimited Number of usable cluster nodes
[2020-07-21 09:43:50] stage0: >>> Expiration date: unlimited Date of license expiration

Check the parameters and see if they correspond to your requested license parameters.
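The same check can be done without awk; a hedged Python sketch that pulls the license-limit block out of an exainit.log dump, keyed on the 'license limits' marker visible in the output above (the function name is ours):

```python
def license_limits(log_text, count=6):
    """Return the license-limit block from an exainit.log dump:
    the marker line 'You have following license limits' plus the
    `count - 1` lines that follow it. Keeping the *last* occurrence
    mirrors the `... | tail -6` pipeline above."""
    lines = log_text.splitlines()
    start = None
    for i, line in enumerate(lines):
        if "license limits" in line:
            start = i  # remember the last occurrence, like `tail`
    if start is None:
        return []
    return lines[start:start + count]
```

Usage: `license_limits(open('/exa/logs/cored/exainit.log').read())` should return the six lines shown above.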
This article describes the Exasol database backup process.
In case of using two or more database clusters, backups can be stored "cross-over" for fail-safety reasons. The scenario might look like this:

       CLUSTER 1                     CLUSTER 2
+-------------------+         +---------------------+
| DB(1)-------------+-----+   | DB(2)               |
|                   |     |   |    |                |
|                   |     |   |    |                |
| Archive Volume <--+-----)---+----+                |
|                   |     |   |                     |
|                   |     +---+--> Archive Volume   |
+-------------------+         +---------------------+

To achieve this, two remote volumes must be defined:

The first one in cluster 1, referencing an archive volume in cluster 2. In this example, we reference a volume v0002:

ftp://{IP address or comma-separated IP addresses in cluster 2}:2021/v0002

For user and password, enter a valid EXAoperation account of cluster 2.

The second one in cluster 2, referencing an archive volume in cluster 1.

Please note: If backups should also be usable by the respective remote database (e.g. database 2 should be able to restore a backup from within its local archive volume written by remote database 1), the remote archive volume option nocompression must be used. Prior to version 6.0, it was not possible to use this approach for creating backups cross-wise.
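The remote volume URL follows a fixed pattern (FTP on port 2021, comma-separated node IPs, then the volume id, as in the example above). A small helper, with the naming ours:

```python
def remote_volume_url(node_ips, volume_id, port=2021):
    """Build the ftp:// URL used when defining a remote archive
    volume in EXAoperation, e.g. for volume v0002 in the remote
    cluster reachable via the given node IPs."""
    return "ftp://{}:{}/{}".format(",".join(node_ips), port, volume_id)
```

For example, `remote_volume_url(["192.168.1.11", "192.168.1.12"], "v0002")` yields `ftp://192.168.1.11,192.168.1.12:2021/v0002`.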
Background

This article explains how to activate a new license.

Scenario: License upgrade with DB RAM expansion

Prerequisites:
- The valid license file (XML)
- A short downtime to stop and start the database
- An EXAoperation user with privilege level "Master"

Explanation

Step 1: Upload the license file to EXAoperation
- In EXAoperation, navigate to "Software"
- On the software page, click on the "License" tab
- Click on the "Browse" button to open a file upload dialog
- Select the new license file and confirm by clicking the "Upload" button
- Refresh the "License" page and review the new license information

Step 2: Stop all databases
- Click on the left navigation pane "EXASolution"
- Select all checkboxes of the listed database instances
- Click on the "Shutdown" button and wait for all database instances to shut down (Monitoring > Logservice)

Step 3: Adjust DB RAM (optional)
- Click on the DB name
- Click on "Edit"
- Adjust "DB RAM (GiB)" according to your license and click "Apply"

Step 4: Start all databases
- Click on the left navigation pane "EXASolution"
- Select all checkboxes of the listed database instances
- Start all databases and wait for all instances to be up and running (Monitoring > Logservice)

Additional References
https://docs.exasol.com/administration/on-premise/manage_software/activate_license.htm?Highlight=license
Background

Installation of FSC Linux agents via XML-RPC.

Prerequisites

Ask at service@exasol.com for the FSC monitoring plugin.

How to Install FSC Linux Agents via XML-RPC

1. Upload "Plugin.Administration.FSC-7.31-16.pkg" to EXAoperation
- Log in to EXAoperation (user privilege Administrator)
- Upload the pkg: Configuration > Software > Versions > Browse > Submit

2. Connect to EXAoperation via XML-RPC (this example uses Python)

>>> import xmlrpclib, pprint
>>> s = xmlrpclib.ServerProxy("http://user:password@license-server/cluster1")

3. Show plugin functions

>>> pprint.pprint(s.showPluginFunctions('Administration.FSC-7.31-16'))
{'INSTALL_AND_START': 'Install and start plugin.',
 'UNINSTALL': 'Uninstall plugin.',
 'START': 'Start FSC and SNMP services.',
 'STOP': 'Stop FSC and SNMP services.',
 'RESTART': 'Restart FSC and SNMP services.',
 'PUT_SNMP_CONFIG': 'Upload new SNMP configuration.',
 'GET_SNMP_CONFIG': 'Download current SNMP configuration.',
 'STATUS': 'Show status of plugin (not installed, started, stopped).'}

4. Install FSC and check the return code

>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'INSTALL_AND_START')
>>> ret
0

5. Upload snmpd.conf (example attached to this article)

>>> sts, ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'PUT_SNMP_CONFIG', file('/home/user/snmpd.conf').read())

6. Restart FSC and check its status

>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'RESTART')
>>> ret = s.callPlugin('Administration.FSC-7.31-16', 'n10', 'STATUS')
>>> ret
[0, 'started']

7. Repeat steps 4-6 for each node.

Additional Notes

For monitoring the FSC agents, go to http://support.ts.fujitsu.com/content/QuicksearchResult.asp and search for "ServerView Integration Pack for NAGIOS".
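Step 7 (repeating steps 4-6 per node) can be looped. A sketch using only the callPlugin calls shown above, written for Python 3's xmlrpc.client (the successor of the xmlrpclib module used in this article); the helper name and node list are ours:

```python
from xmlrpc.client import ServerProxy  # Python 3 name for xmlrpclib

PLUGIN = 'Administration.FSC-7.31-16'

def install_fsc(server, nodes, snmp_conf):
    """Install, configure and start the FSC plugin on each node
    (steps 4-6 above); returns a {node: status} map.
    callPlugin returns a (code, payload) pair, as seen in step 4."""
    statuses = {}
    for node in nodes:
        sts, ret = server.callPlugin(PLUGIN, node, 'INSTALL_AND_START')
        if ret != 0:
            raise RuntimeError("install failed on {}: {!r}".format(node, ret))
        server.callPlugin(PLUGIN, node, 'PUT_SNMP_CONFIG', snmp_conf)
        server.callPlugin(PLUGIN, node, 'RESTART')
        sts, status = server.callPlugin(PLUGIN, node, 'STATUS')
        statuses[node] = status
    return statuses

# Usage sketch:
# server = ServerProxy("http://user:password@license-server/cluster1")
# install_fsc(server, ['n10', 'n11'], open('/home/user/snmpd.conf').read())
```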
Background

This article guides you through the procedure to change the network for a cluster without tagged VLAN. It is strongly recommended to contact Exasol support if you have tagged VLAN.

Prerequisites
- This task requires a maintenance window of at least 30 minutes
- The tasks performed in EXAoperation require a user with at least "Administrator" privileges
- The network requirements are described in system_requirements
- This guide only covers the case where the subnet mask and the host addresses won't be changed

Instructions

1. EXAoperation master node
The changes must be applied on the real license server, since it is the only node that boots from its local hard drive. Check the following: Is EXAoperation running on the license server? If not, move it to the license server.

2. Shut down all databases
- Navigate to the EXAoperation page Services > EXASolution
- Review the database list and check that the column "Background Restore" indicates "None" on all instances
- Select all (running) EXASolution instances
- Click on the "Shutdown" button
- Reload the page until all instances change their status from "Running" to "Created"

You may follow the procedure in an appropriate logservice:

System marked as stopped.
Volume 1 has been deleted.
EXASolution exa_test is rejecting connections
Controller(0.0): Shutdown called.
User 0 requested shutdown of system

3. Shut down EXAStorage
- Navigate to the EXAoperation page Services > EXAStorage
- Click on the "Shutdown Storage Service" button and confirm your choice when prompted

4. Suspend nodes
- Open the Configuration > Nodes page in EXAoperation
- Select all nodes in the tab "Cluster Nodes"
- Select "Stop cluster services" from the actions dropdown menu
- Confirm with a click on the "Execute" button
- Reload the page until all nodes indicate the State/Op. "Suspended"

⚠️ The 2nd state of every node must be "Active"!
Restarting a node that has the "To Install" state will lead to unrecoverable data loss!

5. Change network settings
- Navigate to Configuration > Network
- Click on the "Edit" button
- Fill in the characteristics of the new network in the fields "Public Network", "Gateway", "NTP Server 1", "DNS Server 1". If there are no values for NTP or DNS, remove the entries so that the fields are empty.
- Click on the "Apply" button to save the new configuration

6. Change the IP and reboot the license server
- Log in as the maintenance user via the console. ⚠️ Connect via iDRAC, vSphere, or locally on the terminal: if you are connected via SSH and confirm the new IP with "OK", you will get disconnected instantly. Make sure that you are able to reboot the server after the reconfiguration.
- Confirm "Configure Network"
- Change the IP of the license server, the subnet mask, and the gateway, and confirm with "OK"
- Reboot the license server with the "Reboot" button. ⚠️ Now you are able to reconfigure your own network (e.g. local public switch, VLAN, etc.)
- Wait for the license server to finish its startup procedure and log into EXAoperation again

7. Reboot the database nodes
- Navigate to the Configuration > Nodes page
- Select all nodes in the tab "Cluster Nodes"
- Choose "Reboot" from the actions dropdown menu and confirm with a click on the "Execute" button
- Wait for the nodes to finish rebooting (about 15 to 25 minutes)
- Reload the nodes page until the State/Op. column changes from "booting" to "running" for all nodes

You may watch the boot process (of node n11 in this example) in an appropriately configured logservice:

Boot process stages 1 to 3 took 121 seconds.
Boot process stage 3 finished.
Start boot process stage 3.
successfully initialized thresholds for node monitoring
successfully unpacked package on client node: JDBC_Drivers-2020-01-22.tar.gz
successfully unpacked package on client node: EXASolution-7.0.beta1_x86_64.tar.gz
successfully synchronized EXAoperation.
successfully unpacked package on client node: EXARuntime-7.0.beta1_x86_64.tar.gz
successfully unpacked package on client node: EXAClusterOS-7.0.beta1_CentOS-7.5.1804_x86_64.tar.gz
Node does not support CPU power management (requested 'performance').
Prepare boot process stage 3.
Hard drives mounted.
Mount hard drives.
client mac address of public0 matches the expected value (9E:F9:82:38:43:69)
client mac address of private0 matches the expected value (2A:A9:8E:89:2F:10)
Initialize boot process.
client mac address is '2A:A9:8E:89:2F:10'
client version is '7.0.beta1'
client ID is '10.17.1.11'

8. Start up EXAStorage
- Navigate to the EXAoperation page Services > EXAStorage
- Ensure that all database nodes indicate the state "Running"
- Click on the "Startup Storage Service" button and confirm your choice when prompted
- After the EXAStorage page has been reloaded, check the status of all nodes, disks, and volumes

9. Start the database
- Select all EXASolution instances
- Click on the "Start" button
- Reload the page until all instances change their status from "Created" to "Running"

You may follow the procedure in an appropriate logservice:

EXASolution exa_test is accepting connections
System is ready to receive client connections.
System started successfully in partition 44.
User 0 requests startup of system.
User 0 requests new system setup.
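The "reload until Running" steps can also be automated over XML-RPC, using the per-database getDatabaseState() call shown in the cluster shutdown article earlier in this collection. A sketch (timeout values are arbitrary; `instance` is a proxy for the db_<name> endpoint used there):

```python
import time
from xmlrpc.client import ServerProxy

def wait_until_running(instance, timeout=600, poll=10, sleep=time.sleep):
    """Poll getDatabaseState() until the database reports 'running'
    or `timeout` seconds have passed. `instance` is an XML-RPC proxy
    for https://user:password@license-server/cluster1/db_<name>."""
    waited = 0
    while waited <= timeout:
        if instance.getDatabaseState() == 'running':
            return True
        sleep(poll)      # wait `poll` seconds between checks
        waited += poll
    return False

# Usage sketch:
# db = ServerProxy("https://user:password@license-server/cluster1/db_exa_test")
# wait_until_running(db)
```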
Background

This article guides you through the procedure of setting the time on clusters manually, as preparation for configuring NTP servers (Configuring NTP servers via EXAoperation). The cluster nodes constantly exchange configuration and vitality information and depend on proper time synchronization. While it is possible to manually set the time on EXASolution clusters, it is highly recommended to supply NTP servers for time synchronization.

Prerequisites
- The update requires a maintenance window of at least half an hour.
- The tasks performed in EXAoperation require a user with at least "Administrator" privileges.

Procedure

1.1 Shut down all databases
- Open 'Services > EXASolution'
- Check the database operations. If the database is stopped while an operation is in progress, the operation will be aborted
- Select all (running) EXASolution instances
- Click on the "Shutdown" button
- Reload the page until all instances change their status from "Running" to "Created"

You may follow the procedure in an appropriate logservice:

System marked as stopped.
Successfully send retry shutdown event to system partition 64.
EXASolution exa_db is rejecting connections
controller-st(0.0): Shutdown called.
User 0 requests shutdown of system.

1.2 Shut down the EXAStorage service
- Open 'Services > EXAStorage'
- Check if any operations are currently in progress (if EXAStorage is stopped while an operation is in progress, the operation will be aborted)
- Click on the "Shutdown Storage Service" button

1.3 Check the NTP configuration and set the time
- Open 'Configuration > Network'
- Check if there are already NTP servers configured. ⚠️ If yes, please remove them by clicking on "Edit".
- Open 'Services > Monitoring'
- Change the time
- Click on "Set Cluster time"

Please follow the instructions of Configuring NTP servers via EXAoperation.
See "Procedure - 1.2 Configure NTP server & 1.3 Synchronise time on the cluster".

1.4 Start up EXAStorage
- Navigate to the EXAoperation page Services > EXAStorage
- Ensure that all database nodes indicate the state "Running"
- Click on the "Startup Storage Service" button and confirm your choice when prompted
- After the EXAStorage page has been reloaded, check the status of all nodes, disks, and volumes

1.5 Start up the databases
Open the Services > EXASolution page and repeat the following steps for all instances:
- Click on an EXASolution instance name
- From the "Actions" dropdown menu, select "Startup" and confirm with a click on the "Submit" button
- Navigate back to the Services > EXASolution page and reload until the database indicates the status "Running"

You may follow the procedure in an appropriate logservice:

EXASolution exa_demo is accepting connections
System is ready to receive client connections.
System started successfully in partition 44.
User 0 requests startup of system.
User 0 requests new system setup.
Adding data nodes to a cluster does not mean that they must be created from scratch. If a data node has already been configured, then by going to the Nodes tab and clicking on the existing node, we can find the Copy and Multiple Copy options. The Copy button uses the current node's configuration to create a new node with the same configuration, while Multiple Copy creates any desired number of nodes using the "master" node's configuration.

Multiple Copy
- Number: Represents the last octet within the private network (CICN)
- External Number: Represents the last octet within the public network (CSCN)
- Label: Optional label
- MAC Private LAN: MAC address of the private network interface (CICN)
- MAC Private Failsafety LAN (optional)
- MAC Public LAN: MAC address of the public network interface (CSCN)
- MAC Public Failsafety LAN (optional)
- MAC SrvMgmt: MAC address of the IPMI network interface (LOM), OR
- IP SrvMgmt: IP address of the IPMI network interface (LOM)

The example shows the creation of four new nodes using the configuration of "master" node n0011.

Copy
Has the same configuration fields as Multiple Copy, but only for one node. The example shows the creation of one new node using the configuration of "master" node n0011.
Exasol logs a multitude of system information in statistical system tables (schema EXA_STATISTICS). This information is kept long-term and provides good insight into changes in database behavior. It is often easy to spot whether a "system is slow" report was triggered by a sudden change or is part of a long-standing trend. These statistics also provide a good starting point for further system analysis concerning sizing and performance. This article describes how to generate and download the statistics in two ways:

a. Manually
- Log into EXAoperation
- Select your database instance
- Select "Statistics" in the bottom pane
- Enter valid login credentials for the database and click on "Download" (the given user must have the system privileges CREATE CONNECTION and SELECT ANY DICTIONARY)
- When prompted by your browser, choose to save the zip archive

b. Automated (XML-RPC)
The XML-RPC interface also provides a function to download these statistics. The following is a minimal parameterized example in Python:

import base64, urllib, xmlrpclib

s = xmlrpclib.ServerProxy(httpstring + '/cluster1')
t = xmlrpclib.ServerProxy(httpstring + '/cluster1/' + urllib.quote_plus('db_' + dbname))
data = base64.b64decode(t.getDatabaseStatistics(dbuser, dbpass, startdate, enddate))
file(filename, 'w').write(data)

The downloaded zip archive contains a set of CSV files (all unencrypted) with extracts of the most important statistical system tables. All data is in aggregated form; it does not contain any of the following:
- User names
- Schema names
- Table names
- SQL texts
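The snippet above is Python 2 (xmlrpclib, file()). A Python 3 port, structured so that the XML-RPC proxy is injected; the URL and credentials are placeholders:

```python
import base64
from urllib.parse import quote_plus
from xmlrpc.client import ServerProxy

def download_statistics(db_proxy, dbuser, dbpass, startdate, enddate, filename):
    """Fetch the statistics zip archive for one database via
    getDatabaseStatistics() and write the decoded bytes to `filename`."""
    payload = db_proxy.getDatabaseStatistics(dbuser, dbpass, startdate, enddate)
    with open(filename, 'wb') as f:
        f.write(base64.b64decode(payload))

# Usage sketch:
# httpstring = "https://user:password@license-server"
# proxy = ServerProxy(httpstring + '/cluster1/' + quote_plus('db_' + dbname))
# download_statistics(proxy, dbuser, dbpass, startdate, enddate, 'stats.zip')
```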
Certified Hardware List
The hardware certified by Exasol can be found in the link below:

Certified Hardware List

If your preferred hardware is not certified, refer to our Certification Process for more information.