To reproduce certain problems, Exasol support may ask you to send DDL statements for required tables and views. At this point you have two choices:
Start drilling down to find each and every required view, table, and so on. This can be a lot of work and may end in a few communication round trips in case you overlooked a dependency or two.
Skip all that and just send us the full DDL for all schemas in your database instance, letting us sort out what is needed and what is not. Neither option is optimal.
The attachment of this article contains a procedure script (Lua) that creates DDL statements for the recursive dependencies of a view. The DDL statements are presented as a single-column result set and are ready for copy/paste into a text editor (or EXAplus) for saving.
execute script meta.create_DDL( 'DUT' , 'TRUNK' );
--DDL created by user SYS at 2017-11-14 09:44:59.554000
-- table dependencies --
CREATE SCHEMA "DUT";
CREATE TABLE "DUT"."TAB1"(
"I" DECIMAL(18,0) IDENTITY NOT NULL,
-- SYSTEM TABLE: SYS.EXA_METADATA
-- function dependencies --
function func( param decimal(3) ) returns decimal(3)
return sqrt(param) * (select max(i) from dut.tab1);
-- script dependencies --
CREATE LUA SCALAR SCRIPT "LUA_SCALAR" () RETURNS DECIMAL(18,0) AS
-- view dependencies --
--> level 2
-- SYSTEM VIEW: SYS.CAT
--> level 1
CREATE VIEW "DUT"."BRANCH"
as ( select * from exa_metadata, cat );
-- final query/view:
CREATE VIEW "DUT"."TRUNK"
select * from dut.tab1, dut.branch
where func(j) > lua_scalar()
This script is work in progress and has only seen minimal testing so far.
Things known not to work:
Virtual Schemas – It is unlikely we would be able to clone your remote system anyway.
Connections and IMPORT inside views – Like virtual schemas, it probably does not make much sense.
Dependencies inside scripts – This is branded 'dynamic SQL', and our engine cannot determine those dependencies without actually executing the script with specific input.
SELECT dependencies inside functions – Just don't do that. As with scripts, these dependencies do not show up in the system tables.
If your model contains any of the above and it turns out to be relevant for reproduction of a problem, you might have to revert to "Skip all that" above. The "Copy Database" script in create-ddl-for-the-entire-database may be of use then.
SOL: Metadata Backup
How it works
The file contains several scripts that copy the DDL for all objects in the database, along with the scripts necessary to restore the metadata. If the config file is set up correctly, "backup.sh" performs the following tasks:
Creates the specified directory. This is where all of the files will be stored
Connects to the database using an Exaplus profile (must be created beforehand)
Once connected to the database, the script creates a schema "BACKUP_SCRIPTS" and 2 scripts which are used in the backup process.
EXPORT statements are generated using the database script "BACKUP_SYS" for several system tables which can be referenced later on. The CSV files are saved in the './ddl/' path.
A database script "restore_sys.sql" is created and saved in the './ddl/' path; it includes all the commands necessary to restore the system tables into a new "SYS_OLD" schema.
The script executes the database script "METADATA_BACKUP", which creates DDL for all database objects, including schemas, tables, views, users, and roles. Limitations of the script are listed at the end.
Specifically, this script will read data from system tables and create the necessary CREATE statements which can be executed at a later time.
The script creates a new schema, stores the data into a table in this new schema, and then exports the table contents into an SQL file to prevent formatting errors. After the export, the schema is dropped.
The DDL and CSVs of the old tables are compressed and saved as a .tar.gz file in the './backups' directory, or in a different location if "EXTERNAL_DIR" is set in the "config" file.
Creating DDL based on system tables is not perfect and has some limitations and imperfections. To safeguard against incorrect DDL creation, the script also saves the system tables it uses. If a DDL statement is not created correctly, or if you discover imperfections or errors, you can query the saved system tables directly and create your own DDL.
invalid (outdated) views and their DDL will not be created
views and functions for which the schema or object itself was renamed still contain the original object names and may cause an error on DDL execution
comments at the end of view text may cause problems
GRANTOR of all privileges will be the user that runs the SQL code returned by the script
passwords of all users will be 'Start123', except for connections where passwords will be left empty
functions with dependencies will not be created in the appropriate order
UDF scripts with delimited identifiers (in- or output) will cause an error on DDL execution
If authenticating using Kerberos (< 6.0.8), please alter the script BI_METADATA_BACKUPV2 lines 155 and 156
Comments containing apostrophes (single-quote) will cause an error on DDL execution
Note: This solution has been updated to include new permissions/rights in version 6.1
Prerequisites:
Database is already created and can be connected to from the system on which you are running the scripts
The database user you will connect with has the following system privileges:
SELECT ANY DICTIONARY
How to create a Metadata Backup Concept?
Save the attached tar.gz file to a directory of your choice. The only requirements are that the system is Linux-based and that Exaplus command line is installed.
Unzip the file:
tar xf metabackup_vYYYYMMDD.tar.gz
Create an Exaplus profile with all of the connection details, including database user, password, and connection string. Details on the parameters for Exaplus can be found in the user manual. Example:
/usr/opt/EXASuite-6/EXASolution-6.0.10/bin/Console/exaplus -u [YOU_USER] -p [YOUR_PASSWORD] -c [DB IP Address]:8563 -wp metadata_backup_profile
Change directory to the newly created folder:
Edit the config file with the help of the following information:
EXAPLUS = Path to EXAPLUS, excluding the exaplus binary itself. For the example above, it would be '/usr/opt/EXASuite-6/EXASolution-6.0.10/bin/Console'
PROFILENAME = The name of the profile created in the previous step (e.g. metadata_backup_profile)
EXTERNAL_DIR = The path where the metadata backup will be stored if specified. Can be an external filesystem. Should be mounted or available beforehand
SYSPATH = The path where metabackup_vYYYYMMDD.tar.gz was extracted to
DB_NAME = The Database Name
EXAPLUS_TIMEOUT = Timeout for Exaplus (default 300 seconds). If you want to prevent long-running queries, set the timeout accordingly. Please note, for very large databases, it might take over 5 minutes to run all of the scripts, so please set the timeout higher.
EXAPLUS_RECONNECT = Reconnect tries for Exaplus if the connection fails. Default value is set to '1'
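Putting the parameters above together, a complete config file might look like this (all values are illustrative and must be adapted to your environment):

```
# metabackup config -- illustrative values only
EXAPLUS=/usr/opt/EXASuite-6/EXASolution-6.0.10/bin/Console
PROFILENAME=metadata_backup_profile
EXTERNAL_DIR=/mnt/backup_share
SYSPATH=/home/exasol/metabackup
DB_NAME=MYDB
EXAPLUS_TIMEOUT=600
EXAPLUS_RECONNECT=1
```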
Make the .sh files executable:
chmod 755 *.sh
Run the backup script: ./backup.sh or bash backup.sh
SOL: Metadata Restore
This script imports the CSVs created during the backup and runs all of the CREATE statements.
The script opens an Exaplus session, creates a schema called 'SYS_OLD' containing the same system tables that were created in the backup, and then imports the CSVs into these tables.
All of the CREATE statements are executed, which restores the 'structure' of the database, however all tables are empty.
Note: During execution, the owner of all objects is the user running the script. At the end of the script, the owner of each schema is changed to match the correct owner.
To monitor errors, profiling is enabled by default. You can search EXA_DBA_PROFILE_LAST_DAY for commands that were rolled back.
If the database is not empty, some objects may fail to be created if they already exist in the database. Objects will not be overwritten.
Limitations of the create DDL script may cause errors during the CREATE statements. Please check the restore log, profiling or auditing to identify statements which were not created successfully.
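If auditing is enabled, the statements that failed during the restore can be listed with a query along these lines (a sketch against the EXA_DBA_AUDIT_SQL system table):

```sql
SELECT session_id, stmt_id, command_name, error_text, sql_text
  FROM exa_dba_audit_sql
 WHERE success = FALSE
 ORDER BY start_time;
```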
If restoring to a database running a different version than the one the backup was taken from, the IMPORT of the old SYS tables may fail due to differing columns.
Exaplus command line
Database is already created and can be connected to from the system on which you are running the scripts
It is recommended to start the database with auditing ENABLED
The database user will need extensive CREATE privileges. It is recommended that the user running the scripts has DBA privileges, as the following commands will be carried out:
CREATE SCHEMA, TABLE, VIEW, SCRIPT, CONNECTION, etc.
How to apply a Metadata Restore?
The restore script can be run from the same system you ran the backup on or a different system. If running the restore on a new system, please follow steps 1-4 found in the Backup instructions to set up the files. NOTE: When setting up your exaplus profile, please enter the information for the database you are restoring to.
Unpack the backup tar into the directory of your choice
tar xf ddl-backup-DB_NAME-YYYY-MM-DD-HH-Mi-SS.tar.gz
Edit the config file with the following information:
SYSPATH = The path where metabackup.tar.gz was extracted to
DB_NAME = The Database Name
BACKUP_RESTORE_PATH = The path that you unpacked the backup file to (should end with '/ddls')
RESTORE_SYS_SQL_PATH = The path containing the restore script
Run the restore script: ./restore.sh or bash restore.sh
We want to replace all NULL values within a certain table with the default value specified for the corresponding column, and we don't want to create the necessary UPDATE statements manually.
A small metadata-driven procedure script (Lua) that issues the required update statements. (See Attachment)
performs a rollback after the metadata request, to avoid a read-write conflict scenario.
performs all updates within a single transaction.
will not abort when an update on one of the columns fails.
performs a commit when all columns have been handled, regardless of any errors encountered.
...on column selection
The script includes all columns that have a DEFAULT value set. It excludes all columns with a NOT NULL constraint (regardless of the constraint's actual state); obviously, such a column cannot contain any NULL values that need updating.
...on row selection
Due to Exasol's memory management and data processing, the script handles each column separately. This minimizes both the amount of memory required for processing and the number of data blocks being written. The script does not contain any delta functionality; it will process all rows of the table each time it is called.
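Under the hood, the script issues one UPDATE per qualifying column, of roughly this shape (schema, table, and column names are placeholders):

```sql
UPDATE "MY_SCHEMA"."MY_TABLE"
   SET "MY_COLUMN" = DEFAULT
 WHERE "MY_COLUMN" IS NULL;
```

The WHERE clause ensures that only rows actually containing NULL are touched, keeping the number of rewritten data blocks to a minimum.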
Just create the script in any schema you like (CREATE SCRIPT permission required). It does not have any dependencies.
create or replace /* procedure */ script REPLACE_NULL( schema_name , table_name )
When calling the script, it expects two parameters: A schema name and a table name:
execute script REPLACE_NULL( 'my schema', 'my table' );
💡 Both schema and table name are expected as strings and are case-sensitive.
open schema SR9000;
-- Rows affected: 0
create table Invoice( invoice_id int , invoice_date date default date '2017-01-01' );
-- Rows affected: 0
insert into Invoice values (1, null ), (2, null ), (3, '2017-02-01' );
-- Rows affected: 3
execute script REPLACE_NULL( 'SR9000' , 'Invoice' );
-- Error: "No columns found for "SR9000"."Invoice"" caught in script "SR9000"."REPLACE_NULL" at line 23 (Session: 1585944483210400591)
... yes. We created the table using a regular identifier, so it ended up as uppercase...
execute script REPLACE_NULL( 'SR9000' , 'INVOICE' );
2 rows updated
What does a preprocessor script do?
Well, it preprocesses.
To be more specific, an active preprocessor script is hooked before Exasol's SQL parser. This allows it to intercept and modify any SQL text sent to the database (or executed within a procedure script) before passing it on to the actual parser.
What can I do with it?
These scripts are written in Lua, so you can basically do anything with the preprocessor; here are a few facts and limitations, though:
Exasol's Lua library is stripped down and cannot be extended with binary libraries, for security reasons
Preprocessor scripts do not take parameters; the "current" SQL text can be retrieved through a function call
Preprocessor scripts can execute statements using (p)query
Preprocessor scripts do not return any values; they "return" the modified SQL text through another function call
While preprocessor scripts are often enabled at system level, any user can disable them in his or her session (see Step 1 below)
Preprocessor scripts are executed in the caller's context and with the caller's privileges. Also, any user who can EXECUTE the script (which is a necessity) can also READ it. Security by obscurity won't work.
Typical Use Cases
Compatibility layer for a frontend that produces SQL not suitable for Exasol
Macro magic: Expanding predefined keywords server-side
"human knows more" optimizations of queries and filters
Row-Level Security (💡 re-read the last two points above)
Syntax and Semantics
Please see the Exasol User Manual (Section 3.8) for details.
As a preprocessor script is a schema object, you will need to find or create a schema to create the script in:
create schema if not exists PREPROCESSOR;
CREATE SCHEMA privilege or pre-existing schema
How to work with Preprocessor Script?
Step 1: Safety
"CREATE SCRIPT" statements are also preprocessed. As the preprocessor script you are going to (re-)deploy is very likely to contain the keywords it should react on, it is advisable to disable the preprocessor before deployment:
alter session set sql_preprocessor_script = null ;
Step 2: Deploy
Create the preprocessor script. Syntax "around" may depend on the SQL client you are using:
create or replace Lua script MY_PREPROCESSOR()
CREATE SCRIPT privilege
ownership of the schema or CREATE ANY SCRIPT privilege
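As a minimal sketch, a preprocessor script that expands a predefined keyword might look like this. The sqlparsing.getsqltext()/sqlparsing.setsqltext() calls are the functions the preprocessor API provides; the MY_MACRO rewrite is purely illustrative:

```lua
CREATE OR REPLACE LUA SCRIPT PREPROCESSOR.MY_PREPROCESSOR() AS
    -- fetch the SQL text currently being parsed
    local sqltext = sqlparsing.getsqltext()
    -- illustrative macro expansion: replace the keyword MY_MACRO
    sqltext = string.gsub(sqltext, 'MY_MACRO', 'CURRENT_DATE')
    -- hand the (possibly modified) text back to the parser
    sqlparsing.setsqltext(sqltext)
/
```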
Step 3: Activate locally
Now activate the preprocessor for your local session:
alter session set sql_preprocessor_script = PREPROCESSOR.MY_PREPROCESSOR;
Step 4: TEST IT!
Run a few statements to verify success. This is best done with Auditing or Profiling enabled, so you can see the resulting SQL texts. When things go very wrong, go back to step (1) – this is the only SQL statement not passed through the preprocessor...
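With Auditing enabled, the SQL text that actually reached the parser can be inspected for the current session, for example (a sketch using the EXA_DBA_AUDIT_SQL system table):

```sql
SELECT stmt_id, sql_text
  FROM exa_dba_audit_sql
 WHERE session_id = CURRENT_SESSION
 ORDER BY stmt_id DESC;
```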
Step 5: Activate globally
Now that things went well, we can activate the script for other users (new sessions):
alter system set sql_preprocessor_script = PREPROCESSOR.MY_PREPROCESSOR;
ALTER SYSTEM privilege
Step 6: No wait, we forgot something important!
We just locked out (more or less) everyone else from the database: They don't have EXECUTE permissions on the script!
grant EXECUTE on PREPROCESSOR.MY_PREPROCESSOR to public ;
ownership of the schema or GRANT ANY OBJECT PRIVILEGE privilege
As step (2) replaces the script, all privileges on it are lost in that step. To avoid this problem, the EXECUTE privilege should be put on schema level:
grant EXECUTE on SCHEMA PREPROCESSOR to public ;
Just make sure you don't put anything dangerous/secret into that schema
Preprocessor Scripts Documentation
List of functions in Exasol