This article explains the information found in EXA_ALL_OBJECT_SIZES
View full article
Generally speaking, NULL is not a special value; rather, it represents an undefined value. This article describes how to work with NULL values and which comparisons with NULL are valid.
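For a quick illustration of the point above (a minimal sketch using a hypothetical table T with a column X): equality comparisons with NULL never evaluate to TRUE, so IS NULL has to be used instead.

-- returns no rows, because X = NULL evaluates to NULL rather than TRUE
SELECT * FROM T WHERE X = NULL;
-- returns exactly the rows whose X is undefined
SELECT * FROM T WHERE X IS NULL;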
View full article
This article talks about using the DECIMAL data type to achieve more precise results than the DOUBLE data type.
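As a minimal sketch of the difference (literal values chosen purely for illustration): DOUBLE is a binary floating-point type and may carry tiny representation errors, while DECIMAL stores exact scaled values.

-- the DOUBLE sum may deviate minimally from 0.3, the DECIMAL sum is exact
SELECT CAST(0.1 AS DOUBLE) + CAST(0.2 AS DOUBLE)             AS double_sum,
       CAST(0.1 AS DECIMAL(3,2)) + CAST(0.2 AS DECIMAL(3,2)) AS decimal_sum;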
View full article
When to use which programming language in Exasol? This article helps you to find the right language for your task.
View full article
Handling geospatial data can be tough, especially if you are working with large, highly detailed polygons. If you are experiencing unusually high query runtimes, this guide shows you how to speed up the process.
View full article
MERGE is designed to use a small UPDATE table to affect a larger FACT table. This article explains how it works
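A hedged sketch of that pattern with hypothetical tables (sales as the large fact table, sales_delta as the small update table) and made-up columns:

MERGE INTO sales f
USING sales_delta d
   ON (f.sale_id = d.sale_id)
WHEN MATCHED THEN UPDATE SET f.amount = d.amount
WHEN NOT MATCHED THEN INSERT (sale_id, amount) VALUES (d.sale_id, d.amount);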
View full article
In Exasol, data is automatically distributed evenly across all nodes. This distribution is random, however. By specifying distribution keys, you can control how the data is distributed, which can lead to enormous performance improvements.
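For illustration, a small hedged sketch with hypothetical tables (customers, orders): distributing both tables on the join column keeps matching rows on the same node, so the join can run locally instead of shipping data across the network.

ALTER TABLE customers DISTRIBUTE BY customer_id;
ALTER TABLE orders    DISTRIBUTE BY customer_id;
-- this join on the distribution key can now be executed as a local join on each node
SELECT c.customer_id, COUNT(*) AS order_cnt
FROM customers c JOIN orders o ON o.customer_id = c.customer_id
GROUP BY c.customer_id;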
View full article
Background
This how-to describes the necessary steps to use Kerberos-based single sign-on (SSO) in Exasol.

EXAoperation
The configuration of EXAoperation is explained in the Operational Manual: https://docs.exasol.com/administration/on-premise/user_management/kerberos_single_sign-on.htm

Prerequisites
Database and used drivers / EXAplus have version 6.0.8 or newer.

How to set up SSO with Kerberos

Step 1: Create an Exasol user with Kerberos authentication (a concrete example follows at the end of this article)

Create user:
CREATE USER <identifier> IDENTIFIED BY KERBEROS PRINCIPAL <string-literal>

The Kerberos principal is usually formed like this: <user>@<realm>

Alter user:
ALTER USER <identifier> IDENTIFIED BY KERBEROS PRINCIPAL <string-literal>

Rename user:
Renaming a user with Kerberos authentication will not affect the Kerberos principal specified by CREATE USER or ALTER USER.

More details on the creation of users can be found in the user manual.

Step 2: Client configuration

After these steps, the database is ready to accept users that authenticate via Kerberos. On the client side, there are different options for using this authentication method:

ODBC
New connection string parameters:
KERBEROSSERVICENAME - Service name part of the Kerberos principal. Default: exasol
KERBEROSHOSTNAME - Hostname of the Kerberos principal. Default: use the hostname from the connection string

Kerberos-based single sign-on is used if user and password are not specified and the SQL_DRIVER_PROMPT flag is not set during SQLDriverConnect().

ODBC single sign-on on Linux
You can either use an existing ticket for your user or use a keytab file.

Use an existing ticket: We assume that you already have a ticket for your user from the KDC. If not, obtain a ticket using kinit:
kinit user@YOUR.REALM

Use a keytab file: Make sure there are no tickets in the cache (run kdestroy), then tell Kerberos where your keytab file is located:
export KRB5_CLIENT_KTNAME=/home/<your_user>/<your_keytab_file_name>
Alternatively, add the location of the keytab file to krb5.conf:
[libdefaults]
default_client_keytab_name = /home/<your_user>/<your_keytab_file_name>

We assume you have a valid krb5.conf.

Create an odbc.ini configured for Kerberos. Here is an example:
[exa_linux]
Driver = /home/<your_user>/KerberosTest/EXASolution_ODBC-6.0.8/lib/linux/x86_64/libexaodbc-uo2214lv2.so
EXAHOST = exasol-host:8563
EXALOGFILE = odbc.log
LOGMODE = Debug Comm
KerberosServiceName = exasol
KerberosHostName = kerberos.hostname.for.exasol

Make sure your environment variables are correctly set (this might vary depending on the environment):
export KRB5CCNAME=DIR:/path/to/your/cache/
export KRB5_CONFIG=/path/to/your/krb5.conf
export ODBCINI=/path/to/your/odbc.ini

You should now be able to use the exa_linux DSN:
isql exa_linux -v

ODBC single sign-on on Windows
On Windows, just install the Exasol ODBC driver. Then, in the Windows ODBC configuration tool, add a new DSN for the Exasol ODBC driver and set up the connection:
- Specify any DSN name
- Specify the connection string (host:port)
- Set "Additional parameters" in the second tab (Advanced) of the configuration tool to "kerberosServiceName=<yourServiceName>". Depending on your Active Directory configuration, you may also need kerberosHostName; in that case extend the additional parameters as follows: "kerberosServiceName=<yourServiceName>;kerberosHostName=<yourKrbHostName>".
Compatibility mode
If it is not possible to specify the additional parameters, the "compatibility mode" using username and password can be used. In this mode, you specify the Kerberos service name in the "User name" field as:
Kerberos:<yourServiceName>
If the kerberosHostName has to be specified as well, the "User name" field has the following format:
Linux: Kerberos:<yourServiceName>@<yourKrbHostName>
Windows: Kerberos:<yourServiceName>/<yourKrbHostName>

JDBC
New connection string parameters:
If one of the following connection string parameters is set, Kerberos single sign-on will be used for JDBC.
kerberosservicename - Service name part of the Kerberos principal. Default: exasol
kerberoshostname - Hostname of the Kerberos principal. Default: use the hostname from the connection string

JDBC single sign-on on Linux

Using the ticket cache
We assume that you already have a ticket for your user from the KDC. If not, obtain a ticket using kinit:
kinit user@YOUR.REALM
If you now use JDBC with the connection string parameters specified above, authentication should work without asking for a username or password. For illustration, we provide an example Java application:

package com.exasol;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class KerberosTest {
    static Properties props = System.getProperties();

    private void testKerberosConnection(String h, String p) {
        String connectionString = "jdbc:exa:" + h + ":" + p
                + ";kerberosservicename=exasol;kerberoshostname=kerberos.hostname.for.exasol";
        // Optional connection string parameters: ";debug=1;logdir=/home/<your_user>"
        try {
            Class.forName("com.exasol.jdbc.EXADriver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        Connection driverConn = null;
        try {
            // Connecting to the Exasol server as the current OS user
            driverConn = DriverManager.getConnection(connectionString);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        try {
            if (driverConn != null && !driverConn.isClosed())
                System.out.println("Kerberos connection test success!");
            else
                System.out.println("Kerberos connect failed!");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        (new KerberosTest()).testKerberosConnection(args[0], args[1]);
        System.out.println("Done.");
    }
}

How to run the test application:
java -Djavax.security.auth.useSubjectCredsOnly=false -Djava.security.krb5.conf=/path/to/your/krb5.conf -cp exajdbc.jar:/path/to/example com/exasol/KerberosTest <your_exasol_host_name> <your_exasol_port_nr>

If Kerberos does not work as expected, you can enable command line Kerberos debugging by setting the following parameter to true:
-Dsun.security.krb5.debug=true
The resulting logs will help you understand Kerberos-related issues.
Using a keytab file
To configure the Exasol GSSAPI connection and to specify the keytab file, you can use a gss.config file:

com.sun.security.jgss.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    principal="<your_user>@<your_realm>"
    useKeyTab=true
    doNotPrompt=true
    keyTab="/home/<your_user>/<your_keytab_file_name>";
};

How to run the test application:
java -Djava.security.auth.login.config=/home/<your_user>/gss.config -Djavax.security.auth.useSubjectCredsOnly=false -Djava.security.krb5.conf=/path/to/your/krb5.conf -cp exajdbc.jar:/path/to/example com/exasol/KerberosTest <your_exasol_host_name> <your_exasol_port_nr>

JDBC single sign-on on Windows
First, install the Exasol JDBC driver. In the Java sample provided above, you can modify the connection string as follows:

String connectionString = "jdbc:exa:" + h + ":" + p + ";kerberosservicename=<your_kerberos_service_name>;kerberoshostname=<your_kerberos_hostname>;debug=1;logdir=c:\\Users\\<your_user>";

Compile the sample and run it:
java -cp "c:\Program Files (x86)\EXASOL\EXASolution-6.0\JDBC\exajdbc.jar";c:\path\to\example;C:\Users\<your_user>\Waffle\Bin\jna-4.0.0.jar;c:\Users\<your_user>\Waffle\Bin\jna-platform-4.0.0.jar;c:\Users\<your_user>\Waffle\Bin\waffle-jna.jar com/exasol/KerberosTest <your_exasol_server_host> <your_exasol_port>

Compatibility mode
Similar to ODBC, the "compatibility mode" is supported. If you use a username in the following format, the password is ignored and the driver behaves as if the respective Kerberos connection string parameters were set (see above). Note that only the service name is mandatory.
Linux: Kerberos:<yourServiceName>@<yourKrbHostName>
Windows: Kerberos:<yourServiceName>/<yourKrbHostName>

EXAplus

GUI
In the EXAplus login dialog, the user can choose Username or Kerberos in the dropdown list. If Kerberos is chosen, the GUI changes so that the user can enter the service name, host, and connection string.

CLI
For a Kerberos login, start EXAplus with the parameter -k. The user can then enter the service name, host, and connection string:

$ /usr/opt/EXASuite-6/EXASolution-6.0.8/bin/Console/exaplus -k
EXAplus 6.0.8 (c) EXASOL AG
Service name: exasol
Host: vcluster.exasol.com
Connection String (localhost:8563): mynode:8563
Thursday, March 8, 2018 12:57:48 PM CET
Connected to database exa.mynode.8563 as user null.
EXASolution 6.0.8 (c) EXASOL AG
SQL_EXA>

Manipulation of JVM properties
In the Preferences > Other menu, the user can define JVM properties, which are used when Java is called to start EXAplus. The default value is -Xmx512m. This feature is only implemented in the EXAplus GUI for Linux and Windows. On macOS, SSO with Kerberos is not available, hence the JVM option is not applicable there.
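To round off the SQL side of Step 1, here is a hedged sketch with a made-up user and realm, following the <user>@<realm> principal convention described above:

CREATE USER jdoe IDENTIFIED BY KERBEROS PRINCIPAL 'jdoe@EXAMPLE.COM';
GRANT CREATE SESSION TO jdoe;
-- e.g. after a realm change
ALTER USER jdoe IDENTIFIED BY KERBEROS PRINCIPAL 'jdoe@NEW.EXAMPLE.COM';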
View full article
Question
Is the ROWID always assigned in ascending order? Is it possible to determine the sequence of inserted records with the ROWID?

Answer
The ROWIDs of a table are managed by the DBMS. They are assigned in ascending order per node and stay distinct within a table; for different tables, they can be the same. DML statements such as UPDATE, DELETE, TRUNCATE or MERGE may internally reorder data storage, invalidating and reassigning all ROWIDs. In contrast, structural table changes such as adding a column leave the ROWIDs unchanged. Altering the distribution keys of a table or reorganizing it will certainly reassign ROWID values.

Therefore the ROWID cannot be used to determine the exact sequence or age of a record; it is designed as a short-term identifier for rows, for example for duplicate elimination. The ROWID pseudo column can only be used on table objects, not on views or subselects. If you try it on a view, you will receive an appropriate error message.

Additional References
ROWID Syntax
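As an example of the duplicate-elimination use case mentioned above (hypothetical schema, table, and key column), ROWID lets you keep exactly one row per key within a single statement:

DELETE FROM my_schema.t
WHERE ROWID NOT IN (
    SELECT MIN(ROWID) FROM my_schema.t GROUP BY key_col
);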
View full article
This article shows you how to synchronize LDAP groups with roles and users in the database.
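The building blocks such a synchronization script typically generates look like this (a hedged sketch with made-up names; the full article covers how the group memberships are read from LDAP):

CREATE USER jdoe IDENTIFIED AT LDAP AS 'cn=jdoe,ou=people,dc=example,dc=com';
CREATE ROLE ldap_analysts;
GRANT ldap_analysts TO jdoe;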
View full article
This article describes a few approaches to dealing with multi-tenant databases.
View full article
Background
You can access database backups virtually. The scenario is:
- Extract tables/data from a backup
- If the database is configured to use all available memory, the DB RAM needs to be decreased before creating a VR (virtual restore) instance. If your system has enough free memory available, proceed with step 2.
- Reducing the DB RAM requires a short downtime of the production database

Prerequisites
- Online backup; Virtual Restore access is not supported for remote backups
- Enough free disk space for an EXAStorage DATA volume (used for metadata)
- Unused Main Memory (DB RAM)
- The VR instance needs the same number of active nodes as the backup
- VR instance database port, e.g. TCP 9563 (please be aware this port should be allowed through the firewall)

Recommendation
- At least 4GiB per node for a small amount of data, and at most 5% of the memory of the nodes
- The amount of VR DB RAM depends on the amount of data to be restored

Calculate the DB RAM for the VR instance. In this example:
- Each node has 31GiB of Main Memory
- The amount of data is very small (small table), thus 4GiB DB RAM per node is sufficient (see Prerequisites)
- The database is running on 4 active nodes and has 112GiB DB RAM

Number of active nodes * 4GiB = DB RAM of the VR instance: 4 * 4GiB = 16GiB

How to access database backups virtually

Step 1: Reduce the DB RAM to allocate RAM for the VR instance
DB RAM - DB RAM of the VR instance = reduced DB RAM: 112GiB - 16GiB = 96GiB

Step 1.1: Shut down the database
Select the database and click Shutdown in Services > EXASolution. Skip this step if the database is already shut down.

Step 1.2: Edit/adjust the DB RAM
Wait for the database to shut down, then edit the database settings:
- Click the "Edit" button
- Set the DB RAM to 96GiB
- Apply the changes
- Start the database

Step 2: Create a DATA volume for the VR instance
Go to the database properties and note down the number of active database nodes.

Step 2.1: Create the EXAStorage DATA volume
Click on EXAStorage and then on "Add Volume":
- Redundancy: 1
- Allowed Users: e.g. Admin
- Read-only Users: None
- Priority: 10
- Volume Type: DATA
- Volume Size: 20GiB
- Nodes List: active database nodes
- Number of Master Nodes: same as the number of active database nodes
- Block Size: None
- Disk: e.g. d03_storage

Step 3: Create the VR database instance
EXASolution - Add:
- DB name: VR
- Active Nodes: same as the number of active database nodes
- Node List: active database nodes
- EXAStorage DATA Volume: choose the previously created volume ID
- Network Interfaces: same as the active database
- Connection Port: e.g. 9563 (it must differ from production)
- DB RAM: 16GiB

Step 4: Restore the backup with Virtual Access
- Click on the newly created database
- Click on "Backups"
- Click on "Show foreign database backups"
- Select the backup you want to restore (ensure backup dependencies are fulfilled)
- Restore Type: Virtual Access
- Click on Restore; this starts the VR instance
- Wait for the database to come online

Step 5: Example import/export of data
IMPORT:
CREATE OR REPLACE TABLE SCHEMA.TABLE AS SELECT * FROM (IMPORT FROM EXA AT '<connection-string>:<port>' USER 'myuser' IDENTIFIED BY 'mypass' TABLE SCHEMA.TABLE);
EXPORT:
EXPORT SCHEMA.TABLE INTO EXA AT '<connection-string>:<port>' USER 'myuser' IDENTIFIED BY 'mypass' TABLE SCHEMA.TABLE;

Step 6: Clean up the VR database instance
- Shut down the VR instance
- Delete the VR instance
- Delete the EXAStorage DATA volume of the VR instance
- Afterward, increase the DB RAM of the production database again
View full article
This article explains the difference between temporary tables, views, and CTEs.
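A minimal, hedged contrast of the three concepts (hypothetical schema and tables): a view is a persistent named query, a CTE only exists for the duration of a single statement, and an ad-hoc working table can be emulated with a regular table that is dropped afterwards.

-- view: persistent named query, no data stored
CREATE VIEW my_schema.v_recent_orders AS
    SELECT * FROM my_schema.orders WHERE order_date > CURRENT_DATE - 30;
-- CTE: visible only within this one statement
WITH recent AS (SELECT * FROM my_schema.orders WHERE order_date > CURRENT_DATE - 30)
SELECT COUNT(*) FROM recent;
-- ad-hoc working table: materialized, must be dropped explicitly
CREATE TABLE my_schema.tmp_recent AS
    SELECT * FROM my_schema.orders WHERE order_date > CURRENT_DATE - 30;
DROP TABLE my_schema.tmp_recent;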
View full article
NULL and nil have two different meanings in the context of Lua Programming. This article explains the difference
View full article
SOL: Metadata Backup

Background / How it works
The file contains several scripts which copy the DDL for all objects in the database. It also contains the scripts necessary to restore the metadata. If the config file is set up correctly, backup.sh performs the following tasks:
- Creates the specified directory. This is where all of the files will be stored.
- Connects to the database using an Exaplus profile (which must be created beforehand).
- Once connected, the script creates a schema "BACKUP_SCRIPTS" and 2 scripts which are used in the backup process.
- EXPORT statements are generated using the database script "BACKUP_SYS" for several system tables which can be referenced later on. The CSV files are saved in the './ddl/' path.
- A database script "restore_sys.sql" is created and saved in the './ddl/' path that includes all the commands necessary to restore the system tables into a new "SYS_OLD" schema.
- The script executes the database script "METADATA_BACKUP" and creates DDL for all database objects, including schemas, tables, views, users, and roles. Limitations of the script are listed below. Specifically, this script reads data from system tables and creates the necessary CREATE statements, which can be executed at a later time. The script creates a new schema, stores the data in a table in this new schema, and then exports the table contents into an SQL file to prevent formatting errors. After the export, the schema is dropped.
- The DDL and CSVs of the old tables are compressed and saved as a .tar.gz file in the './backups' directory, or in a different location if "EXTERNAL_DIR" is set in the "config" file.

Limitations
Creating DDL based on system tables is not perfect and has some limitations. To safeguard against incorrect DDL creation, the script also saves the system tables which are used in the script. If a DDL is not created perfectly or if you discover imperfections/errors, you can query the system tables directly and create your own DDL.
- Invalid (outdated) views and their DCL will not be created
- Views and functions for which the schema or the object itself was renamed still contain the original object names and may cause an error on DDL execution
- Comments at the end of a view text may cause problems
- The GRANTOR of all privileges will be the user that runs the SQL code returned by the script
- Passwords of all users will be 'Start123', except for connections, where passwords will be left empty
- Functions with dependencies will not be created in the appropriate order
- UDF scripts with delimited identifiers (in- or output) will cause an error on DDL execution
- If authenticating using Kerberos (< 6.0.8), please alter lines 155 and 156 of the script BI_METADATA_BACKUPV2
- Comments containing apostrophes (single quotes) will cause an error on DDL execution

Note: This solution has been updated to include new permissions/rights in version 6.1.

Prerequisites
- Linux system
- Exaplus installed
- The database is already created and can be reached from the system you are running the scripts on
- The database user you connect with has the following system privileges: CREATE SCHEMA, CREATE TABLE, CREATE SCRIPT, SELECT ANY DICTIONARY

How to create a Metadata Backup

Step 1
Save the attached tar.gz file to a directory of your choice. The only requirements are that the system is Linux-based and that the Exaplus command line is installed.
Step 2
Unzip the file:
tar xf metabackup_vYYYYMMDD.tar.gz

Step 3
Create an Exaplus profile with all of the connection details, including database user, password, and connection string. Details on the parameters for Exaplus can be found in the user manual. Example:
/usr/opt/EXASuite-6/EXASolution-6.0.10/bin/Console/exaplus -u [YOUR_USER] -p [YOUR_PASSWORD] -c [DB IP Address]:8563 -wp metadata_backup_profile

Step 4
Change directory to the newly created folder:
cd metabackup

Step 5
Edit the config file with the help of the following information:

Global section:
- EXAPLUS = path to Exaplus, excluding the exaplus binary itself. For the example above, it would be '/usr/opt/EXASuite-6/EXASolution-6.0.10/bin/Console'
- PROFILENAME = the name of the profile created in the previous step (metadata_backup_profile)

Backup section:
- EXTERNAL_DIR = the path where the metadata backup will be stored, if specified. Can be an external filesystem; it should be mounted or available beforehand.
- SYSPATH = the path where metabackup_vYYYYMMDD.tar.gz was extracted to
- DB_NAME = the database name
- EXAPLUS_TIMEOUT = timeout for Exaplus (default 300 seconds). If you want to prevent long-running queries, set the timeout accordingly. Please note that for very large databases it might take over 5 minutes to run all of the scripts, so set the timeout higher.
- EXAPLUS_RECONNECT = reconnect tries for Exaplus if the connection fails. The default value is '1'.

Step 6
Make the .sh files executable:
chmod 755 *.sh

Step 7
Run backup.sh:
./backup.sh or bash backup.sh

SOL: Metadata Restore

Background
This script imports the CSVs created in the backup and runs all of the CREATE statements. The script opens an Exaplus session, creates a schema called 'SYS_OLD' containing the same system tables that were created in the backup, and then imports the CSVs into these tables. All of the CREATE statements are executed, which restores the 'structure' of the database; however, all tables are empty.
Note: During execution, the owner of all objects is the user running the script. At the end of the script, the owner of all schemas is changed to match the correct owner. To monitor errors, profiling is enabled by default. You can search through EXA_DBA_PROFILE_LAST_DAY to find commands which were rolled back (an example query is shown at the end of this section).

Limitations
- If the database is not empty, some objects may fail to be created if they already exist in the database. Objects will not be overwritten.
- Limitations of the create DDL script may cause errors during the CREATE statements. Please check the restore log, profiling, or auditing to identify statements which were not created successfully.
- If restoring to a database running a different version than the database from the backup, the IMPORT of old sys tables may fail due to different columns.

Prerequisites
- Linux system
- Exaplus command line
- The database is already created and can be reached from the system you are running the scripts on
- It is recommended to start the database with auditing ENABLED
- The database user will need extensive CREATE privileges. It is recommended that the user running the scripts has DBA privileges, as the following commands will be carried out: CREATE SCHEMA, TABLE, VIEW, SCRIPT, CONNECTION, etc., and GRANT

How to apply a Metadata Restore

Step 1
The restore script can be run from the same system you ran the backup on or from a different system. If running the restore on a new system, please follow steps 1-4 from the backup instructions to set up the files.
NOTE: When setting up your Exaplus profile, please enter the information for the database you are restoring to.

Step 2
Unpack the backup tar into the directory of your choice:
tar xf ddl-backup-DB_NAME-YYYY-MM-DD-HH-Mi-SS.tar.gz

Step 3
Edit the config file with the following information:

Backup section:
- SYSPATH = the path where metabackup.tar.gz was extracted to
- DB_NAME = the database name

Restore section:
- BACKUP_RESTORE_PATH = the path that you unpacked the backup file to (should end with '/ddls')
- RESTORE_SYS_SQL_PATH = the path containing the restore script

Step 4
Run restore.sh:
./restore.sh or bash restore.sh

Additional References
https://www.exasol.com/support/secure/attachment/78805/metadatabackup_v20190418.tar.gz
https://community.exasol.com/t5/database-features/create-ddl-for-the-entire-database/ta-p/1417
https://www.exasol.com/support/browse/IDEA-371
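Following up on the note above about using profiling or auditing to find statements that failed during the restore, a query along these lines (a sketch, assuming auditing is enabled) lists the unsuccessful statements:

SELECT session_id, stmt_id, command_name, error_text, sql_text
FROM exa_dba_audit_sql
WHERE success = FALSE
ORDER BY session_id, stmt_id;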
View full article
As Exasol only supports the transaction isolation level "SERIALIZABLE", this article looks at transactions and potential transaction conflicts.
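A schematic, hedged illustration with a hypothetical table (the exact outcome, waiting for a commit versus a forced rollback, depends on the operations involved and their order):

-- Session A
INSERT INTO app.accounts VALUES (42, 100);   -- transaction stays open, no COMMIT yet
-- Session B, concurrently
SELECT * FROM app.accounts;                  -- may block with WAIT FOR COMMIT on Session A
-- Session A
COMMIT;                                      -- Session B's read can now proceed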
View full article
This article gives suggestions on how to minimize transaction conflicts.
View full article
Background
The Exasol R SDK offers functionality to interact with the Exasol database from R programs. It extends RODBC, offers fast data transfer between Exasol and R (exa.readData() and exa.writeData()), and makes it convenient to run parts of your R code in parallel inside Exasol using R UDF functions behind the scenes (exa.createScript()).

Example (digit recognition)
We will use a well-known machine learning algorithm for classifying handwritten digits. It uses the randomForest library, which creates decision trees for any kind of image. For a given training set and the actual result for each element, the library returns a decision tree that can be applied to other, similar images: the test data. The data is stored pixel-wise (one value for each pixel). This how-to focuses on how to use the Exasol R SDK, not on the details of the algorithm or of R. Further information on the algorithm and the data can be found here. The example was chosen to show how easy it is to read or write data from or to Exasol and how easy it is to deploy a given algorithm as an Exasol UDF.

Prerequisites
The Exasol R SDK and randomForest need to be installed on the system used. The attached files can be used to set up your Exasol database for the example:
- Run 01_DDL.sql (creates a schema, a table for the train data set, and a table for the test data set)
- Place the csv files (train.csv and test.csv) at an appropriate place and change the path to those files in 02_IMPORT.sql
- Run 02_IMPORT.sql (imports the train and test data sets)

How to use the Exasol R SDK

R implementation (reading the data sets from an Exasol database)
The attached file 03_RandomForest.r shows how to use exa.readData() and exa.writeData() to interact with the database from R. As of Exasol 6.2.x, one line in 03_RandomForest.r needs to be modified to account for R version differences; specifically, the parameter version = 2 is added to the save command. You are welcome to skip this modification and come back to revisit it if you run into issues.
Before: save(rf, file = "<path to rf.dat>")
After: save(rf, version = 2, file = "<path to rf.dat>")
Additional installation notes for 03_RandomForest.r: installing the needed packages.

The general procedure is:

Step 1: Connect to Exasol and read the train and test data
This requires an ODBC DSN (named "solo" in this example).

Step 2: Read the train and test data from the database and prepare them for further processing
- The test data is enriched by an image id
- The train data is enriched by a label showing the actual handwritten digit

Step 3: Use randomForest() to create a decision tree and print the prediction
- In this example, the decision forest is kept and tested on the test data
- Extract the predictions for the test data and print them

Step 4: Write the decision tree to Exasol
We will use a bucket called "binary" to store the decision forest. Detailed information on EXABucketFS and buckets can be found in https://community.exasol.com/t5/environment-management/how-to-create-an-exabucketfs-service-and-bucket/ta-p/1515
Important deprecation notice (this item does not apply to Exasol 6+ and is only left in for archival purposes): In version 5 it is not possible to store binary data in the Exasol database. One possibility is to serialize the decision forest and store it in a table. Since the VARCHAR type in Exasol is limited to 2 million characters, the resulting string should not exceed 2 million characters.
curl -v -X PUT -T rf.dat http: //w:<write_pw>@<ip of one cluster node>:<port>/binary/rf.dat Step 5. Deploying a UDF The other file ( 05_PREDICTINDB.sql ) deploys the functionality to a UDF in EXASOL. We create a variadic UDF receiving an arbitrary amount of input parameters (all pixels) The output is the predicted digit. Load forest from the bucket Create a dataframe for all input columns (pixels) Use   predict()   and emit the predictions Step 6. Using UDF within EXASolution Now it is possible to use the deployed function within EXASolution like any other built-in function: select rf.predict(Model_str, PIXEL0,PIXEL1,PIXEL2,PIXEL3,PIXEL4,PIXEL5,PIXEL6,PIXEL7,PIXEL8,PIXEL9,PIXEL10,PIXEL11,PIXEL12,PIXEL13,PIXEL14,PIXEL15,PIXEL16,PIXEL17,PIXEL18,PIXEL19,PIXEL20,PIXEL21,PIXEL22,PIXEL23,PIXEL24,PIXEL25,PIXEL26,PIXEL27,PIXEL28,PIXEL29,PIXEL30,PIXEL31,PIXEL32,PIXEL33,PIXEL34,PIXEL35,PIXEL36,PIXEL37,PIXEL38,PIXEL39,PIXEL40,PIXEL41,PIXEL42,PIXEL43,PIXEL44,PIXEL45,PIXEL46,PIXEL47,PIXEL48,PIXEL49,PIXEL50,PIXEL51,PIXEL52,PIXEL53,PIXEL54,PIXEL55,PIXEL56,PIXEL57,PIXEL58,PIXEL59,PIXEL60,PIXEL61,PIXEL62,PIXEL63,PIXEL64,PIXEL65,PIXEL66,PIXEL67,PIXEL68,PIXEL69,PIXEL70,PIXEL71,PIXEL72,PIXEL73,PIXEL74,PIXEL75,PIXEL76,PIXEL77,PIXEL78,PIXEL79,PIXEL80,PIXEL81,PIXEL82,PIXEL83,PIXEL84,PIXEL85,PIXEL86,PIXEL87,PIXEL88,PIXEL89,PIXEL90,PIXEL91,PIXEL92,PIXEL93,PIXEL94,PIXEL95,PIXEL96,PIXEL97,PIXEL98,PIXEL99,PIXEL100,PIXEL101,PIXEL102,PIXEL103,PIXEL104,PIXEL105,PIXEL106,PIXEL107,PIXEL108,PIXEL109,PIXEL110,PIXEL111,PIXEL112,PIXEL113,PIXEL114,PIXEL115,PIXEL116,PIXEL117,PIXEL118,PIXEL119,PIXEL120,PIXEL121,PIXEL122,PIXEL123,PIXEL124,PIXEL125,PIXEL126,PIXEL127,PIXEL128,PIXEL129,PIXEL130,PIXEL131,PIXEL132,PIXEL133,PIXEL134,PIXEL135,PIXEL136,PIXEL137,PIXEL138,PIXEL139,PIXEL140,PIXEL141,PIXEL142,PIXEL143,PIXEL144,PIXEL145,PIXEL146,PIXEL147,PIXEL148,PIXEL149,PIXEL150,PIXEL151,PIXEL152,PIXEL153,PIXEL154,PIXEL155,PIXEL156,PIXEL157,PIXEL158,PIXEL159,PIXEL160,PIXEL161,PIXEL162,PIXEL163,PIXEL164,PIXEL165,PIXEL166,PIXEL167,PIXEL168,PIXEL169,PIXEL170,PIXEL171,PIXEL172,PIXEL173,PIXEL174,PIXEL175,PIXEL176,PIXEL177,PIXEL178,PIXEL179,PIXEL180,PIXEL181,PIXEL182,PIXEL183,PIXEL184,PIXEL185,PIXEL186,PIXEL187,PIXEL188,PIXEL189,PIXEL190,PIXEL191,PIXEL192,PIXEL193,PIXEL194,PIXEL195,PIXEL196,PIXEL197,PIXEL198,PIXEL199,PIXEL200,PIXEL201,PIXEL202,PIXEL203,PIXEL204,PIXEL205,PIXEL206,PIXEL207,PIXEL208,PIXEL209,PIXEL210,PIXEL211,PIXEL212,PIXEL213,PIXEL214,PIXEL215,PIXEL216,PIXEL217,PIXEL218,PIXEL219,PIXEL220,PIXEL221,PIXEL222,PIXEL223,PIXEL224,PIXEL225,PIXEL226,PIXEL227,PIXEL228,PIXEL229,PIXEL230,PIXEL231,PIXEL232,PIXEL233,PIXEL234,PIXEL235,PIXEL236,PIXEL237,PIXEL238,PIXEL239,PIXEL240,PIXEL241,PIXEL242,PIXEL243,PIXEL244,PIXEL245,PIXEL246,PIXEL247,PIXEL248,PIXEL249,PIXEL250,PIXEL251,PIXEL252,PIXEL253,PIXEL254,PIXEL255,PIXEL256,PIXEL257,PIXEL258,PIXEL259,PIXEL260,PIXEL261,PIXEL262,PIXEL263,PIXEL264,PIXEL265,PIXEL266,PIXEL267,PIXEL268,PIXEL269,PIXEL270,PIXEL271,PIXEL272,PIXEL273,PIXEL274,PIXEL275,PIXEL276,PIXEL277,PIXEL278,PIXEL279,PIXEL280,PIXEL281,PIXEL282,PIXEL283,PIXEL284,PIXEL285,PIXEL286,PIXEL287,PIXEL288,PIXEL289,PIXEL290,PIXEL291,PIXEL292,PIXEL293,PIXEL294,PIXEL295,PIXEL296,PIXEL297,PIXEL298,PIXEL299,PIXEL300,PIXEL301,PIXEL302,PIXEL303,PIXEL304,PIXEL305,PIXEL306,PIXEL307,PIXEL308,PIXEL309,PIXEL310,PIXEL311,PIXEL312,PIXEL313,PIXEL314,PIXEL315,PIXEL316,PIXEL317,PIXEL318,PIXEL319,PIXEL320,PIXEL321,PIXEL322,PIXEL323,PIXEL324,PIXEL325,PIXEL326,PIXEL327,PIXEL328,PIXEL329,PIXEL330,PIXEL331,PIXEL332,PIXEL333,PIXEL334,PIXEL335,PIXEL336,PIXEL337,PIXEL338,PIX
EL339,PIXEL340,PIXEL341,PIXEL342,PIXEL343,PIXEL344,PIXEL345,PIXEL346,PIXEL347,PIXEL348,PIXEL349,PIXEL350,PIXEL351,PIXEL352,PIXEL353,PIXEL354,PIXEL355,PIXEL356,PIXEL357,PIXEL358,PIXEL359,PIXEL360,PIXEL361,PIXEL362,PIXEL363,PIXEL364,PIXEL365,PIXEL366,PIXEL367,PIXEL368,PIXEL369,PIXEL370,PIXEL371,PIXEL372,PIXEL373,PIXEL374,PIXEL375,PIXEL376,PIXEL377,PIXEL378,PIXEL379,PIXEL380,PIXEL381,PIXEL382,PIXEL383,PIXEL384,PIXEL385,PIXEL386,PIXEL387,PIXEL388,PIXEL389,PIXEL390,PIXEL391,PIXEL392,PIXEL393,PIXEL394,PIXEL395,PIXEL396,PIXEL397,PIXEL398,PIXEL399,PIXEL400,PIXEL401,PIXEL402,PIXEL403,PIXEL404,PIXEL405,PIXEL406,PIXEL407,PIXEL408,PIXEL409,PIXEL410,PIXEL411,PIXEL412,PIXEL413,PIXEL414,PIXEL415,PIXEL416,PIXEL417,PIXEL418,PIXEL419,PIXEL420,PIXEL421,PIXEL422,PIXEL423,PIXEL424,PIXEL425,PIXEL426,PIXEL427,PIXEL428,PIXEL429,PIXEL430,PIXEL431,PIXEL432,PIXEL433,PIXEL434,PIXEL435,PIXEL436,PIXEL437,PIXEL438,PIXEL439,PIXEL440,PIXEL441,PIXEL442,PIXEL443,PIXEL444,PIXEL445,PIXEL446,PIXEL447,PIXEL448,PIXEL449,PIXEL450,PIXEL451,PIXEL452,PIXEL453,PIXEL454,PIXEL455,PIXEL456,PIXEL457,PIXEL458,PIXEL459,PIXEL460,PIXEL461,PIXEL462,PIXEL463,PIXEL464,PIXEL465,PIXEL466,PIXEL467,PIXEL468,PIXEL469,PIXEL470,PIXEL471,PIXEL472,PIXEL473,PIXEL474,PIXEL475,PIXEL476,PIXEL477,PIXEL478,PIXEL479,PIXEL480,PIXEL481,PIXEL482,PIXEL483,PIXEL484,PIXEL485,PIXEL486,PIXEL487,PIXEL488,PIXEL489,PIXEL490,PIXEL491,PIXEL492,PIXEL493,PIXEL494,PIXEL495,PIXEL496,PIXEL497,PIXEL498,PIXEL499,PIXEL500,PIXEL501,PIXEL502,PIXEL503,PIXEL504,PIXEL505,PIXEL506,PIXEL507,PIXEL508,PIXEL509,PIXEL510,PIXEL511,PIXEL512,PIXEL513,PIXEL514,PIXEL515,PIXEL516,PIXEL517,PIXEL518,PIXEL519,PIXEL520,PIXEL521,PIXEL522,PIXEL523,PIXEL524,PIXEL525,PIXEL526,PIXEL527,PIXEL528,PIXEL529,PIXEL530,PIXEL531,PIXEL532,PIXEL533,PIXEL534,PIXEL535,PIXEL536,PIXEL537,PIXEL538,PIXEL539,PIXEL540,PIXEL541,PIXEL542,PIXEL543,PIXEL544,PIXEL545,PIXEL546,PIXEL547,PIXEL548,PIXEL549,PIXEL550,PIXEL551,PIXEL552,PIXEL553,PIXEL554,PIXEL555,PIXEL556,PIXEL557,PIXEL558,PIXEL559,PIXEL560,PIXEL561,PIXEL562,PIXEL563,PIXEL564,PIXEL565,PIXEL566,PIXEL567,PIXEL568,PIXEL569,PIXEL570,PIXEL571,PIXEL572,PIXEL573,PIXEL574,PIXEL575,PIXEL576,PIXEL577,PIXEL578,PIXEL579,PIXEL580,PIXEL581,PIXEL582,PIXEL583,PIXEL584,PIXEL585,PIXEL586,PIXEL587,PIXEL588,PIXEL589,PIXEL590,PIXEL591,PIXEL592,PIXEL593,PIXEL594,PIXEL595,PIXEL596,PIXEL597,PIXEL598,PIXEL599,PIXEL600,PIXEL601,PIXEL602,PIXEL603,PIXEL604,PIXEL605,PIXEL606,PIXEL607,PIXEL608,PIXEL609,PIXEL610,PIXEL611,PIXEL612,PIXEL613,PIXEL614,PIXEL615,PIXEL616,PIXEL617,PIXEL618,PIXEL619,PIXEL620,PIXEL621,PIXEL622,PIXEL623,PIXEL624,PIXEL625,PIXEL626,PIXEL627,PIXEL628,PIXEL629,PIXEL630,PIXEL631,PIXEL632,PIXEL633,PIXEL634,PIXEL635,PIXEL636,PIXEL637,PIXEL638,PIXEL639,PIXEL640,PIXEL641,PIXEL642,PIXEL643,PIXEL644,PIXEL645,PIXEL646,PIXEL647,PIXEL648,PIXEL649,PIXEL650,PIXEL651,PIXEL652,PIXEL653,PIXEL654,PIXEL655,PIXEL656,PIXEL657,PIXEL658,PIXEL659,PIXEL660,PIXEL661,PIXEL662,PIXEL663,PIXEL664,PIXEL665,PIXEL666,PIXEL667,PIXEL668,PIXEL669,PIXEL670,PIXEL671,PIXEL672,PIXEL673,PIXEL674,PIXEL675,PIXEL676,PIXEL677,PIXEL678,PIXEL679,PIXEL680,PIXEL681,PIXEL682,PIXEL683,PIXEL684,PIXEL685,PIXEL686,PIXEL687,PIXEL688,PIXEL689,PIXEL690,PIXEL691,PIXEL692,PIXEL693,PIXEL694,PIXEL695,PIXEL696,PIXEL697,PIXEL698,PIXEL699,PIXEL700,PIXEL701,PIXEL702,PIXEL703,PIXEL704,PIXEL705,PIXEL706,PIXEL707,PIXEL708,PIXEL709,PIXEL710,PIXEL711,PIXEL712,PIXEL713,PIXEL714,PIXEL715,PIXEL716,PIXEL717,PIXEL718,PIXEL719,PIXEL720,PIXEL721,PIXEL722,PIXEL723,PIXEL724,PIXEL725,PIXEL726,PIXEL727,PIXEL728,PIXEL729,PIXEL730,PIXEL731,PIXEL732,PIXEL733,PIX
EL734,PIXEL735,PIXEL736,PIXEL737,PIXEL738,PIXEL739,PIXEL740,PIXEL741,PIXEL742,PIXEL743,PIXEL744,PIXEL745,PIXEL746,PIXEL747,PIXEL748,PIXEL749,PIXEL750,PIXEL751,PIXEL752,PIXEL753,PIXEL754,PIXEL755,PIXEL756,PIXEL757,PIXEL758,PIXEL759,PIXEL760,PIXEL761,PIXEL762,PIXEL763,PIXEL764,PIXEL765,PIXEL766,PIXEL767,PIXEL768,PIXEL769,PIXEL770,PIXEL771,PIXEL772,PIXEL773,PIXEL774,PIXEL775,PIXEL776,PIXEL777,PIXEL778,PIXEL779,PIXEL780,PIXEL781,PIXEL782,PIXEL783) from rf.test cross join rfmodels.models;

PRED
0
0
3
0
3
3
0
0
1
7
View full article
This article summarizes Exasol's basic resource management, covering CPU, RAM, network, disks, runtime, and priority groups.
View full article
If you want to implement Column Level Security in addition to Row Level Security, this example shows you how this can work using Virtual Schemas
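The full article builds this on Virtual Schemas; purely as a contrast, the classic view-based variant of column-level security (hypothetical names) simply exposes a column subset and grants access to the view only:

CREATE VIEW secure.customers_limited AS
    SELECT customer_id, city FROM raw.customers;   -- sensitive columns omitted
GRANT SELECT ON secure.customers_limited TO role_analyst;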
View full article