Team Exasol



Other, possibly better-suited solutions for concurrency testing:

  • JMeter
  • CN's EXAloadgenerator (check cprep repository / ask presales)

Advantages of this solution:

  • No extra software required
  • Fully scriptable / customizable
  • Uses exaplus and file input for SQL:
    • autocommit on/off
    • batch mode
    • multiple statements per connection
  • Synchronization after login possible


Limitations:

  • No loops or time-based analysis (#queries in a fixed time window) after synchronization
  • exaplus (non-GUI) uses dedicated memory per instance...


The approach:

  • Use multiple EXAplus instances (this puts memory load on the local machine)
  • Use a FIFO for manual "start" triggering
  • IMPORT from that local FIFO from all EXAplus instances for synchronization
  • Use screen for easy / automated parallel execution
  • Use TIMER in EXAplus for query measurement (or go auditing)

Step by Step

a - FIFO creation

#> mkfifo the_fifo.csv

This will create a special file with no storage backend. A read from the file blocks until data is written into it (using echo, cat, ...), and the reader receives whatever gets written.
In addition, most applications send an EOF / 'file closed' event when they finish writing, and that event is delivered to the reading process(es) as well.
This makes the FIFO an excellent semaphore / mutex to wait on.
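A quick way to see this behavior in plain shell (the file names demo_fifo and out.txt are made up for this demo):

```shell
mkfifo demo_fifo                # FIFO: no storage backend, just a rendezvous point
cat demo_fifo > out.txt &       # reader blocks until a writer opens the FIFO
echo "go" > demo_fifo           # write + implicit close: reader gets the data and EOF
wait                            # the backgrounded cat has already exited on its own
cat out.txt                     # -> go
rm demo_fifo out.txt
```

Note that the reader exits without being killed: the writer closing the FIFO is enough.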

b - screen setup

#> screen

This will open a new screen session for you, with the default command key CTRL+A (can be changed using the -e option).
Enter CTRL+A " (double quote) to get a list of windows in this screen session.

Within screen, you can open a new window (+shell) using CTRL+A-C, or by using the screen command itself:

#> screen exaplus

This will create a new window running the given command, and return control to the local shell. Windows created this way will auto-close when the application exits, so don't test it with echo/cat/...

This will allow us to just "spam" many exaplus instances (one per screen window) without having to set up any job control.

c - The "sync.sql" file

-- We expect the actual SQL file as parameter
define FILE_NAME=&1;
-- start logging
spool &FILE_NAME..txt;

-- change whatever settings you need

-- synchronize all clients here
IMPORT INTO ( tmp varchar(20) ) -- output to console
   FROM LOCAL CSV FILE 'the_fifo.csv';

-- Time the actual script execution (EXAplus TIMING; @ runs the script file)
timing start;
@&FILE_NAME;
timing stop;

spool off;

The fun part here is that the IMPORT will try to read from our FIFO and will block until data is written. It sends the received data to the client, so no database table or other server-side resource is required, and there are no transaction conflicts or locks.

Except for load on the local machine and the time required for SQL parsing, this really starts all streams at the same time, without staggered delays.
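The single-writer release works for any number of blocked readers at once, which is exactly what the parallel IMPORTs rely on. A plain-shell sketch (sync_fifo and reader_N.txt are made-up names; three cat processes stand in for three exaplus instances):

```shell
mkfifo sync_fifo
for i in 1 2 3; do
    cat sync_fifo > reader_$i.txt &   # each "client" blocks on the FIFO
done
sleep 1                               # let all readers reach the blocking open/read
echo "start" > sync_fifo              # one write releases ALL readers (EOF on close)
wait                                  # all three cats exit together
cat reader_1.txt reader_2.txt reader_3.txt   # only ONE reader actually got the data
rm sync_fifo reader_1.txt reader_2.txt reader_3.txt
```

Which reader receives the "start" line is nondeterministic; the others simply see EOF, which is all the synchronization needs.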

d - Sample shell execution

#> # enter screen environment
#> screen
#> # create fifo
#> mkfifo the_fifo.csv
#> # spam exaplus instances
#> for FILE in <some wildcard>; do
>       screen exaplus --profile db_login -f sync.sql -- "$FILE"
>   done
#> # check log files and wait for all exaplus instances to "get stuck" on the import statement
#> # add some data to our fifo, it will trigger all the imports (but only one of them gets data)
#> echo "start" >> the_fifo.csv
#> # now just wait for all exaplus instances to exit (can use screen window list)
#> # cleanup
#> rm the_fifo.csv
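If you prefer server-side measurement over the TIMER output ("or go auditing" above), the run can be evaluated afterwards from the audit tables. A sketch, assuming auditing is enabled on the database; the table and column names below (EXA_DBA_AUDIT_SQL, DURATION, START_TIME, STOP_TIME) should be checked against the system tables of your Exasol version:

-- per-statement duration of the test run
SELECT session_id, start_time, stop_time, duration, sql_text
  FROM exa_dba_audit_sql
 WHERE start_time > 'YYYY-MM-DD HH:MM:SS'   -- narrow down to your test run
 ORDER BY start_time;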
