
Open Resultsets exceeds 250 in Query Wrapper Script

martijn_dib
Contributor

Hi Exasol-Experts!

 

At the moment I am experiencing some issues with the Query Wrapper in a script that basically just loops over the Delta Import script using qw:query([[execute script schema.delta_import(params)]]).

For a few weeks now I have been getting the error SQL Error [43000]: "[p]query function: number of open resultsets exceeds 250" caught in script "ETL"."QUERY_WRAPPER" at line 19 (Session: XXX), which suggests either an infinitely running loop or global variables that are never flushed. However, the error only fires when I run the script over one specific schema. For any other schema the script runs just fine (and other schemas include more tables and more rows per table).
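For reference, here is a minimal sketch of what the looping script looks like (the script name, the table list, and the delta_import parameters are placeholders, not our exact code):

CREATE OR REPLACE LUA SCRIPT etl.delta_import_loop RETURNS TABLE AS
    import('etl.query_wrapper', 'qw')
    wrapper = qw.new('etl.job_log', 'etl.job_details', 'delta_import_loop')
    -- placeholder list; in reality the table names come from the source schema
    local tables = {'TAB_A', 'TAB_B', 'TAB_C'}
    for _, tab in ipairs(tables) do
        -- each iteration runs the delta import through the wrapper
        wrapper:query([[execute script schema.delta_import(']] .. tab .. [[')]])
    end
    return wrapper:finish()
/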

 

I have been running numerous tests trying to reset variables, but I cannot seem to find where it goes wrong. Any smart people around who know what it might be?

 

thanks!


7 REPLIES

SusanneK
Contributor

Hi @martijn_dib, we also had this issue a lot in the past. In my experience, the error message is misleading. We could solve the issue by checking all the privileges of the executing user. In our case the issue occurred only with one dedicated user; a user with more privileges could run the script without problems. So maybe you could check whether your user needs more privileges. Since we have many users for single applications, we created a query_wrapper role to prevent this issue in the future. Every user who needs to use the query_wrapper should be granted this role:

 

CREATE ROLE etl_query_wrapper;

GRANT EXECUTE ON etl.query_wrapper TO etl_query_wrapper;
GRANT SELECT, INSERT, UPDATE ON etl.job_log TO etl_query_wrapper;
GRANT SELECT, INSERT ON etl.job_details TO etl_query_wrapper;

GRANT etl_query_wrapper TO xxxxx;

I am curious whether this works for you as well 🙂

Kind regards, Susanne

 

martijn_dib
Contributor

Hi Susanne, 

Thanks for the response! Unfortunately, my user has DBA rights in our instance, and multiple other users checked my findings and arrived at the same conclusions.

 

This also would not explain why the error only occurs for a specific source schema: if I run the script for the full source, it fails; if I run it for all schemas except one, everything runs smoothly.

Your comments did get me thinking that maybe some rights have been changed in the source schema leading to this error. I will go and check this with the DBAs of the source database. I will get back to you once I get word from them. In the meantime, any other suggestions are welcome!

 

Cheers

exa-Aleksandr
Team Exasol

Hi @martijn_dib,

You could also check the EXA_DBA_AUDIT_SQL contents for the particular Query Wrapper session to get a hint on which statement is being executed more frequently than expected.
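For example, something like this (a sketch; auditing needs to be enabled, and the session ID comes from your error message):

SELECT sql_text, COUNT(*) AS executions
FROM exa_dba_audit_sql
WHERE session_id = 1234567890  -- replace with the session ID from the error
GROUP BY sql_text
ORDER BY executions DESC
LIMIT 20;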

martijn_dib
Contributor

Hi @exa-Aleksandr,

This has been the help I needed. My conclusion is that the query wrapper hits a timeout on one specific table in that schema. The timeout triggers some logging in the query wrapper, which is then also cancelled due to the timeout. This results in a loop that overloads a variable in the logging process rather than telling me the query has timed out. The error message ended up throwing me off course.

 

Hopefully this thread will prevent others from facing similar issues.

 

Cheers and thanks for the help

 

 

ikozlovskiy
Padawan

That was exactly the case with our main ETL process: an overloaded variable caused by transaction blocks and timeouts. This can happen if different jobs run at the same time and use the query wrapper with the same logging tables. We solved the problem by creating dedicated query wrapper logging tables for the most demanding jobs, as sketched below.
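A minimal sketch of the approach (the _bigjob names are just illustrative; this assumes the standard job_log/job_details pair in the ETL schema):

CREATE TABLE etl.job_log_bigjob LIKE etl.job_log;
CREATE TABLE etl.job_details_bigjob LIKE etl.job_details;

The demanding job then points its wrapper at the dedicated pair, e.g. qw.new('etl.job_log_bigjob', 'etl.job_details_bigjob', 'big_job'), so its logging no longer competes with other jobs on the shared tables.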

exa-Aleksandr
Team Exasol

Hi @martijn_dib,

Do you have a simple example of how to reproduce such an error, similar to your case?

I mean a simple example like

create table test.some_tab(
col1 number
)
;
/

CREATE OR REPLACE LUA SCRIPT etl.example_script RETURNS TABLE AS
    import('etl.query_wrapper','qw')
    wrapper = qw.new('etl.job_log', 'etl.job_details', 'test_script')
    wrapper:query([[insert into test.some_tab values ('abc')]])
    return wrapper:finish()
/
;

EXECUTE SCRIPT etl.example_script();

The example above doesn't fail with the desired "Open Resultsets exceeds 250" error.

martijn_dib
Contributor

Hi Aleksandr, 

We use a script to run the delta_import script in a loop, which is basically a mass-production version of your script here. The big difference is that we have two tables that take considerable time to import from source to Exasol. These large tables resulted in queries that exceeded the query timeout.

The timeout cancellation of the query is then followed by a rather large list of query wrapper log entries in the SQL audit table. If you run the script as follows against a sufficiently large table, a similar error should occur.

create table test.some_tab(
col1 number
)
;
/

CREATE OR REPLACE LUA SCRIPT etl.example_script RETURNS TABLE AS
    import('etl.query_wrapper','qw')
    wrapper = qw.new('etl.job_log', 'etl.job_details', 'test_script')
    wrapper:query([[insert into test.some_tab select col1 from some_large_table]])
    return wrapper:finish()
/
;

EXECUTE SCRIPT etl.example_script();

I have now started working on different methods to import the two problematic tables. Having taken those two tables out of the loop, everything works just fine.
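For completeness: the timeout we ran into can also be adjusted per session (a sketch, assuming the limit comes from the QUERY_TIMEOUT setting and you are allowed to change it; the value is in seconds, and 0 disables the timeout):

ALTER SESSION SET QUERY_TIMEOUT = 0;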