Exasol Storage Reporting

Xpert

I wanted to start a thread on understanding Exasol storage reporting. I had a slight panic attack when I glanced at our storage usage in the Exasol Admin Console and it showed 1 GiB free on each active node:

[Screenshot: Exasol Admin Console storage view showing roughly 1 GiB free on each active node]

I believe this shows how much of the disk is allocated to the database software, not necessarily how much is actually in use. I then looked at the EXA_STATISTICS.EXA_DB_SIZE_DAILY table and noticed a dramatic increase in storage size on the 26th (screenshot below). We had a table modification that day that caused several of our views to generate a massive cross-join. We have jobs that build tables from these views; they were killed before they completed, but they generated massive temp tables in the background. In response, Exasol automatically allocated more of the physical storage to the database, so the USE_AVG percentage dropped dramatically, since it is computed against the allocated space. I believe restarting the database will resolve this oddity by clearing out the temp storage and re-computing the storage allocation.

So if I'm correct, I need a different way to determine actual storage usage than looking at the console. If you have a RAW-data-based license, you should keep an eye on the RAW_OBJECT_SIZE_AVG column versus your license limit. STORAGE_SIZE_AVG includes temp storage that may not be cleared out immediately. USE_AVG isn't very helpful, since its denominator isn't a fixed value based on either your license limit or your physical storage capacity; instead it is a moving value that can scale automatically with storage demand.

[Screenshot: EXA_STATISTICS.EXA_DB_SIZE_DAILY showing the jump in storage size and drop in USE_AVG on the 26th]
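For reference, something like the following is what I have in mind for tracking raw size against the license limit. It's only a rough sketch: the 1000 GiB limit is a placeholder for your own license value, and I'm assuming the INTERVAL_START and TEMP_SIZE_AVG columns are available in your version of EXA_DB_SIZE_DAILY (sizes there are reported in GiB).

SELECT interval_start,
       raw_object_size_avg,
       storage_size_avg,
       temp_size_avg,
       use_avg,
       -- placeholder license limit of 1000 GiB; substitute your own
       raw_object_size_avg / 1000 * 100 AS pct_of_license_limit
FROM exa_statistics.exa_db_size_daily
ORDER BY interval_start DESC
LIMIT 30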

Please reply if I've misunderstood anything, or if you have techniques for keeping an eye on storage utilization relative to license limits and physical storage limits.


3 REPLIES

Xpert

Yep, the disk space usage currently available in the statistics tables is not very useful. Interestingly enough, the reporting in ExaOperation is usually correct.

This is the formula / function in PyEXASOL that gets as close as possible to the disk space usage reported by ExaOperation: https://github.com/badoo/pyexasol/blob/master/pyexasol/ext.py#L237-L262

 

SELECT measure_time,
       (committed_size * redundancy + temp_swap_data) AS occupied_size,
       (device_size * redundancy + hdd_free) AS total_size
FROM "$EXA_STATS_DB_SIZE"
ORDER BY measure_time DESC
LIMIT 1

 

Xpert

Nice. I'm definitely building a report on this since it gives the data over time. Also, I love Exasol's "local." feature.

 

SELECT measure_time,
       (committed_size * redundancy + temp_swap_data) AS occupied_size,
       (device_size * redundancy + hdd_free) AS total_size,
       local.occupied_size / local.total_size AS percent
FROM "$EXA_STATS_DB_SIZE"
ORDER BY measure_time DESC
LIMIT 10

 

Xpert

Hi Mark,

You might want to check out EXA_VOLUME_USAGE for storage-volume-related information; perhaps you'll find something of use to you.
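The exact columns differ a bit between versions, so the simplest starting point is to look at everything it exposes and pick out what is useful for you:

-- column layout varies between Exasol versions, so start broad
SELECT *
FROM exa_volume_usage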

As for the TEMP that "clogged up" the numbers in ExaOperation: we also had a few occasions where some rogue MicroStrategy report started creating big TEMP segments, resulting in the same "1 GB free" shock moment - a restart will set that record straight.
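If you want to spot such a rogue session while it is still running, a query along these lines can help. This is just a sketch - I'm assuming the TEMP_DB_RAM column is present in EXA_DBA_SESSIONS in your version:

-- sketch: sessions currently holding the most temporary data
-- (TEMP_DB_RAM is assumed to exist in your version of EXA_DBA_SESSIONS)
SELECT session_id,
       user_name,
       status,
       temp_db_ram
FROM exa_dba_sessions
ORDER BY temp_db_ram DESC
LIMIT 10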

 

Cheers,
Malte