
ExcelTable 1.2 : introducing streaming support for large files

November 15, 2016

ExcelTable is my attempt at building a SQL query interface to read MS Excel files in xlsx (or xlsm) format.
Version 1.2 is now available with the following new features:

  • Streaming read support for large Excel files
  • setFetchSize() procedure to limit the number of rows processed per fetch call

 
Source code on GitHub: /mbleron/oracle/ExcelTable

 

Streaming read method

The getRows() function has been extended with an additional optional argument p_method.
Allowed values are as follows:

-- Read methods  
DOM_READ     constant binary_integer := 0;
STREAM_READ  constant binary_integer := 1;

The default is 0 (DOM_READ).

The streaming method requires Java (the StAX API) and is much more scalable than the DOM_READ method when accessing large files.
Please see the README for details on what to install depending on the target database version.

Example on a 500,000-row file (bigfile.xlsx):

select *
from table(
       ExcelTable.getRows(
         ExcelTable.getFile('TMP_DIR','bigfile.xlsx')
       , 'data'        -- worksheet name
       , q'{
            "ID"           number
          , "FIRST_NAME"   varchar2(15)
          , "LAST_NAME"    varchar2(20)
          , "EMAIL"        varchar2(30)
          , "GENDER"       varchar2(10)
          , "IP_ADDRESS"   varchar2(16)
          , "COMMENT"      varchar2(4000)
          , "IMAGE"        varchar2(4000)
          , "DT"           date  format 'DD/MM/YYYY'
         }'
       , 'A2'           -- data starts at cell A2
       , 1              -- p_method = STREAM_READ
       )
     ) t ;

 

setFetchSize() procedure

The setFetchSize() procedure has been introduced to limit the number of rows processed and returned per fetch call. The default value is 100.
Tweaking this value can be useful when we have no control over the fetch size of the calling statement.
A good example is the CREATE TABLE AS SELECT statement, which fetches from the SELECT part in batches of 32,767 rows. Building and transferring such a large dataset may be quite memory-intensive and defeat the very purpose of a pipelined function.
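
For instance, here is a minimal sketch of that CTAS scenario. It reuses the sample file from the example above, with a shortened column list for brevity; the target table name is made up.

begin
  -- cap the number of rows buffered and returned per fetch call
  ExcelTable.setFetchSize(1000);
end;
/

create table big_data as
select t.*
from table(
       ExcelTable.getRows(
         ExcelTable.getFile('TMP_DIR','bigfile.xlsx')
       , 'data'
       , q'{ "ID" number, "FIRST_NAME" varchar2(15), "LAST_NAME" varchar2(20) }'
       , 'A2'
       , 1   -- STREAM_READ (a package constant cannot be referenced directly in SQL)
       )
     ) t ;

This way each fetch from the pipelined function stays small, even though the CTAS itself requests rows in large batches.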

 

Comments

  1. Cyryl (November 17, 2016 at 11:39)

     As always a great source of inspiration!
