
Archive for the ‘JSON’ Category

How To : Load JSON Data into Multiple Tables

June 20, 2018

Lately, I’ve been revisiting old posts about miscellaneous XML techniques to see if they could be adapted to similar situations involving JSON data.

One of them is an approach to load a multi-level document into multiple tables :
How To : Load XML data into multiple tables

Here’s the same approach applied to JSON data.
 

Set up

For this test case, I’ll build a JSON document containing the famous HR hierarchy, starting from locations down to employees :

insert into json_documents (id, doc)
select 2
     , json_arrayagg(
         json_object(
           'id' value l.location_id
         , 'address' value l.street_address
         , 'city' value l.city
         , 'country' value l.country_id
         , 'departments' value (
             select json_arrayagg(
                      json_object(
                        'id' value d.department_id
                      , 'name' value d.department_name
                      , 'manager' value d.manager_id
                      , 'employees' value (
                          select json_arrayagg(
                                   json_object(
                                     'id' value e.employee_id
                                   , 'lastname' value e.last_name
                                   , 'hiredate' value e.hire_date
                                   , 'salary' value e.salary
                                   )
                                   order by e.last_name
                                 )
                          from hr.employees e
                          where e.department_id = d.department_id
                        )
                        absent on null
                      )
                      order by d.department_id
                    )
             from hr.departments d
             where d.location_id = l.location_id
           )
           absent on null         
         )
         returning clob
       ) 
       as result
from hr.locations l
;
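
This assumes a simple staging table holding the JSON documents, for instance (a hypothetical minimal definition) :

create table json_documents (
  id   number not null primary key
, doc  clob check (doc is json)
);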

Below is an excerpt of the generated content (manually formatted for clarity) :

[
  {"id":1000,"address":"1297 Via Cola di Rie","city":"Roma","country":"IT"},
  {"id":1100,"address":"93091 Calle della Testa","city":"Venice","country":"IT"},
  {"id":1200,"address":"2017 Shinjuku-ku","city":"Tokyo","country":"JP"},
  {"id":1300,"address":"9450 Kamiya-cho","city":"Hiroshima","country":"JP"},
  {
    "id":1400,
    "address":"2014 Jabberwocky Rd",
    "city":"Southlake",
    "country":"US",
    "departments":[
      {
        "id":60,
        "name":"IT",
        "manager":103,
        "employees":[
          {"id":105,"lastname":"Austin","hiredate":"1997-06-25T00:00:00","salary":4800},
          {"id":104,"lastname":"Ernst","hiredate":"1991-05-21T00:00:00","salary":6000},
          {"id":103,"lastname":"Hunold","hiredate":"1990-01-03T00:00:00","salary":9000},
          {"id":107,"lastname":"Lorentz","hiredate":"1999-02-07T00:00:00","salary":4200},
          {"id":106,"lastname":"Pataballa","hiredate":"1998-02-05T00:00:00","salary":4800}
        ]
      }
    ]
  },
  {
    "id":1500,
    "address":"2011 Interiors Blvd",
    "city":"South San Francisco",
    "country":"US",
    "departments":[
      ...
    ]
  },
  ...
]

The JSON document has three logical levels of entities organized in a master-detail structure : locations > departments > employees.

The goal is now to load each entity back into its own table :

create table my_locations as 
select location_id, street_address, city, country_id from hr.locations where 1 = 0;

create table my_departments as
select department_id, department_name, manager_id from hr.departments where 1 = 0;

create table my_employees as 
select employee_id, last_name, hire_date, salary from hr.employees where 1 = 0;

 

Example

Just like the technique referenced in the introduction, which chains XMLTABLE calls, we can follow the same logic here using JSON_TABLE and NESTED PATH clauses to flatten out all the levels :

select x.location_id
     , x.street_address
     , x.city
     , x.country_id
     , nvl(x.dept_rnum, 1) as dept_rnum
     , x.department_id
     , x.department_name
     , x.manager_id
     , nvl(x.emp_rnum, 1) as emp_rnum
     , x.employee_id
     , x.last_name
     , x.hire_date
     , x.salary
from json_documents t
   , json_table(t.doc,
       '$[*]'
       columns location_id     number(4)    path '$.id'
             , street_address  varchar2(40) path '$.address'
             , city            varchar2(30) path '$.city'
             , country_id      varchar2(2)  path '$.country'
             , nested path '$.departments[*]'
               columns (
                 dept_rnum        for ordinality
               , department_id    number(4)    path '$.id'
               , department_name  varchar2(30) path '$.name'
               , manager_id       number(6)    path '$.manager'
               , nested path '$.employees[*]'
                 columns (
                   emp_rnum     for ordinality
                 , employee_id  number(6)    path '$.id'
                 , last_name    varchar2(25) path '$.lastname'
                 , hire_date    date         path '$.hiredate'
                 , salary       number(8,2)  path '$.salary'
                 )
               )
     ) x
where t.id = 2
;
 
LOCATION_ID STREET_ADDRESS           CITY                  COUNTRY_ID  DEPT_RNUM DEPARTMENT_ID DEPARTMENT_NAME  MANAGER_ID   EMP_RNUM EMPLOYEE_ID LAST_NAME   HIRE_DATE    SALARY
----------- ------------------------ --------------------- ---------- ---------- ------------- ---------------- ---------- ---------- ----------- ----------- ----------- -------
       1000 1297 Via Cola di Rie     Roma                  IT                  1                                                    1                                     
       1100 93091 Calle della Testa  Venice                IT                  1                                                    1                                     
       1200 2017 Shinjuku-ku         Tokyo                 JP                  1                                                    1                                     
       1300 9450 Kamiya-cho          Hiroshima             JP                  1                                                    1                                     
       1400 2014 Jabberwocky Rd      Southlake             US                  1            60 IT                      103          1         105 Austin      25/06/1997     4800
       1400 2014 Jabberwocky Rd      Southlake             US                  1            60 IT                      103          2         104 Ernst       21/05/1991     6000
       1400 2014 Jabberwocky Rd      Southlake             US                  1            60 IT                      103          3         103 Hunold      03/01/1990     9000
       1400 2014 Jabberwocky Rd      Southlake             US                  1            60 IT                      103          4         107 Lorentz     07/02/1999     4200
       1400 2014 Jabberwocky Rd      Southlake             US                  1            60 IT                      103          5         106 Pataballa   05/02/1998     4800
       1500 2011 Interiors Blvd      South San Francisco   US                  1            50 Shipping                121          1         130 Atkinson    30/10/1997     2800
       1500 2011 Interiors Blvd      South San Francisco   US                  1            50 Shipping                121          2         192 Bell        04/02/1996     4000
       1500 2011 Interiors Blvd      South San Francisco   US                  1            50 Shipping                121          3         129 Bissot      20/08/1997     3300
 ...
 

Now we just have to insert this data set into MY_LOCATIONS, MY_DEPARTMENTS and MY_EMPLOYEES using a multitable INSERT statement.
In order to load each location only once, we’ll only take those rows having DEPT_RNUM = 1 and EMP_RNUM = 1.
Likewise, departments are loaded only from rows having EMP_RNUM = 1, excluding rows carrying no department data at all (i.e. keeping rows where DEPARTMENT_ID IS NOT NULL).
Finally, employee data is loaded from rows where EMPLOYEE_ID is not null :

insert all 
when dept_rnum = 1 and emp_rnum = 1 
     then into my_locations (location_id, street_address, city, country_id)
                     values (location_id, street_address, city, country_id)
when department_id is not null and emp_rnum = 1
     then into my_departments (department_id, department_name, manager_id)
                       values (department_id, department_name, manager_id)
when employee_id is not null
     then into my_employees (employee_id, last_name, hire_date, salary)
                     values (employee_id, last_name, hire_date, salary)
select x.location_id
     , x.street_address
     , x.city
     , x.country_id
     , nvl(x.dept_rnum, 1) as dept_rnum
     , x.department_id
     , x.department_name
     , x.manager_id
     , nvl(x.emp_rnum, 1) as emp_rnum
     , x.employee_id
     , x.last_name
     , x.hire_date
     , x.salary
from json_documents t
   , json_table(t.doc,
       '$[*]'
       columns location_id     number(4)    path '$.id'
             , street_address  varchar2(40) path '$.address'
             , city            varchar2(30) path '$.city'
             , country_id      varchar2(2)  path '$.country'
             , nested path '$.departments[*]'
               columns (
                 dept_rnum        for ordinality
               , department_id    number(4)    path '$.id'
               , department_name  varchar2(30) path '$.name'
               , manager_id       number(6)    path '$.manager'
               , nested path '$.employees[*]'
                 columns (
                   emp_rnum     for ordinality
                 , employee_id  number(6)    path '$.id'
                 , last_name    varchar2(25) path '$.lastname'
                 , hire_date    date         path '$.hiredate'
                 , salary       number(8,2)  path '$.salary'
                 )
               )         
     ) x
where t.id = 2
;

156 rows inserted.
 

SQL> select * from my_locations;
 
LOCATION_ID STREET_ADDRESS                           CITY                           COUNTRY_ID
----------- ---------------------------------------- ------------------------------ ----------
       1000 1297 Via Cola di Rie                     Roma                           IT
       1100 93091 Calle della Testa                  Venice                         IT
       1200 2017 Shinjuku-ku                         Tokyo                          JP
       1300 9450 Kamiya-cho                          Hiroshima                      JP
       1400 2014 Jabberwocky Rd                      Southlake                      US
       1500 2011 Interiors Blvd                      South San Francisco            US
...
       2800 Rua Frei Caneca 1360                     Sao Paulo                      BR
       2900 20 Rue des Corps-Saints                  Geneva                         CH
       3000 Murtenstrasse 921                        Bern                           CH
       3100 Pieter Breughelstraat 837                Utrecht                        NL
       3200 Mariano Escobedo 9991                    Mexico City                    MX
 
23 rows selected.
 
SQL> select * from my_departments;
 
DEPARTMENT_ID DEPARTMENT_NAME                MANAGER_ID
------------- ------------------------------ ----------
           60 IT                                    103
           50 Shipping                              121
           10 Administration                        200
           30 Purchasing                            114
           90 Executive                             100
          100 Finance                               108
          110 Accounting                            205
          120 Treasury                       
...                  
          270 Payroll                        
           20 Marketing                             201
           40 Human Resources                       203
           80 Sales                                 145
           70 Public Relations                      204
 
27 rows selected.
 
SQL> select * from my_employees;
 
EMPLOYEE_ID LAST_NAME                 HIRE_DATE       SALARY
----------- ------------------------- ----------- ----------
        105 Austin                    25/06/1997     4800.00
        104 Ernst                     21/05/1991     6000.00
        103 Hunold                    03/01/1990     9000.00
        107 Lorentz                   07/02/1999     4200.00
        106 Pataballa                 05/02/1998     4800.00
        130 Atkinson                  30/10/1997     2800.00
        192 Bell                      04/02/1996     4000.00
...
        159 Smith                     10/03/1997     8000.00
        171 Smith                     23/02/1999     7400.00
        157 Sully                     04/03/1996     9500.00
        176 Taylor                    24/03/1998     8600.00
        150 Tucker                    30/01/1997    10000.00
        155 Tuvault                   23/11/1999     7000.00
        162 Vishney                   11/11/1997    10500.00
        149 Zlotkey                   29/01/2000    10500.00
        204 Baer                      07/06/1994    10000.00
 
106 rows selected.
 

23 + 27 + 106 = 156
All rows accounted for :)
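
As a quick sanity check, the three counts can be added up directly in SQL (a minimal sketch) :

select (select count(*) from my_locations)
     + (select count(*) from my_departments)
     + (select count(*) from my_employees) as total_rows
from dual;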

 


Hierarchical JSON Aggregation : the JSONNest function

June 17, 2018

This post echoes one of my previous posts on the topic, this time applied to JSON data.

The idea is to build a user-defined aggregate function to automatically nest a hierarchically-ordered set of JSON objects, and return a single JSON document containing the whole hierarchy.

For example, from this adjacency list :

        ID  PARENT_ID NAME
---------- ---------- ------
         1            Item_1
         2          1 Item_2
         3          1 Item_3
         4          3 Item_4
         5          4 Item_5

to :

{
  "id": 1,
  "name": "Item_1",
  "children": [
    {
      "id": 2,
      "name": "Item_2"
    },
    {
      "id": 3,
      "name": "Item_3",
      "children": [
        {
          "id": 4,
          "name": "Item_4",
          "children": [
            {
              "id": 5,
              "name": "Item_5"
            }
          ]
        }
      ]
    }
  ]
}
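
Assuming the adjacency list above is stored in a table named ITEMS (a hypothetical name, with columns ID, PARENT_ID and NAME), the call producing that output would look like this sketch :

select JSONNest(
         JSONNestItem(
           level
         , json_object('id' value id, 'name' value name)
         , 'children'
         )
       ) as result
from items
connect by prior id = parent_id
start with parent_id is null
order siblings by id
;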

 

The implementation I’ll present here requires the following :

  • An object type to hold the definition of each node (JSONNestItem)
  • A PL/SQL package for the core routines and context maintenance (JSONNestUtil)
  • An object type for the ODCI routines (JSONNestImpl)
  • The aggregate function (JSONNest)

Source code available on GitHub :

/mbleron/oracle/JSONNest

Installation :

@JSONNestItem.tps
@JSONNestUtil.pks
@JSONNestUtil.pkb
@JSONNestImpl.tps
@JSONNestImpl.tpb
@JSONNest.fnc

 

Implementation

JSONNest requires at least Oracle 12.2.
It uses the PL/SQL JSON object types introduced in that release, which allow fine-grained manipulation of JSON data.
The implementation is a little more complicated than expected because of a bug (which I reported) in 12.2, related to accessing JSON objects by reference :
ORA-7445 Or Corrupted Result When Nesting PL/SQL JSON Objects (Doc ID 2361514.1)

The bug is announced as fixed in 18c (18.1), but I haven’t had the opportunity to check yet.
The workaround requires maintaining a stack of JSON objects. When I get to work on an 18c instance, I’ll rewrite the code to use object references instead.

The function :

function JSONNest (input in JSONNestItem) 
return clob
aggregate using JSONNestImpl;

where JSONNestItem is an object type describing a node of the hierarchy :

create or replace type JSONNestItem as object (
  item_level    integer
, json_content  varchar2(32767)
, wrapper       varchar2(4000)
);

  • item_level : node level in the hierarchy, typically derived from the LEVEL pseudocolumn in a CONNECT BY query, or computed in a recursive CTE.
  • json_content : actual JSON content of this node. Must be a JSON object.
  • wrapper : name of the field containing the array of child nodes.

 

Example

I’ll use the usual HR.EMPLOYEES table to generate the hierarchy of employees :

select JSONNest( 
         JSONNestItem(
           level
         , json_object(
             'id' value employee_id
            , 'name' value first_name || ' ' || last_name
            )
         , 'employees'
         )
      ) as result
from hr.employees
connect by prior employee_id = manager_id
start with manager_id is null
order siblings by last_name
;

Output (formatted) :

{
  "id": 100,
  "name": "Steven King",
  "employees": [
    {
      "id": 101,
      "name": "Neena Kochhar",
      "employees": [
        {
          "id": 108,
          "name": "Nancy Greenberg",
          "employees": [
            {
              "id": 109,
              "name": "Daniel Faviet"
            },
            {
              "id": 110,
              "name": "John Chen"
            },
            {
              "id": 111,
              "name": "Ismael Sciarra"
            },
            {
              "id": 112,
              "name": "Jose Manuel Urman"
            },
            {
              "id": 113,
              "name": "Luis Popp"
            }
          ]
        },
        {
          "id": 200,
          "name": "Jennifer Whalen"
        },
        {
          "id": 203,
          "name": "Susan Mavris"
        },
        {
          "id": 204,
          "name": "Hermann Baer"
        },
        {
          "id": 205,
          "name": "Shelley Higgins",
          "employees": [
            {
              "id": 206,
              "name": "William Gietz"
            }
          ]
        }
      ]
    },
    {
      "id": 102,
      "name": "Lex De Haan",
      "employees": [
        {
          "id": 103,
          "name": "Alexander Hunold",
          "employees": [
            {
              "id": 104,
              "name": "Bruce Ernst"
            },
            {
              "id": 105,
              "name": "David Austin"
            },
            {
              "id": 106,
              "name": "Valli Pataballa"
            },
            {
              "id": 107,
              "name": "Diana Lorentz"
            }
          ]
        }
      ]
    },
    {
      "id": 114,
      "name": "Den Raphaely",
      "employees": [
        {
          "id": 115,
          "name": "Alexander Khoo"
        },
        {
          "id": 116,
          "name": "Shelli Baida"
        },
        {
          "id": 117,
          "name": "Sigal Tobias"
        },
        {
          "id": 118,
          "name": "Guy Himuro"
        },
        {
          "id": 119,
          "name": "Karen Colmenares"
        }
      ]
    },
    {
      "id": 120,
      "name": "Matthew Weiss",
      "employees": [
        {
          "id": 125,
          "name": "Julia Nayer"
        },
        {
          "id": 126,
          "name": "Irene Mikkilineni"
        },
        {
          "id": 127,
          "name": "James Landry"
        },
        {
          "id": 128,
          "name": "Steven Markle"
        },
        {
          "id": 180,
          "name": "Winston Taylor"
        },
        {
          "id": 181,
          "name": "Jean Fleaur"
        },
        {
          "id": 182,
          "name": "Martha Sullivan"
        },
        {
          "id": 183,
          "name": "Girard Geoni"
        }
      ]
    },
    {
      "id": 121,
      "name": "Adam Fripp",
      "employees": [
        {
          "id": 129,
          "name": "Laura Bissot"
        },
        {
          "id": 130,
          "name": "Mozhe Atkinson"
        },
        {
          "id": 131,
          "name": "James Marlow"
        },
        {
          "id": 132,
          "name": "TJ Olson"
        },
        {
          "id": 184,
          "name": "Nandita Sarchand"
        },
        {
          "id": 185,
          "name": "Alexis Bull"
        },
        {
          "id": 186,
          "name": "Julia Dellinger"
        },
        {
          "id": 187,
          "name": "Anthony Cabrio"
        }
      ]
    },
    {
      "id": 122,
      "name": "Payam Kaufling",
      "employees": [
        {
          "id": 133,
          "name": "Jason Mallin"
        },
        {
          "id": 134,
          "name": "Michael Rogers"
        },
        {
          "id": 135,
          "name": "Ki Gee"
        },
        {
          "id": 136,
          "name": "Hazel Philtanker"
        },
        {
          "id": 188,
          "name": "Kelly Chung"
        },
        {
          "id": 189,
          "name": "Jennifer Dilly"
        },
        {
          "id": 190,
          "name": "Timothy Gates"
        },
        {
          "id": 191,
          "name": "Randall Perkins"
        }
      ]
    },
    {
      "id": 123,
      "name": "Shanta Vollman",
      "employees": [
        {
          "id": 137,
          "name": "Renske Ladwig"
        },
        {
          "id": 138,
          "name": "Stephen Stiles"
        },
        {
          "id": 139,
          "name": "John Seo"
        },
        {
          "id": 140,
          "name": "Joshua Patel"
        },
        {
          "id": 192,
          "name": "Sarah Bell"
        },
        {
          "id": 193,
          "name": "Britney Everett"
        },
        {
          "id": 194,
          "name": "Samuel McCain"
        },
        {
          "id": 195,
          "name": "Vance Jones"
        }
      ]
    },
    {
      "id": 124,
      "name": "Kevin Mourgos",
      "employees": [
        {
          "id": 141,
          "name": "Trenna Rajs"
        },
        {
          "id": 142,
          "name": "Curtis Davies"
        },
        {
          "id": 143,
          "name": "Randall Matos"
        },
        {
          "id": 144,
          "name": "Peter Vargas"
        },
        {
          "id": 196,
          "name": "Alana Walsh"
        },
        {
          "id": 197,
          "name": "Kevin Feeney"
        },
        {
          "id": 198,
          "name": "Donald OConnell"
        },
        {
          "id": 199,
          "name": "Douglas Grant"
        }
      ]
    },
    {
      "id": 145,
      "name": "John Russell",
      "employees": [
        {
          "id": 150,
          "name": "Peter Tucker"
        },
        {
          "id": 151,
          "name": "David Bernstein"
        },
        {
          "id": 152,
          "name": "Peter Hall"
        },
        {
          "id": 153,
          "name": "Christopher Olsen"
        },
        {
          "id": 154,
          "name": "Nanette Cambrault"
        },
        {
          "id": 155,
          "name": "Oliver Tuvault"
        }
      ]
    },
    {
      "id": 146,
      "name": "Karen Partners",
      "employees": [
        {
          "id": 156,
          "name": "Janette King"
        },
        {
          "id": 157,
          "name": "Patrick Sully"
        },
        {
          "id": 158,
          "name": "Allan McEwen"
        },
        {
          "id": 159,
          "name": "Lindsey Smith"
        },
        {
          "id": 160,
          "name": "Louise Doran"
        },
        {
          "id": 161,
          "name": "Sarath Sewall"
        }
      ]
    },
    {
      "id": 147,
      "name": "Alberto Errazuriz",
      "employees": [
        {
          "id": 162,
          "name": "Clara Vishney"
        },
        {
          "id": 163,
          "name": "Danielle Greene"
        },
        {
          "id": 164,
          "name": "Mattea Marvins"
        },
        {
          "id": 165,
          "name": "David Lee"
        },
        {
          "id": 166,
          "name": "Sundar Ande"
        },
        {
          "id": 167,
          "name": "Amit Banda"
        }
      ]
    },
    {
      "id": 148,
      "name": "Gerald Cambrault",
      "employees": [
        {
          "id": 168,
          "name": "Lisa Ozer"
        },
        {
          "id": 169,
          "name": "Harrison Bloom"
        },
        {
          "id": 170,
          "name": "Tayler Fox"
        },
        {
          "id": 171,
          "name": "William Smith"
        },
        {
          "id": 172,
          "name": "Elizabeth Bates"
        },
        {
          "id": 173,
          "name": "Sundita Kumar"
        }
      ]
    },
    {
      "id": 149,
      "name": "Eleni Zlotkey",
      "employees": [
        {
          "id": 174,
          "name": "Ellen Abel"
        },
        {
          "id": 175,
          "name": "Alyssa Hutton"
        },
        {
          "id": 176,
          "name": "Jonathon Taylor"
        },
        {
          "id": 177,
          "name": "Jack Livingston"
        },
        {
          "id": 178,
          "name": "Kimberely Grant"
        },
        {
          "id": 179,
          "name": "Charles Johnson"
        }
      ]
    },
    {
      "id": 201,
      "name": "Michael Hartstein",
      "employees": [
        {
          "id": 202,
          "name": "Pat Fay"
        }
      ]
    }
  ]
}

I’ve also published a function taking a weakly-typed ref cursor and returning the same JSON output.
The cursor must be based on a query returning exactly three columns matching the attributes of the JSONNestItem object type :

DECLARE

  rc      sys_refcursor;
  output  clob;
  
BEGIN
 
  open rc for
  select level
       , json_object(
           'id' value employee_id
         , 'name' value first_name || ' ' || last_name
         )
       , 'employees'
  from hr.employees
  connect by prior employee_id = manager_id
  start with manager_id is null
  order siblings by last_name;
  
  output := JSONNestUtil.getDocument(rc);
   
END;
/

 

How To : Nest Multiple JSON Path Expression Filters

July 9, 2017

A short post about JSON Path Expressions.
I will focus on filter expressions (now officially documented as of Oracle 12.2) that we can use in conjunction with the JSON_EXISTS function, and more specifically on how to nest multiple relative filter steps.

Here’s an example of what I mean :

{
  "_id":"53e3c6ed-9bfc-2730-e053-0100007f6afb",
  "content":{
    "name":"obj1", 
    "type":1, 
    "values":[
      {"name":"x", "v":1},
      {"name":"y", "v":2},
      {"name":"z", "v":1.5}
    ]
  }
}

and say we want to find documents (like the one above) where content.type = 1 and content.values contains an x item having v > 0.
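
One way to express this with JSON_EXISTS and nested filter steps, assuming the documents sit in a json_documents table (a sketch only, not necessarily the exact solution developed in the full post) :

select t.*
from json_documents t
where json_exists(t.doc,
        '$?( @.content.type == 1
             && exists(@.content.values[*]?(@.name == "x" && @.v > 0)) )'
      );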

 

Read more…

JSON Flattening Part 1 – The JSONFlatten function

April 30, 2017

This article presents an approach to flatten out a JSON structure into an “Edge” table.
That’s something I’ve explored several times in the past for XML data, see for example : Yet Another XML Flattening Technique.

I’ll apply the same technique used for the XMLFlattenDOM function to JSON data, leveraging the new JSON PL/SQL types introduced in the 12.2 release : JSON_ELEMENT_T, JSON_OBJECT_T, JSON_ARRAY_T.
As those object types implement a DOM-like API (based on the internal JDOM_T object), it is quite easy to adapt my previous approach to JSON.

Given the following (very) simple JSON object :

{
  "key":"ABC",
  "arr":[1,2,3,null]
}

This is the output we’re looking for :

 ID TYPE    PARENT_ID NAME     POS      VALUE
--- ------ ---------- ------- ---- -----------
  1 object          0
  2 string          1 key               ABC
  3 array           1 arr
  4 number          3            1      1
  5 number          3            2      2
  6 number          3            3      3
  7 null            3            4
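
As a quick illustration of the DOM-like API mentioned above, applied to this same document (a minimal sketch, independent of the JSONFlatten function itself) :

declare
  elem  json_element_t;
  obj   json_object_t;
  arr   json_array_t;
begin
  -- parse the document, then navigate it through the object-type API
  elem := json_element_t.parse('{"key":"ABC","arr":[1,2,3,null]}');
  obj  := treat(elem as json_object_t);
  arr  := treat(obj.get('arr') as json_array_t);
  dbms_output.put_line('key = ' || obj.get_string('key'));
  dbms_output.put_line('array size = ' || arr.get_size());
end;
/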

 

Read more…

Simple CSV parsing using XMLTABLE or JSON_TABLE

April 22, 2016

Oracle does not currently provide any built-in functionality to parse a flat file stored in the database as a CLOB (or BLOB), the way the External Table facility does for external files.
Mike Kutz submitted the idea a couple of years ago : parse CLOBS/BLOBS using “external table” capability.

Parsing simple CSV content, with no quoted fields, is however possible with minimal effort using the XMLTABLE or JSON_TABLE functions.
Both approaches are based on a little preprocessing to convert the data to the required format and pass it to the corresponding function.
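
To give a flavour of the JSON_TABLE variant, here is a simplified sketch (not the exact code from the full post) that turns each unquoted CSV line into a JSON array of strings before projecting it into columns :

with csv (line) as (
  select '101,Smith,2500' from dual union all
  select '102,Jones,3100' from dual
)
select x.*
from csv c
   , json_table(
       '["' || replace(c.line, ',', '","') || '"]'
     , '$'
       columns empno varchar2(10) path '$[0]'
             , ename varchar2(30) path '$[1]'
             , sal   varchar2(10) path '$[2]'
     ) x
;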

The last part of this post will focus on the limitations.

Read more…

Categories: JSON, SQL, XML DB

New JSON Path Expression Predicates

September 20, 2015

With the release of ORDS 3.0 in May this year, Oracle also included a new set of APIs called SODA, dedicated to schemaless application development based on JSON. This enables us to use the Oracle database as a NoSQL document store.

The purpose of this post is to focus on the filter specification – aka query-by-example (QBE) – used to select JSON documents out of a collection with SODA for REST (and SODA for Java).

As explained in the documentation, a QBE may use a set of relational and boolean operators to define complex filters. Those operators include : $in, $exists, $regex, $startsWith, $and, $or, etc.

Now comes the interesting part…

Each collection is actually backed by a database table storing the JSON documents. When a query on a collection is issued via the SODA API and specifies a QBE, Oracle translates it into a SQL query with a JSON_EXISTS predicate.
In order to handle the different kinds of operators used in a QBE, the JSON path expression syntax has been extended to support predicates.
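
For instance, a QBE like {"salary":{"$gt":10000}} would conceptually end up as something along these lines (a purely illustrative sketch, with made-up collection and column names) :

select count(*)
from my_collection t
where json_exists(t.json_document, '$?( @.salary > 10000 )');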

Although it is part of the ORDS 3.0 suite, we don’t actually need to install ORDS in the database in order to use the new syntax; the prerequisite patch is sufficient :

https://support.oracle.com/rs?type=patch&id=20885778

I’ll present below an overview of this extended syntax, with some examples.
(Note that at the time I’m writing this, predicate support is still not officially documented.)

 

Read more…

Categories: JSON, SQL

JSON Update for the Impatient

July 30, 2015

While waiting for a possible future Oracle implementation in the database, this post presents a JSON Update API for SQL and PL/SQL.

The API comprises two parts :

  • A JSON model, using the org.json reference implementation written in Java.
    The package is used as is, except for the additional org.json.JSONArray.insert() method I’ve added to handle insert operations in arrays.
  • A JSON Path Expression parser (strict syntax), following the specification set out here : http://docs.oracle.com/database/121/ADXDB/json.htm#ADXDB6255

All the implemented update methods are made available in SQL and PL/SQL via a dedicated package that interfaces with the Java methods.

The PL/SQL package, JAR and Java source code are available here : json_util.zip

Read more…

Categories: JSON