ETL+ Load

The ETL+ Load brings data from a source table into its target data warehouse table.

A data load is triggered by the Load Now button on the https://dataself.atlassian.net/wiki/spaces/ETL202208/pages/2032239184 or when a step on the Job Steps panel of https://dataself.atlassian.net/wiki/spaces/ETL202208/pages/2032242007 is executed.

Steps in a Data Load

ETL+ loads data into a target table (the destination of the load process) in a two-step process.

  1. Temp Table: ETL+ loads the source data into a temporary data warehouse table.
    The temporary table allows users to continue querying the target table during the load (especially important when loading large tables). The target table is a table in the destination data warehouse.

    • The temp table content is defined by the source table and the ETL SQL Statement. The temp becomes a mirror of the source table when the ETL SQL Statement has no transformations.

    • If TableName_Temp already exists, its content is overwritten.

  2. Target Table: The ETL process for step two is determined by the Load Type specified on the https://dataself.atlassian.net/wiki/spaces/ETL202208/pages/2032239184. Once the temp table load step is completed, ETL+ processes the temp table according to the Load Type assigned to the ETL source-to-destination mapping, as sketched below.
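
As a rough illustration, the two steps above could be sketched as follows. The cursor object, function name, and SQL statements are assumptions for the example only, not ETL+'s actual implementation.

```python
# A minimal sketch of the two-step load, assuming a generic DB-API cursor
# (cur). The names here are illustrative, not ETL+ internals.

def load_table(cur, etl_sql: str, target: str) -> None:
    temp = f"{target}_Temp"

    # Step 1: rebuild the temp table from the source / ETL SQL Statement.
    # If TableName_Temp already exists, its content is overwritten; the
    # target table stays queryable while this runs.
    cur.execute(f"DROP TABLE IF EXISTS {temp}")
    cur.execute(f"CREATE TABLE {temp} AS {etl_sql}")

    # Step 2: hand the temp table off to the Load Type logic
    # (Load All, Replace, Upsert, Append) -- sketched in the next section.
```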

Load Types

  • Load All:

    • Without Design page transformations: the target table is deleted, and the temp table is renamed to the target.

    • With Design page transformations: the target table is deleted, and the temp table rows are inserted into the new target table while the transformations are executed.

  • Replace: Deletes the rows meeting the Replace criteria from the target table, inserts the temp table’s rows into the target table, and deletes the temp table. The insert includes Design page transformations.

  • Upsert: Deletes the rows meeting the Upsert criteria from the target table, inserts the temp table’s rows into the target table, and deletes the temp table. The insert includes Design page transformations.

  • Append: Inserts the temp table’s rows into the target table without deleting any existing target rows, then deletes the temp table. The insert includes Design page transformations.
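
The following sketch illustrates, under the same assumptions as above, how each Load Type could process the temp table into the target. The Replace/Upsert criteria string and the SQL statements are placeholders, and Design page transformations are omitted for brevity.

```python
# A hedged sketch of the four Load Types; not ETL+'s actual statements.

def apply_load_type(cur, temp: str, target: str,
                    load_type: str, criteria: str = "") -> None:
    if load_type == "Load All":
        # Without Design page transformations, the temp table simply
        # becomes the new target table.
        cur.execute(f"DROP TABLE IF EXISTS {target}")
        cur.execute(f"ALTER TABLE {temp} RENAME TO {target}")
        return

    if load_type in ("Replace", "Upsert"):
        # Remove the target rows matching the Replace/Upsert criteria,
        # e.g. criteria = "fiscal_period >= '2024-01'" (placeholder).
        cur.execute(f"DELETE FROM {target} WHERE {criteria}")

    # Replace, Upsert, and Append all finish by inserting the temp rows
    # into the target table and dropping the temp table.
    cur.execute(f"INSERT INTO {target} SELECT * FROM {temp}")
    cur.execute(f"DROP TABLE {temp}")
```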

If a load failure occurs, the temp table might remain in the data warehouse until the next load.

Table Load Conflicts

ETL+ does not allow two or more sessions to load data into the same target table at the same time.

At the start of a table load, ETL+ records in the metadata a flag identifying its load session for that particular table, along with a date-time stamp marking the beginning of the load process. Every 15 seconds, ETL+ updates the metadata date-time stamp to indicate that the load is still in progress. ETL+ clears both settings at the end of the table load.

If an ETL+ session tries to load a target table while another ETL+ session is already loading data into it, the former will show and/or log “Another ETL+ is loading this table. Waiting for table release…”. The waiting session then pauses for 15 seconds and tries to load the table again, repeating this cycle until the table is released by the other session.

If the ETL+ session loading a table crashes or stops working, the metadata date-time stamp is no longer updated every 15 seconds. An ETL+ session waiting for the table release will take over that table load if the table’s metadata date-time stamp is 90 seconds old or older.
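
The waiting-and-takeover behavior can be sketched roughly as follows. The etl_load_locks metadata table, its columns, and the helper functions are assumptions for illustration only, and the heartbeat refresh performed by the loading session itself is omitted.

```python
# Illustrative sketch of the lock-and-heartbeat check, assuming a metadata
# table etl_load_locks(table_name, session_id, heartbeat). Not ETL+'s
# actual metadata schema.

import time
from datetime import datetime, timedelta

HEARTBEAT_EVERY = 15  # seconds between retries / heartbeat updates
STALE_AFTER = 90      # a heartbeat this old is considered abandoned

def try_acquire(cur, table: str, session_id: str) -> bool:
    cur.execute("SELECT heartbeat FROM etl_load_locks WHERE table_name = ?",
                (table,))
    row = cur.fetchone()
    if row is not None:
        if datetime.utcnow() - row[0] < timedelta(seconds=STALE_AFTER):
            return False  # another session is actively loading this table
        # Heartbeat is 90+ seconds old: assume that session died, take over.
        cur.execute("DELETE FROM etl_load_locks WHERE table_name = ?", (table,))
    cur.execute("INSERT INTO etl_load_locks (table_name, session_id, heartbeat) "
                "VALUES (?, ?, ?)", (table, session_id, datetime.utcnow()))
    return True

def wait_for_table(cur, table: str, session_id: str) -> None:
    # Retry every 15 seconds until the table is released (or abandoned).
    while not try_acquire(cur, table, session_id):
        print("Another ETL+ is loading this table. Waiting for table release…")
        time.sleep(HEARTBEAT_EVERY)
```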

Acumatica OData Load

In addition to the processes above, when ETL+ has extracted 100 pages from an OData table (for instance, each page with 1,000 rows), it moves/inserts the TableName_Temp rows into TableName_Temp2 and then continues extracting and loading the remaining source table rows into TableName_Temp.

This process repeats until all of the source table rows have been extracted. ETL+ then moves/inserts the TableName_Temp2 rows into the target table according to the Load Type described above.
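
A simplified sketch of this batching follows, assuming a hypothetical fetch_page helper that returns one page of rows (and an empty list once the source is exhausted); all names are illustrative, not ETL+ internals.

```python
# Simplified sketch of the Temp/Temp2 batching for OData extraction.

PAGES_PER_BATCH = 100  # e.g. 100 pages of 1,000 rows each

def extract_odata(cur, fetch_page, target: str) -> None:
    temp, temp2 = f"{target}_Temp", f"{target}_Temp2"
    page = 0
    while True:
        rows = fetch_page(page)
        if not rows:
            break
        placeholders = ", ".join("?" * len(rows[0]))
        cur.executemany(f"INSERT INTO {temp} VALUES ({placeholders})", rows)
        page += 1
        if page % PAGES_PER_BATCH == 0:
            # After every 100 pages, move the accumulated temp rows into
            # Temp2 and keep extracting into the (now empty) temp table.
            cur.execute(f"INSERT INTO {temp2} SELECT * FROM {temp}")
            cur.execute(f"DELETE FROM {temp}")
    # Move any remaining temp rows; Temp2 is then applied to the target
    # table according to the Load Type described above.
    cur.execute(f"INSERT INTO {temp2} SELECT * FROM {temp}")
```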

Click here to learn more about ETL+ for Acumatica OData v3 and v4.

What is a Target Table?

A target table, also known as a destination table, is the table that receives the data in an ETL load process.
The destination of the data loaded is always a table in the data warehouse.

Archiving Historical / Old Data

WIP documentation

What Happens to Rows Deleted from the Data Source for Tables without Load All?

WIP documentation

 

v2022.08