This article is a Frequently Asked Questions (FAQ) guide for managing data pipelines from Oracle Fusion Cloud to a destination database using BI Connector. It covers common challenges encountered during pipeline execution and provides step-by-step solutions for smooth and efficient pipeline operations.
FAQ 1: What should I do if I don’t have a pipeline template? How can I get it?
Please contact us or email support@biconnector.com to get our ready-to-deploy pipeline template.
FAQ 2: How can I create a Lookup Table for a Data Model Pipeline?
To create a Lookup Table for a Data Model Pipeline, follow the steps below:
Step 1: Create a New Lookup Table in BI Connector
Navigate to the Data Pipelines section in BI Connector.
Click on + New Lookup Table.
Enter the Lookup Table Name.
Select the Fusion Data Source (Data Model) created in BI Connector.
Enter the Destination Schema Name.
Select the Destination Warehouse.
Click Next.
Step 2: Select Base Tables and BIC Tables
In the Base Tables and BIC Tables section:
You can:
Manually select required tables from the searchable dropdown
OR bulk paste table names
For Base Tables:
BI Connector will automatically:
Fetch the Primary Key
Retrieve all Column Names
Select LAST_UPDATE_DATE as the watermark column (if available)
Assign the correct Watermark Format
For BIC Tables (Custom SQL Queries):
You must manually provide:
Primary Key
Watermark Column
Watermark Format (if required)
Click Save after completing the configuration.
Step 3: Download and Execute Table Scripts
In the Actions column, choose the required option:
Download CREATE TABLE statement
Download INSERT statement
Execute the downloaded queries in your database to:
Create the lookup table
Populate it with required data
Step 4: Configure Lookup Table in Pipeline
In your Pipeline, set the following parameters:
schema_name → Enter the schema where the lookup table is created
table_name → Enter the lookup table name
Click Publish to apply the changes.
FAQ 3: How can I create a Lookup Table for an Analysis Report Pipeline?
To create a Lookup Table for an Analysis Report (OTBI Report) Pipeline, follow the steps below:
Step 1: Create a New Lookup Table in BI Connector
Navigate to the Data Pipelines section in BI Connector.
Click on + New Lookup Table.
Enter the Lookup Table Name.
Select the Data Source (Analysis Report) created in BI Connector.
Enter the Destination Schema Name.
Select the Destination Warehouse.
Click Next.
Step 2: Select Analysis Reports
In the Analysis Report section:
You can:
Manually select required reports from the searchable dropdown
OR bulk paste report names
For Analysis Reports, you must manually provide:
Primary Key
Watermark Column
IS_WATERMARK_COL_TIMESTAMP:
Set to 'Y' if the watermark column is a timestamp
Set to 'N' if it is not a timestamp
Click Save after completing the configuration.
Step 3: Download and Execute Table Scripts
In the Actions column, choose the required option:
Download CREATE TABLE statement
Download INSERT statement
Execute the downloaded queries in your database to:
Create the lookup table
Populate it with required data
Step 4: Configure Lookup Table in Pipeline
In your Pipeline, set the following parameters:
schema_name → Enter the schema where the lookup table is created
table_name → Enter the lookup table name
Click Publish to apply the changes.
FAQ 4: Can we use an OTBI (Analysis Report) lookup table for a Data Model pipeline template?
No, OTBI (Analysis Report) lookup tables cannot be used with Data Model pipeline templates.
Correct Approach
Use the Data Model lookup tables with Data Model pipeline template.
Use the OTBI lookup tables with OTBI (Analysis Report) pipeline template.
FAQ 5: Do we need to assign a Primary Key for all tables? When is it required?
The Primary Key requirement depends on the pipeline type and use case.
For Data Model Pipeline Template
Fusion Raw Tables (Base Tables):
Primary Key is mandatory
BI Connector automatically fetches and assigns it during lookup table generation
Existing Data Models / Custom SQL Queries:
Primary Key must be added manually, where available
It is required for incremental load; without it, only full load is supported
For OTBI (Analysis Report) Pipelines
Primary Key is required only if you want to perform incremental load
Primary Key must be added manually
Key Takeaway
Fusion Raw Tables (Base Tables) → Primary Key fetched automatically; always required
Data Model / Custom SQL → Primary Key added manually; required only for incremental load
OTBI Reports → Primary Key added manually; required only for incremental load
FAQ 6: How to trigger a one-time FULL_LOAD for all or specific pipeline tables?
To perform a full load for pipeline tables, you need to update the REFRESH_TYPE in the lookup table.
Option 1: Set FULL_LOAD for all tables
Run the following SQL query to apply full load for all rows in the lookup table:
UPDATE <your_lookup_table_name>
SET REFRESH_TYPE = 'FULL_LOAD';
Option 2: Set FULL_LOAD for specific tables
Run the following SQL query to apply full load only for selected tables:
UPDATE <your_lookup_table_name>
SET REFRESH_TYPE = 'FULL_LOAD'
WHERE BI_CONNECTOR_ID IN (<id1>, <id2>, <id3>);
FAQ 7: How to always perform FULL_LOAD for a specific table?
To ensure a table always runs in FULL_LOAD mode during every pipeline execution, configure it in the lookup table.
Important Configuration
Set REFRESH_TYPE = 'FULL_LOAD'
Do not configure any watermark details, such as:
Watermark Column
Watermark Format
High Watermark
If these values are already configured, set them to NULL
Key Behavior
The pipeline will perform a full load on every run for this table
Incremental (watermark-based) logic will be completely ignored
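As a sketch, the configuration above can be applied with a single SQL update. The lookup table name and BI_CONNECTOR_ID value are placeholders, and the column names WATERMARK_COLUMN, WATERMARK_FORMAT, and HIGH_WATERMARK are assumptions; verify them against the columns of your generated lookup table before running it:

```sql
-- Sketch: force a permanent FULL_LOAD for one table and clear its watermark details.
-- Assumed/placeholder names: <your_lookup_table_name>, WATERMARK_COLUMN,
-- WATERMARK_FORMAT, HIGH_WATERMARK; check them against your generated lookup table.
UPDATE <your_lookup_table_name>
SET REFRESH_TYPE     = 'FULL_LOAD',
    WATERMARK_COLUMN = NULL,
    WATERMARK_FORMAT = NULL,
    HIGH_WATERMARK   = NULL
WHERE BI_CONNECTOR_ID = <table_bi_connector_id>;
```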
FAQ 8: What happens if REFRESH_TYPE is not set to FULL_LOAD?
When REFRESH_TYPE is not set to FULL_LOAD, the pipeline follows incremental load logic based on the watermark configuration.
How it works
The pipeline checks the HIGH_WATERMARK value stored in the lookup table.
This value represents the maximum value of the watermark column from the last successful load.
Based on the HIGH_WATERMARK:
If HIGH_WATERMARK is NULL (e.g., the table has zero rows or no previous load):
→ The pipeline performs a FULL LOAD
If HIGH_WATERMARK has a value:
→ The pipeline performs an INCREMENTAL LOAD
Incremental Load Logic
The pipeline compares each source record’s watermark column (e.g., LAST_UPDATE_DATE) with the HIGH_WATERMARK
Only records with newer or updated values are fetched and loaded into the destination
Key Behavior
First run (or empty destination) → FULL LOAD
Subsequent runs → INCREMENTAL LOAD
HIGH_WATERMARK is updated after each successful pipeline execution
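Conceptually, the incremental load behaves like the following source-side query. This is an illustration only, not the SQL that BI Connector actually generates; the table name and the watermark value are placeholders:

```sql
-- Illustrative sketch of the incremental filter, not BI Connector's generated SQL.
-- Only records whose watermark column exceeds the stored HIGH_WATERMARK are fetched.
SELECT *
FROM   source_table
WHERE  LAST_UPDATE_DATE > <high_watermark_from_lookup_table>;
```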
FAQ 9: If BI Connector Desktop Edition and Server Edition are on different machines, how can a Desktop user load data into ADF?
When BI Connector Desktop Edition and Server Edition are installed on different machines, Desktop users can still load data into ADF by integrating with the Server Edition and publishing tables.
Step 1: Create BIC Server Integration in Desktop Edition
In BI Connector Desktop Edition, open Settings from the left panel and go to the BIC Server section.
Create a new Server Integration:
Provide a meaningful Integration Name
Enter the BI Connector Server Edition URL
Add the API Key
To get the API Key:
Open BI Connector Server Edition
Navigate to Settings from the left menu, then go to the API Key section
Copy the key and paste it into Desktop Edition
Step 2: Publish Tables to Server Edition
In Desktop Edition, go to the Tables section.
In the Actions column:
Click the Publish icon for a single table
Map it to the created BIC Server Integration and publish
To publish multiple tables:
Select tables using the checkboxes
Click the Publish button at the top right
This will publish the Tables and DSN to the BI Connector Server Edition.
Step 3: Set Up the Lookup Table
Insert new rows in the lookup table for the tables published to the Server Edition
FAQ 10: How to configure pipeline frequency based on hourly and daily scheduling?
Pipeline frequency should be configured based on how table refresh intervals are defined in the lookup table.
Scenario 1: Hour-Based Scheduling (e.g., 2 hours, 3 hours, 4 hours)
If all tables are scheduled based on hours:
Set FREQUENCY (in hours) in the lookup table as:
2 → runs every 2 hours
3 → runs every 3 hours
4 → runs every 4 hours
Configure the pipeline trigger interval = 1 hour
Why?
The pipeline runs every hour
On each run, it checks which tables meet their frequency condition
Only eligible tables are processed
Scenario 2: Day-Based Scheduling (e.g., Daily, 3 Days, Weekly)
If all tables are scheduled based on days:
Convert frequency into hours and store in the lookup table:
Daily → 24
Every 3 days → 24 × 3 = 72
Weekly → 24 × 7 = 168
Configure the pipeline trigger interval = 1 day
Key Rule
Hourly-based tables → Trigger every 1 hour
Day-based tables → Trigger every 1 day
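As an example, the day-based conversions above could be stored with updates like these. The lookup table name and ID values are placeholders; the FREQUENCY column name is taken from the description above, so verify it against your lookup table:

```sql
-- Sketch: store day-based schedules as hours in the FREQUENCY column.
-- Placeholder names: <your_lookup_table_name> and the <..._id> values.
UPDATE <your_lookup_table_name> SET FREQUENCY = 24  WHERE BI_CONNECTOR_ID = <daily_table_id>;     -- daily
UPDATE <your_lookup_table_name> SET FREQUENCY = 72  WHERE BI_CONNECTOR_ID = <three_day_table_id>; -- every 3 days
UPDATE <your_lookup_table_name> SET FREQUENCY = 168 WHERE BI_CONNECTOR_ID = <weekly_table_id>;    -- weekly
```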
FAQ 11: How can I schedule the ADF/Synapse pipeline run?
You can schedule pipeline execution using a trigger in Azure Data Factory (ADF) or Synapse.
Open the pipeline → Click Add Trigger → New/Edit
Click + New under Choose Trigger
Enter Trigger Name and select Type = Schedule
Set Start Time and Frequency based on your requirement.
Click OK → Publish
FAQ 12: Do we need to create a BIC table for data pipelines?
The requirement to create a BIC table depends on the pipeline type.
For OTBI (Analysis Report) Pipelines
Yes, you need to create a single BIC table
In this table, you must include all folder paths where the reports exist in Fusion
This is required to fetch all reports under those folder paths
During lookup table creation, these reports will be available for selection, allowing you to choose the required reports
For Data Model Pipelines
Using Raw Tables (Base Tables):
No BIC table is required
You can directly select base tables while creating the lookup table
Using Existing Data Models or Custom SQL Queries:
BIC table is required
Create a BIC table for the required Data Model or SQL query
During lookup table creation, select this BIC table to create the corresponding entry (row)
FAQ 13: How can I run a specific table manually without waiting for schedule or frequency?
You can trigger a manual run for a specific table by clearing its STARTED_ON column in the lookup table, identified by its BI_CONNECTOR_ID.
UPDATE <your_lookup_table_name> SET STARTED_ON = NULL WHERE BI_CONNECTOR_ID = <table_bi_connector_id>;