The Ultimate Guide to Bringing Oracle Fusion Data Into Your Data Warehouse

Introduction

Oracle Fusion applications help companies streamline their day-to-day operations. The Oracle Fusion ERP, SCM, HCM, and CX apps cover the core functions of enterprise businesses, from Financials and Supply Chain to Human Capital, Projects, and Customer Relationship Management, which is why they are widely adopted across industries and geographies.

In this blog post, we’ll examine why companies move Fusion data to their Data Warehouse (DW) and the challenges they face with the available tools and processes. We’ll also cover a one-stop solution for moving Fusion data to your warehouse and keeping your DW automatically in sync with your Fusion data, securely and smartly!

Why Do Companies Move Oracle Fusion Data to a Data Warehouse?

Here are the top reasons why companies want to move Fusion data to their DW:

  • Maintain a single source of truth – Having all the data from across different sources (with Oracle Fusion being one of them) in one place is essential for effective data management.
  • Simplify reporting and analytics – The Data Analysts can quickly source the required Oracle Fusion data from the Warehouse and spend more time on the actual analysis rather than wrestling with manual processes with CSV/Excel files.
  • Create an Enterprise-grade semantic model – This would enable Analysts to directly join multiple tables and issue a query to the DW from their BI tool instead of pulling each table individually and modeling it manually in the BI tool.
  • Feed Fusion data to AI and ML workloads – The mission-critical data lives in Oracle for most companies using Oracle Fusion. Hence, moving the data to a DW makes it easy for them to feed their AI and ML projects for predictive and prescriptive analytics.
  • Reduce direct BI workloads on Fusion production systems – Routing concurrent end-user queries directly to Fusion could cause the system to timeout or hit memory constraints.
  • Achieve compliance – Oracle stores historical data only up to a specific duration in the production instances. However, for most companies using Oracle Fusion, storing the historical data for longer durations is necessary to meet federal compliance and audit requirements.
  • Facilitate integrations with homegrown and downstream applications – Some companies still conduct their critical back-office operations using legacy, homegrown applications heavily tied to the DW.

Oracle Fusion ETL Problems

In this section, we’ll see the list of challenges that companies wrestle with in Oracle Fusion ETL projects:

  • Oracle Database connector doesn’t work with Fusion applications – This often comes as a shock to DW teams that planned to use the Oracle Database connector to move Fusion data to the Warehouse. Most DW teams are unaware that, for security reasons, Oracle does not expose the underlying database’s connection string for Fusion applications. So it’s impossible to use the standard Oracle Database connector to pull Base Table and View data out of a Fusion instance.
  • Fusion ecosystem’s timeout and memory constraints – Oracle handles concurrency seamlessly and delivers strong performance for OLTP operations, but analytical (OLAP) workloads lag in performance and run into stringent timeout, memory, and concurrency limits.
  • Complex BICC PVO setup and absent columns – DW teams find it difficult to understand BICC’s PVO schemas, as a number of required columns are often unavailable in them. Companies work with Oracle to expose the absent columns in PVOs by raising an SR, which has long turnaround times to get each field exposed. Further, the setup is complex and prone to errors that constantly demand user attention to manually fix the failed jobs.
  • REST API Row Limitations and Historical Data Fetch Issues – Oracle Fusion’s REST APIs are designed for real-time sync with external systems. They return at most 500 rows per fetch and are not suitable for pulling historical data with millions of rows.
  • Labor-intensive BI Publisher Reports – Unlike the REST APIs, BI Publisher reports (which primarily consume data from BI Publisher Data Model objects) have no row limits, but they frequently hit timeout errors. Users are therefore forced to split the data at the source and manually append it in the DW. Further, DW teams find it extremely difficult to fix failed jobs in an automated setup, often forcing them to start all over again.
  • Constraints with Oracle alternatives – Some companies use the following Oracle tools to move Fusion data to DW:
    • Oracle Data Integrator (ODI) and Oracle Integration Cloud (OIC) are heavily tied to the BICC PVOs and the standard views available in HCM Extracts (which is tied solely to HCM applications). 
    • Fusion Data Intelligence (FDI), formerly known as Fusion Analytics Warehouse (FAW), is technically a bundle of Oracle Analytics Cloud (OAC) and Autonomous Data Warehouse (ADW), with some prebuilt Subject Areas (a different set of schemas used to create OTBI reports) and business objects like Invoices, Suppliers, etc., but it doesn’t help users access the base tables directly. Further, its OAC component is generally tied to an OCPU capacity, which limits the number of rows you can access in reports.
    • Oracle GoldenGate Replication Tool – Since Fusion is in the cloud, the base tables are not directly accessible via the Oracle GoldenGate tool. The approach here is heavily tied to FDI, so you can pull the Subject Area schemas and business objects from the FDI to your DW. It comes with the Change Data Capture (CDC) functionality, but it’s not of great value as the base tables are inaccessible!
    • Oracle’s OAC connector for Power BI – Some companies try this option by mistake only to later find out this works just with OAC’s reporting objects and not the reporting objects existing in the Fusion BI ecosystem. Further, it doesn’t capture the actual column names (even when connecting to the OAC in FDI), and it’s meant for Power BI reporting and not Data Warehousing.
  • Inability to tap into existing OTBI reports and BI Publisher Data Models – Some companies know exactly what data they want and have already created the views they need as OTBI Reports or BI Publisher Data Models. Data Models are especially useful because they let users write SQL queries that fetch data from the base tables and views, so companies create many Data Models containing the views they need. However, without a connector for these reporting objects, they cannot automate the data sync to their DW.
  • Third-party tool limitations – Companies also use third-party tools such as CData, Fivetran, ExoInsight, Oracle’s OAC connector for Power BI, and SQLConnect to move data from Fusion to a DW. None of these tools connects directly to both BI Publisher Data Models and OTBI reports. Let’s take a quick look at their limitations:
    • CData works via the REST APIs and inherits their limitations
    • Fivetran works with PVOs and BI Publisher reports, and is bounded by the limitations of both
    • ExoInsight is subject to the BI Publisher report limitations and is complex to set up
    • SQLConnect is a data extraction tool; it doesn’t move the data into your DW automatically.

Many other lesser-known third-party tools share the limitations of the tools mentioned above.
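To make the REST API limitation above concrete, here is a minimal sketch of paging through a row-limited endpoint. It assumes the common limit/offset paging pattern; `fetch_page` is a hypothetical stand-in for the actual HTTP call, not a real Fusion API.

```python
# Sketch: paging through a row-limited endpoint, assuming a limit/offset
# pattern. fetch_page() is a hypothetical stand-in for one REST call.

PAGE_SIZE = 500  # rows per fetch, per the limit described above

def fetch_page(offset, limit, source):
    """Hypothetical stand-in for one REST call; returns up to `limit` rows."""
    return source[offset:offset + limit]

def fetch_all(source):
    """Pull a full dataset 500 rows at a time -- workable for small syncs,
    painfully slow for historical tables with millions of rows."""
    rows, offset = [], 0
    while True:
        page = fetch_page(offset, PAGE_SIZE, source)
        rows.extend(page)
        if len(page) < PAGE_SIZE:  # short page => no more data
            return rows
        offset += PAGE_SIZE

# Roughly 2,000 round trips for a 1,000,000-row table at 500 rows each.
```

This is why row-limited APIs are a poor fit for bulk historical loads: the round-trip count scales linearly with table size.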

The One-stop Solution: BI Connector

Now, let’s dive into the one-stop solution, BI Connector, which helps companies achieve every goal listed above while overcoming all the challenges of moving data from Fusion to a DW.

Here, in brief, is how BI Connector helps companies move data from Fusion to a DW:

  • Supported Fusion data extract sources: Base tables/Views, OTBI Reports and Custom Data Models (or User-defined views).
    • All Fusion data is accessible via the Base Tables/Views.
    • Users can create customized views on top of base tables and pre-defined views by joining them as required.
    • No need to gain an understanding of BICC PVOs or HCM Extracts
    • Eliminates the pain of using labor-intensive BI Publisher reports, Rest APIs, Oracle alternatives, and other third-party tools
  • Row limits applied: None. All rows are fetched into the DW, even though:
    • Oracle applies row limits to consume reports directly from within Fusion apps
    • OAC in FDI is tied to an OCPU-based row limit
  • Data Orchestration platforms supported: Microsoft Fabric and Azure Data Factory (ADF)
  • DW compatibility: All DWs with a native connector in Microsoft Fabric/ADF
  • Ready-to-deploy Oracle Fusion pipeline templates for Fabric and ADF via BI Connector
    • Data warehouses supported:
      • SQL Server 
      • Azure SQL
      • Oracle Database 
      • Snowflake
    • Automated DDL statements executed for non-existent tables in DW – The templates automatically execute the DDL statements for non-existent tables, while skipping the step for ones that already exist.
    • Update of existing rows: Templates are configured for UPSERT operations based on primary key
    • Works in the absence of primary keys: Supports concatenated columns as composite primary keys
    • Data load flexibility: Both full and incremental loads via Watermark columns (either a Date column or a Numeric primary key column)
    • Quick setup – Get the pipelines up and running in hours, not months or years
    • Oracle Timeout and memory constraints – Eliminated as pipelines are optimized for sequential runs and data chunking
    • Dynamic refresh schedules for each table in the same pipeline template – Want each table to refresh at different frequencies? The same pipeline templates handle all the refresh frequency dynamics.
    • How are pipeline run failures handled? The BI Connector’s pipeline templates are:
      • Idempotent – Rerunning the pipeline any number of times will not duplicate data in the DW or cause drift, including for UPSERT operations
      • Self-healing – If a run fails, the failed data load is auto-corrected in the next scheduled run, or can be fixed immediately by manually triggering the pipeline.
    • Deployment and maintenance support – Don’t have the resources to create or monitor pipelines? The BI Connector team will use the templates with the connector and handle your pipeline deployment and maintenance as a service.
  • Bonus: BI Connector has a Custom SQL Editor to quickly write and fine-tune Data Model queries for Fusion. BI Connector also connects to OAC/FDI and OBIEE/OAS data sources, and supports direct connections from Power BI Desktop/Service/Report Server and Tableau Desktop/Prep/Server/Cloud.
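The watermark and UPSERT behavior described above can be illustrated with a minimal sketch. This uses SQLite purely for illustration; the table and column names (`invoices`, `last_update_date`) are assumptions, and the actual templates run inside Fabric/ADF rather than in Python.

```python
# Sketch of the watermark + UPSERT pattern: load only rows newer than the
# watermark, and upsert on the primary key so reruns are idempotent.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE invoices (id INTEGER PRIMARY KEY, amount REAL, last_update_date TEXT)"
)

def load_increment(conn, src_rows, watermark):
    """UPSERT every source row newer than the watermark; safe to rerun."""
    new_rows = [r for r in src_rows if r[2] > watermark]
    conn.executemany(
        # ON CONFLICT on the primary key makes reruns idempotent:
        # re-applying the same rows overwrites rather than duplicates.
        "INSERT INTO invoices (id, amount, last_update_date) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET amount = excluded.amount, "
        "last_update_date = excluded.last_update_date",
        new_rows,
    )
    # New watermark = latest date loaded; the next run picks up only later changes.
    return max((r[2] for r in new_rows), default=watermark)

src = [(1, 100.0, "2024-01-01"), (2, 250.0, "2024-02-01")]
wm = load_increment(conn, src, "2023-12-31")
wm = load_increment(conn, src, "2023-12-31")  # rerun: no duplicates
```

The key design point is that correctness does not depend on the run succeeding exactly once: replaying the same increment converges to the same state.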

In the following sections, we’ll look at the architecture for using the BI Connector with Fabric and Azure Data Factory to move your Fusion data into DW.

Moving Fusion Data to DW Using BI Connector with Microsoft Fabric

Below is a quick architecture depiction showing how the data flows from Oracle Fusion to your DW via BI Connector with Fabric.

Fig 1: BI Connector Architecture for Moving Oracle Fusion Data to DW via Fabric

  • Install BI Connector on the On-Premise Gateway Server – The BI Connector and Fabric interaction happens via the On-premise Gateway Server. Hence, the BI Connector must be installed on the Gateway Server. 
  • Configure a Data Source in BI Connector to Oracle Fusion – Create a BI Connector Data Source for your Oracle Fusion instance on the Gateway Server. It’s as simple as entering your instance URL, username, and password, and saving the connection.
  • Configure a Connection to the BI Connector Data Source and DW in Fabric – Create a connection to your BI Connector Data Source from the Power BI Service via the Gateway Server, and create another connection to your DW.
  • Set up the Lookup table referenced by the Fabric Pipeline – A Lookup table is a simple table listing the Fusion Base Table names to pull data from, along with each table’s DW destination table and schema names, primary key, watermark column, refresh frequency, and load type (incremental or full). Load this Lookup table as a separate table in your DW; the Fabric pipeline simply reads it to drive the data movement.
  • Configure the BI Connector Pipeline Template for Fabric – Next, pull the BI Connector’s Fabric pipeline template for your DW, map your Gateway connections to the source and destination, and run the pipeline.
  • Add a Trigger for Automated Pipeline Runs in Fabric – Finally, set up a trigger on the Fabric pipeline for automated data loads henceforth. The entire process can be completed in a couple of hours!
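As a rough illustration of the Lookup table described in the steps above, here is the kind of row the pipeline iterates over. The column names below are assumptions for illustration, not BI Connector’s exact schema.

```python
# Illustrative shape of the Lookup table that drives the pipeline.
# Column names are assumptions, not BI Connector's exact schema.
lookup_rows = [
    {
        "source_table": "AP_INVOICES_ALL",       # Fusion base table to pull
        "dest_schema": "fusion",                  # DW destination schema
        "dest_table": "ap_invoices",              # DW destination table
        "primary_key": "INVOICE_ID",              # used for UPSERTs
        "watermark_column": "LAST_UPDATE_DATE",   # drives incremental loads
        "load_type": "incremental",               # or "full"
        "refresh_frequency": "daily",
    },
]

def plan_run(rows):
    """Return (source, destination, load_type) work items for one pipeline run."""
    return [
        (r["source_table"], f'{r["dest_schema"]}.{r["dest_table"]}', r["load_type"])
        for r in rows
    ]
```

Because the pipeline is driven by this table rather than hard-coded steps, adding a new Fusion table to the sync is just adding a row, not editing the pipeline.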

Moving Fusion Data to DW Using BI Connector with Azure Data Factory (ADF)

The steps for ADF are mostly the same steps used in Fabric. The only major difference is that the Gateway Server (for Fabric) is replaced by a Self-Hosted Integration Runtime (SHIR) Server. Below is a simple depiction of the architecture:

Fig 2: BI Connector Architecture for Moving Oracle Fusion Data to DW via ADF

  • Install BI Connector on the SHIR Server – The BI Connector and ADF interaction happens via the SHIR Server. Hence, the BI Connector must be installed on the SHIR Server. 
  • Configure a Data Source in BI Connector to Oracle Fusion – Create a BI Connector Data Source for your Oracle Fusion instance on the SHIR Server. It’s as simple as entering your instance URL, username, and password, and saving the connection.
  • Configure a Linked Service to the BI Connector Data Source and DW in ADF – Create a Linked Service each for your BI Connector Data Source and your DW in ADF.
  • Set up the Lookup table referenced by the ADF Pipeline – A Lookup table is a simple table listing the Fusion Base Table names to pull data from, along with each table’s DW destination table and schema names, primary key, watermark column, refresh frequency, and load type (incremental or full). Load this Lookup table as a separate table in your DW; the ADF pipeline simply reads it to drive the data movement.
  • Configure the BI Connector Pipeline Template for ADF – Next, pull the BI Connector’s ADF pipeline template for your DW, map your Linked Service connections to the source and destination, and run the pipeline.
  • Add a Trigger for Automated Pipeline Runs in ADF – Finally, set up a trigger on the ADF pipeline for automated data loads henceforth. You can complete the entire process in a couple of hours!

In essence, once you install the BI Connector in the SHIR, you follow pretty much the same steps as in Fabric!

The BI Connector Benefits

Moving your Fusion data to DW using BI Connector has multifold benefits:

  • Set up in hours, not months or years – With BI Connector, moving data from Fusion to DW is no longer a project with multiple milestones spanning months or years. The entire setup with BI Connector is finished in hours, so you practically start at the finish line.
  • Jumpstart with the Fabric & ADF pipeline templates – Convinced that BI Connector can get the job done, but worried about building the pipelines from scratch? That’s why we have pre-built ADF and Fabric pipeline templates for the major DW platforms: SQL Server, Azure SQL, Oracle Database, and Snowflake!
  • White-glove service by BI Connector Team – Is your team running short of time to deploy and maintain the pipelines? BI Connector has got you covered with the white-glove service! The time to deploy is also shortened tremendously as the BI Connector Team would use the proprietary Fabric and ADF templates.
  • Quicker time to insight – Moving Fusion data to the DW in an automated way using BI Connector helps teams combine Fusion with other data sources faster, while significantly reducing the load on Oracle. It’s a win on all sides!
  • Save time and money – With BI Connector, companies save the time and money they burn on setting up manual dataflows from Fusion to DW, and fixing the broken ones frequently.

BI Connector – Quick Customer Case Studies

In this section, let’s take a look at some of the real-world scenarios that companies tackled using the BI Connector with Fabric or ADF to move Fusion data to their DW.

Case Study 1: Cutting Data Costs & Complexity – A Big IT Service Firm’s Fusion-to-Azure Migration Using BI Connector With Microsoft Fabric

Problem Scenario: 

  • The DW team relied on a complex BICC + ODI setup. Data moved through multiple staging layers, causing delays, inefficiencies, and higher costs.
  • Team members were comfortable with Base Tables but not with BICC PVOs. This made it harder to manage data flows and maintain consistency.
  • The DW lacked Fusion data integration. Without it, the warehouse failed as a single source of truth, and analytics were not fully trusted.
  • The staging environment needed frequent manual fixes to broken dataflows. This added significant overhead and drove up IT costs.

BI Connector Solution

  • Using BI Connector with Fabric, the team quickly set up a streamlined pipeline whose ETL logic is driven by a lookup table, replacing the complex legacy setup.
  • The first load of all 212 tables, including multi-million row datasets and hundreds of columns, was completed in just a day and a half.
  • For complex tables with composite or hash keys, BI Connector handled the UPSERT logic seamlessly. The customer simply entered the composite key columns, separated by commas, in the Primary Key column of the Lookup table.
  • Deleted records were automatically identified in Fusion and removed in Azure SQL, ensuring consistency between source and destination systems.
  • Subsequent incremental refreshes were scheduled to run twice a day, with the entire batch of 212 tables refreshed in roughly two hours per run.
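The comma-separated composite-key entry mentioned above can be handled with logic along these lines. This is a sketch under assumed column names, not BI Connector’s actual implementation.

```python
# Sketch: turning a comma-separated primary-key entry from the Lookup table
# into an UPSERT match condition. Column names are illustrative only.

def build_match_condition(primary_key_entry):
    """'PROJECT_ID, TASK_ID' -> SQL predicate matching target rows to staged rows."""
    cols = [c.strip() for c in primary_key_entry.split(",")]
    return " AND ".join(f"target.{c} = source.{c}" for c in cols)
```

A single key and a composite key then flow through the same code path, which is why entering a comma-separated list in one Lookup column is enough.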

Number of tables: 

A total of 212 Base Tables containing critical Projects data were migrated, ensuring the warehouse could fully support enterprise-level reporting needs.

Setup Turnaround time:

The complete pipeline was configured and ready for production within a few hours, dramatically reducing setup time compared to traditional methods.

Benefits: 

  • Data movement from Fusion to Azure SQL was simplified, eliminating multiple staging hops and technical complexity.
  • The removal of ODI and staging costs, along with reduced manual fixes, generated substantial savings for the company.
  • With BI Connector’s white-glove pipeline management in Fabric, the DW team can now redirect effort from operational maintenance to strategic, transformative initiatives.
  • The new setup established a reliable, cost-effective, and scalable single source of truth for analytics across the business.

Case Study 2: Unlocking Fusion HCM Insights with BI Connector & ADF at the Largest Metro Organization of Southern California

Problem Scenario: 

  • The Metro Organization relied on an on-prem Oracle database (tied to EBS) as their data warehouse. They struggled to bring Fusion HCM data into it.
  • Complexities around BICC PVOs and HCM Extracts created constant challenges. This made the data integration difficult for the DW team to manage effectively.
  • Frequent errors in the existing dataflow setup demanded manual intervention, consuming valuable time and resources.
  • These issues also impacted the Power BI team, who could not deliver timely, accurate HR insights to decision-makers.

BI Connector Solution:

  • Using the BI Connector with ADF, the customer moved data from critical Base Tables into the Oracle on-prem database. Some tables used full load while others leveraged incremental refresh.
  • The BI Connector team guided the customer in building the ADF pipeline from scratch, ensuring a smooth setup process.
  • Even for tables without a LAST_UPDATE_DATE column, the BI Connector team implemented incremental load strategies to optimize performance.
  • Power BI was directly connected to Oracle Fusion Data Models via BI Connector. This enabled automated report refreshes in Power BI Service.
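One possible shape of an incremental strategy for a table lacking LAST_UPDATE_DATE, as mentioned above, is to watermark on a numeric primary key. This is a sketch of that general idea, not the specific strategy the BI Connector team implemented; `fetch_rows_after` is a hypothetical stand-in for the source query, and the approach only captures new rows, not updates to existing ones.

```python
# Sketch: incremental load keyed on a numeric primary key instead of a
# last-update timestamp. Captures inserts only (append-only tables).

def incremental_by_pk(fetch_rows_after, last_seen_id):
    """Pull only rows whose numeric PK exceeds the stored watermark."""
    rows = fetch_rows_after(last_seen_id)  # e.g. WHERE PERSON_ID > :last_seen_id
    new_watermark = max((r[0] for r in rows), default=last_seen_id)
    return rows, new_watermark
```

Each run stores `new_watermark` and passes it back in as `last_seen_id` next time, so only genuinely new rows cross the wire.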

Number of tables:

Over 70 Base Tables migrated from Fusion HCM to Oracle on-prem database.

Setup Turnaround time:

The end-to-end ADF pipeline was built and production-ready in under two weeks, including incremental refresh setup.

Benefits:

  • Removed the high cost and effort of maintaining and fixing error-prone legacy dataflows.
  • Established a trusted single source of truth in the on-prem Oracle DW, extending visibility to Fusion HCM data.
  • Empowered the Power BI team to deliver automated, accurate HR reporting directly from Fusion, significantly improving decision-making for HR leaders.

Conclusion

Moving Fusion data to your DW is no longer a project that spans months or years across multiple milestones. With BI Connector’s battle-tested ADF and Fabric templates, companies can set up Fusion-to-DW data pipelines and go live in a few hours!

© 2026 Guidanz