Data processing in today's companies is marked by heterogeneous data stores (SQL, NoSQL, unstructured data, etc.) and processing components (databases, Big Data processors, etc.). Data often travels complex paths through a company: from generation or receipt, through various processing components, to storage or distribution to various recipients. With Azure Data Factory, on-premises data such as that in SQL Server can be processed together with cloud data from Azure SQL Database, Blob storage, and Table storage. These data flows can be created, executed, and monitored as simple, highly available data pipelines. Data sources and recipients can be defined, and the movement of data through the company can be traced and monitored from a central location.
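As a sketch of what such a pipeline looks like, the following JSON defines a Data Factory pipeline with a single Copy activity that moves a table from an on-premises SQL Server into Blob storage. The dataset names (OnPremSqlDataset, AzureBlobDataset) and the reader query are hypothetical placeholders; the corresponding linked services and datasets would have to be defined separately in the factory.

```json
{
  "name": "CopyOnPremToBlobPipeline",
  "properties": {
    "description": "Copy a table from on-premises SQL Server to Azure Blob storage",
    "activities": [
      {
        "name": "CopySqlToBlob",
        "type": "Copy",
        "inputs": [ { "name": "OnPremSqlDataset" } ],
        "outputs": [ { "name": "AzureBlobDataset" } ],
        "typeProperties": {
          "source": { "type": "SqlSource", "sqlReaderQuery": "SELECT * FROM Orders" },
          "sink": { "type": "BlobSink" }
        },
        "scheduler": { "frequency": "Hour", "interval": 1 }
      }
    ],
    "start": "2016-01-01T00:00:00Z",
    "end": "2016-01-02T00:00:00Z"
  }
}
```

Once deployed, each hourly slice of this pipeline can be monitored centrally, which is the traceability described above.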
Further resources:
- Introduction to Azure Data Factory
- Orchestrating Data and Services with Azure Data Factory
- New capabilities for data integration in the cloud
- Cortana Intelligence Suite End-to-End
- Intro to Data Factory Deep Dive
- Building Hybrid Big Data Pipelines with Azure Data Factory