Udemy Courses enrolled in: ADF/SSIS/Azure SQL Server VMs & more
The following was tried out on 20th June and then up to the 25th/26th June; the documentation below was updated on the 26th/27th June.
Since then, most of my time has been away for health reasons. Today, 1st July, was all about trying to get the blog post's 'Updated' time to show... no luck, so it is to be done manually only.
1. Good ADF starter, and free - a good quick start: moving JSON from Blob Storage to Data Lake Storage with ADF (20th June)
1. Create a resource group (naming conventions explained); a scripted sketch of this step appears after this list.
1. Create a dashboard with an appropriate naming convention and pin the resource group above to the dashboard.
2. Create a storage account for the source data - in the Udemy class this is done in a new resource group (also sketched after this list).
1. In this account, create a blob container.
2. Upload the JSON file (sales.json, provided by the tutor) into this container.
3. Create another storage account to host Azure Data Lake Storage - that is, to host the target data - so the appropriate setting (hierarchical namespace support) needs to be selected when creating the account (also sketched after this list).
1. Here, create a container
2. Create a directory structure into which to upload files post-processing.
Note: the resource groups and containers above are pinned to the dashboard.
4. Download Azure Storage Explorer.
1. Looks like this is linked to a storage account; to be checked further.
5. The main part of the course - creating the Azure data pipeline. This is where ADF begins to be used.
1. Open the ADF resource (in its resource group). This opens the ADF overview page. Select 'Author & Monitor' to create the pipeline.
2. On the page that 'Author & Monitor' opens, the left panel has an 'Author' option. Click on this to create the pipeline.
3. 'New dataset' option: here, select the type of dataset. In this example it is Azure Blob Storage, but there are other types such as Amazon S3, Azure Data Lake Storage, and so on.
4. Select 'Set properties' for this dataset. In that, create a linked service, or use an existing one.
a. While doing so, among other things, select the storage account for this linked service. Also, select the data source (file) location.
5. Next, now that the dataset is created, go on to create the transformation logic. To do this, click on the 'Move and transform' option.
a. Next, select the 'Copy data' option. (There is also a 'Data flow' option.)
b. Set properties such as source/sink for this 'Copy data' operation.
6. Since the sink dataset has not been created yet, create a new one from the Azure Data Lake storage account.
7. Next, publish the pipeline.
8. Next, 'Add trigger' is needed. Select 'Trigger now' to run the pipeline.
9. Once published, choose Monitor to look at the pipeline run information. (Scripted sketches of steps 1-3 and 5-9 follow this list.)
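The portal steps above can also be scripted. The sketches below use the Azure SDK for Python; they are minimal, hedged equivalents of steps 1, 2, 3, and 5-9 respectively, not what the course itself does, and every subscription ID, account name, key, and path in them is a placeholder of my own. First, step 1's resource group (pinning to a dashboard stays a portal action):

```python
# Minimal sketch of step 1: create the resource group via the Python SDK.
# 'rg-adf-demo' and 'eastus' are assumed names, not from the course.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

rg = client.resource_groups.create_or_update(
    "rg-adf-demo",           # assumed name following a naming convention
    {"location": "eastus"},  # assumed region
)
print(rg.name, rg.location)
```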
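Step 2 (source container plus the sales.json upload) could look like this with the azure-storage-blob package; the container name 'input' is an assumption:

```python
# Minimal sketch of step 2: create the source blob container and upload sales.json.
from azure.storage.blob import BlobServiceClient

conn_str = "<source-storage-connection-string>"  # placeholder
svc = BlobServiceClient.from_connection_string(conn_str)

container = svc.create_container("input")  # assumed container name
with open("sales.json", "rb") as data:
    container.upload_blob(name="sales.json", data=data, overwrite=True)
```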
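For step 3, the 'appropriate setting' is the hierarchical namespace flag; in the management SDK that is is_hns_enabled=True on a StorageV2 account. Account and directory names below are assumptions:

```python
# Minimal sketch of step 3: create the target ADLS Gen2 account, a file system
# (container), and a directory structure for the post-processing files.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters
from azure.storage.filedatalake import DataLakeServiceClient

subscription_id = "<subscription-id>"  # placeholder
storage = StorageManagementClient(DefaultAzureCredential(), subscription_id)

poller = storage.storage_accounts.begin_create(
    "rg-adf-demo",      # assumed resource group
    "adfdemodatalake",  # assumed (globally unique) account name
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_LRS"),
        kind="StorageV2",
        location="eastus",
        is_hns_enabled=True,  # hierarchical namespace -> Data Lake Storage Gen2
    ),
)
account = poller.result()

# Data-plane calls; assumes the signed-in identity has data access on the account.
dl = DataLakeServiceClient(
    account_url="https://adfdemodatalake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs = dl.create_file_system("output")    # assumed container name
fs.create_directory("processed/sales")  # assumed directory structure
```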
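Steps 5-9 are done in the ADF authoring UI in the course; the same objects (linked services, datasets, a Copy activity, a one-off run, and monitoring) can be sketched with the azure-mgmt-datafactory package. This assumes the data factory itself already exists; the model names are from that package, and all resource names and keys are placeholders:

```python
# Minimal sketch of steps 5-9: linked services, datasets, a Copy activity,
# a 'Trigger now'-style run, and a monitoring check, against an existing factory.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureBlobDataset, AzureBlobFSDataset, AzureBlobFSLinkedService, AzureBlobFSSink,
    AzureStorageLinkedService, BlobSource, CopyActivity, DatasetReference,
    DatasetResource, LinkedServiceReference, LinkedServiceResource, PipelineResource,
    SecureString,
)

SUB, RG, DF = "<subscription-id>", "rg-adf-demo", "adf-demo-factory"  # placeholders
adf = DataFactoryManagementClient(DefaultAzureCredential(), SUB)

# Step 4's 'create Linked Service', scripted: one for the blob source...
src_ls = LinkedServiceResource(properties=AzureStorageLinkedService(
    connection_string=SecureString(value="<source-connection-string>")))
adf.linked_services.create_or_update(RG, DF, "ls_blob_source", src_ls)

# ...and one for the ADLS Gen2 sink (account_key as a raw string is an assumption).
sink_ls = LinkedServiceResource(properties=AzureBlobFSLinkedService(
    url="https://adfdemodatalake.dfs.core.windows.net",
    account_key="<datalake-account-key>"))
adf.linked_services.create_or_update(RG, DF, "ls_adls_sink", sink_ls)

# Source dataset: sales.json in the 'input' container.
src_ds = DatasetResource(properties=AzureBlobDataset(
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference", reference_name="ls_blob_source"),
    folder_path="input", file_name="sales.json"))
adf.datasets.create_or_update(RG, DF, "ds_sales_in", src_ds)

# Sink dataset: the target directory in the data lake (step 6).
sink_ds = DatasetResource(properties=AzureBlobFSDataset(
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference", reference_name="ls_adls_sink"),
    folder_path="output/processed/sales"))
adf.datasets.create_or_update(RG, DF, "ds_sales_out", sink_ds)

# The 'Copy data' activity, saved as a one-activity pipeline (steps 5 and 7).
copy = CopyActivity(
    name="CopySalesJson",
    inputs=[DatasetReference(type="DatasetReference", reference_name="ds_sales_in")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="ds_sales_out")],
    source=BlobSource(), sink=AzureBlobFSSink())
adf.pipelines.create_or_update(RG, DF, "pl_copy_sales",
                               PipelineResource(activities=[copy]))

# 'Trigger now' (step 8), then the Monitor view's information (step 9).
run = adf.pipelines.create_run(RG, DF, "pl_copy_sales", parameters={})
print(adf.pipeline_runs.get(RG, DF, run.run_id).status)
```

Here the create_or_update calls play the role of the UI's 'Publish', and create_run is the 'Trigger now' equivalent.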
2. Azure SQL Server for Beginners - part 1 of 2
- DBA perspective. Small sections on IaaS Azure VM installation/deletion.
3. Mastering SQL Server 2016 Integration Services (SSIS) - Part 1
- SSIS all the way, though with no Azure at all
4. An 18-Hour SQL/SQL Server 2014/Visual Studio 2017 Course
- On-premises only. Step-by-step walk-through of app development on SQL Server.