In this exercise, you will explore data using the engine of your choice (SQL or Spark).
Understanding data through exploration is one of the core challenges faced today by data engineers and data scientists alike. Depending on the underlying structure of the data, as well as the specific requirements of the exploration process, different data processing engines offer varying degrees of performance, complexity, and flexibility.
In Azure Synapse Analytics, you can use the SQL Serverless engine, the big-data Spark engine, or both.
The tasks you will perform in this exercise are:
- Explore the Data Lake with SQL On-demand and Spark
  - Task 1 - Explore the Data Lake with Synapse SQL On-demand
  - Task 2 - Explore the Data Lake with Synapse Spark
In this task, you will browse your data lake using SQL On-demand.
- In a Microsoft Edge web browser, navigate to the Azure portal (https://portal.azure.com) and log in with your credentials. Then select **Resource groups**.
- Select the Synapse Analytics resource group.
- Select **SQLPool01** and resume it before starting the exercise.
- Return to the resource group, then select the Synapse Analytics workspace.
- On the Synapse workspace blade, open Synapse Analytics Studio by navigating to the **Workspace web URL** on the Overview page. You can also open Synapse Studio by selecting **Open** under **Getting started -> Open Synapse Studio**.
- In Synapse Analytics Studio, from the left panel, select the expand icon and navigate to the **Data** hub.
- Switch to the **Linked** tab (1). Under **Azure Data Lake Storage Gen2** (2), expand the **asaworkspace<UniqueId>** primary data lake storage account (3), and then select the **wwi** file system (4).
- Inside the selected file system, double-click to navigate to **factsale-parquet -> 2012 -> Q1 -> InvoiceDateKey=2012-01-01** (5).
- Once you are in **InvoiceDateKey=2012-01-01**, right-click the Parquet file and select **New SQL script -> Select TOP 100 rows**. A script is automatically generated. Run this script to see how SQL on-demand queries the file and returns the first 100 rows with the header, allowing you to easily explore the data in the file.
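  The generated script is roughly of the following shape; the storage account name and the Parquet file name below are placeholders, so the exact text in your generated script will differ.

  ```sql
  -- Sketch only: the auto-generated query uses OPENROWSET to read the Parquet file
  -- directly from the data lake (placeholder account and file names).
  SELECT
      TOP 100 *
  FROM
      OPENROWSET(
          BULK 'https://<yourdatalakestorageaccountname>.dfs.core.windows.net/wwi/factsale-parquet/2012/Q1/InvoiceDateKey=2012-01-01/<filename>.parquet',
          FORMAT = 'PARQUET'
      ) AS [result]
  ```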
- Ensure the newly created script is connected to the **Built-in** pool and select **Run**. Data is loaded by the built-in SQL pool and processed as if it were coming from any regular relational database. Note: SQL on-demand is now named **Built-in**.
- Let us change the initial script to load multiple Parquet files at once.
  - In line 2, replace `TOP 100 *` with `COUNT(*)`.
  - In line 5, replace the path to the individual file with `https://<yourdatalakestorageaccountname>.dfs.core.windows.net/wwi/factsale-parquet/2012/Q1/*/*`

  Note: Replace `<yourdatalakestorageaccountname>` with the Storage Account Name provided in the environment details section on the Lab Environment tab on the right.
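  After both edits, the script should look roughly like the sketch below. The storage account name is a placeholder, and the two wildcards in the path make OPENROWSET read every Parquet file under the Q1 folders instead of a single file.

  ```sql
  -- Sketch only: COUNT(*) across all 2012 Q1 Parquet files (placeholder account name).
  SELECT
      COUNT(*)
  FROM
      OPENROWSET(
          BULK 'https://<yourdatalakestorageaccountname>.dfs.core.windows.net/wwi/factsale-parquet/2012/Q1/*/*',
          FORMAT = 'PARQUET'
      ) AS [result]
  ```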
- Select **Run** to re-run the script. You should see a result of **2991716**, which is the number of records contained in all the Parquet files within the `factsale-parquet/2012/Q1` directory.
- In Azure Synapse Analytics Studio, navigate to the **Develop** hub.
- Select the **Exercise 1 - Read with SQL on-demand** SQL script. Connect to **Built-in**, select **SQLOnDemand01** as the database, and then select **Run** to execute the script. This query demonstrates the same functionality, except this time it loads CSV files instead of Parquet ones (notice the `factsale-csv` folder in the path). Parquet files are compressed and store data in a columnar format for efficient querying, whereas CSV files are raw representations of the data that are easily processed by a large number of systems. You will often encounter many file types stored in a data lake and must know how to access and explore those files. When you access CSV files, for instance, you need to specify the format, field terminator, and other properties so that the query engine understands how to parse the data. In this case, we specify a value of 2 for FIRSTROW, which indicates that the first row of the file must be skipped because it contains the column header. Here we use WITH to define the columns in the files. You must use WITH when using a bulk rowset (OPENROWSET) in the FROM clause; defining the columns also enables you to select and filter their values.
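  The key elements of such a CSV query are sketched below. This is not the exact contents of the provided script: the storage account name, the column names, and the column types are illustrative placeholders, and the pipe field terminator is assumed to match the separator used for the same files later in the Spark notebook.

  ```sql
  -- Sketch only: reading pipe-delimited CSV files with OPENROWSET
  -- (placeholder account, columns, and types).
  SELECT
      TOP 100 *
  FROM
      OPENROWSET(
          BULK 'https://<yourdatalakestorageaccountname>.dfs.core.windows.net/wwi/factsale-csv/2012/Q1/*/*',
          FORMAT = 'CSV',
          FIELDTERMINATOR = '|',
          FIRSTROW = 2  -- skip the header row
      )
      WITH (
          -- WITH is required for a bulk rowset in the FROM clause and lets you
          -- select and filter individual columns.
          [InvoiceDateKey] DATE,
          [CustomerKey] INT,
          [CityKey] INT,
          [Quantity] INT
      ) AS [result]
  ```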
In this task, you will explore the same data lake using the Synapse Spark engine.

- Navigate to the **Data** hub, browse to the data lake storage account folder `wwi/factsale-parquet/2012/Q1/InvoiceDateKey=2012-01-01`, then right-click the Parquet file and select **New notebook -> Load Data frame**.
- This will generate a notebook with PySpark code that loads the data into a dataframe and displays 10 rows with the header.
- Attach the notebook to a Spark pool.
- Select **Run all** on the notebook toolbar to execute the notebook.

  Note: The first time you run a notebook in a Spark pool, Synapse creates a new session. This can take approximately 3 minutes.
- As you can see, the output of the dataframe is displayed with 10 rows. To display 100 rows with the header, replace the last line of code with `display(df.limit(100))`.
- Rerun the notebook to see the result.
- Notice the included charting capabilities that enable visual exploration of your data. Switch to the **Chart** view. Select **View options**, change the **Key** to `CustomerKey` and the **Values** to `CityKey`, and then select **Apply**.
- Collapse the output as illustrated below.
- Hover over the area just below the cell in the notebook, then select **{} Add code** to add a new cell. The **{} Add code** button is not visible until you hover over that area.
- Paste the following into the new cell, replacing `YOUR_DATALAKE_NAME` with the Storage Account Name provided in the environment details section on the Lab Environment tab on the right. You can also copy it from the first cell of the notebook above.

  ```python
  # Load the pipe-delimited CSV files for 2012 Q1 from the data lake into a dataframe.
  data_path = spark.read.load(
      'abfss://wwi@YOUR_DATALAKE_NAME.dfs.core.windows.net/factsale-csv/2012/Q1/*/*',
      format='csv',
      sep='|',
      header=True)

  # Display the first 100 rows.
  display(data_path.limit(100))
  ```
- Select the **Run cell** button to execute the new cell, and then select the **Table** view in the output section. This notebook demonstrates the same functionality, except this time it loads CSV files instead of Parquet ones (notice the `factsale-csv` folder in the path).
- Important: Close the notebook by selecting the **X** in the top right of the tab, and then select **Close + discard changes**. Closing the notebook ensures that you free up the allocated resources on the Spark pool.