Azure Machine Learning designer training and automated batch inference using Azure Synapse and Azure Databricks

Balamurugan Balakreshnan
2 min read · May 21, 2022

An end-to-end batch inference walkthrough using Azure Synapse, Azure Databricks, and an Azure Machine Learning (AML) batch inference pipeline.

Prerequisites

  • Azure account
  • Azure Machine Learning workspace
  • Azure Storage account
  • Azure Databricks workspace
  • Azure Synapse workspace

Architecture

  • Use AML designer to create a batch inference pipeline
  • Automate batch inferencing with Azure Synapse and Azure Databricks

Designer Training

  • Create an experiment in the designer
  • Choose a compute cluster
  • Use an open-source sample dataset
  • Click Submit and train the model
  • Select Create batch inference pipeline
  • Create a datastore pointing to ADLS Gen2 and register a new dataset with an empty file (see the SDK sketch after this list)
  • Then add an Export Data module
  • Save the output as Parquet and give it a file name
  • Submit and wait for the run to complete
  • Then click Publish
  • Wait for the batch inference endpoint to be published
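
The datastore and dataset registration can also be done with the Azure ML Python SDK (v1) instead of the studio UI. The sketch below is a minimal example under assumed names: the storage account, container, datastore name, and service principal credentials are all placeholders for illustration, not values from the original article.

```python
from azureml.core import Workspace, Datastore, Dataset

# Assumes a config.json downloaded from the AML workspace is in the working directory.
ws = Workspace.from_config()

# Hypothetical names: storage account "mystorageacct", ADLS Gen2 container "batchinput",
# and a service principal that has access to the storage account.
datastore = Datastore.register_azure_data_lake_gen2(
    workspace=ws,
    datastore_name="batch_adls",
    filesystem="batchinput",          # ADLS Gen2 container (filesystem) name
    account_name="mystorageacct",
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)

# Register a file dataset pointing at the folder the batch pipeline will read from.
batch_ds = Dataset.File.from_files(path=(datastore, "batch/"))
batch_ds.register(workspace=ws, name="batch-input", create_new_version=True)
```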

End to End automated batch inference

  • Go to Azure Synapse Analytics
  • Create a pipeline
  • Drag in an Azure Databricks Notebook activity and connect it to the Azure Databricks workspace
  • Select the notebook that creates the input batch dataset and stores it in the batchinput container as a Parquet file
  • Then drag in an Azure Machine Learning Execute Pipeline activity and select the published pipeline
  • Then drag in another Azure Databricks Notebook activity and select the notebook that consumes the batch output and stores it back in a Delta table (a sketch of both notebooks follows this list)
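
A minimal PySpark sketch of the two Databricks notebooks is shown below. The storage account name, secret scope, source table, and file paths are assumptions for illustration; the actual notebooks are in the linked repo. `spark` and `dbutils` are available by default inside a Databricks notebook.

```python
# Shared setup: authenticate Spark to the ADLS Gen2 account (hypothetical names).
spark.conf.set(
    "fs.azure.account.key.mystorageacct.dfs.core.windows.net",
    dbutils.secrets.get(scope="batch-secrets", key="storage-key"),
)

# --- Notebook 1: build the input batch and land it in the batchinput container as Parquet ---
input_df = spark.table("feature_db.daily_features")   # hypothetical source table
(
    input_df.write
    .mode("overwrite")
    .parquet("abfss://batchinput@mystorageacct.dfs.core.windows.net/batch/")
)

# --- Notebook 2: after the AML pipeline runs, read the scored Parquet output
# written by the Export Data module and append it to a Delta table ---
scored_df = spark.read.parquet(
    "abfss://batchinput@mystorageacct.dfs.core.windows.net/scored/output.parquet"
)
(
    scored_df.write
    .format("delta")
    .mode("append")
    .saveAsTable("scored_db.batch_predictions")
)
```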

Output

  • Confirm the batch inference pipeline run completes successfully (one way to trigger and watch it from code is sketched below)
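
If you want to verify the published endpoint outside the Synapse pipeline, you can resubmit and monitor it with the AML SDK. This is a sketch only; the pipeline ID and experiment name are placeholders you would copy from your own workspace.

```python
from azureml.core import Workspace
from azureml.pipeline.core import PublishedPipeline

ws = Workspace.from_config()

# Placeholder ID: copy it from the published pipeline's Endpoints page in the studio.
published = PublishedPipeline.get(workspace=ws, id="<published-pipeline-id>")

# Submit a run against the published batch inference pipeline and block until it finishes.
run = published.submit(ws, experiment_name="designer-batch-inference")
run.wait_for_completion(show_output=True)
print(run.get_status())
```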

Original article — Samples2022/designerdeploy.md at main · balakreshnan/Samples2022 (github.com)
