David Smith

Hi! I'm David Smith, Data Scientist and Cloud Developer Advocate at Microsoft.

Libraries

by David M Smith (@revodavid), Developer Advocate at Microsoft

Last updated: November 2, 2018

Presented at:

  • ODSC West, November 2018

This library includes three notebooks to support the workshop:

  1. The AI behind Seeing AI. Use the web interfaces to Cognitive Services to learn about the AI services behind the "Seeing AI" app.
  2. Computer Vision API with R. Use an R script to interact with the Computer Vision API and generate captions for random Wikimedia images.
  3. Custom Vision with R. An R function to classify an image as a "Hot Dog" or "Not Hot Dog", using the Custom Vision service.

These notebooks are hosted on Azure Notebooks at https://notebooks.azure.com/davidsmi/libraries/aiforgood, where you can run them interactively. You can also download them and run them locally with Jupyter.
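As a flavor of what the second notebook does, here is a minimal sketch (not the workshop's actual script) of calling the Computer Vision API from R. It assumes the httr package and a key generated in the Azure portal; the endpoint region and API version depend on your own resource.

    # Minimal sketch: request a caption for an image via the Computer Vision API
    library(httr)

    vision_url <- "https://eastus.api.cognitive.microsoft.com/vision/v2.0/analyze"
    vision_key <- "PASTE-YOUR-KEY-HERE"   # KEY 1 from the Azure portal

    response <- POST(
      vision_url,
      query = list(visualFeatures = "Description"),
      add_headers(`Ocp-Apim-Subscription-Key` = vision_key),
      body = list(url = "https://upload.wikimedia.org/wikipedia/commons/3/3c/Shaki_waterfall.jpg"),
      encode = "json"
    )

    result <- content(response)
    # First caption suggested by the service, with its confidence score
    result$description$captions[[1]]$text
    result$description$captions[[1]]$confidence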

If you get stuck or just have other questions, you can contact me here:

David Smith

Modified on: Nov 13, 2018
EARL: Not Hotdog

David M Smith (@revodavid), Developer Advocate at Microsoft

Last updated: September 5, 2018. Clone this library to get the latest updates.

This self-guided workshop provides R scripts to demonstrate various AI techniques using the Microsoft Cognitive Services APIs in Azure. The scripts are provided as Jupyter Notebooks within the Azure Notebooks service. You don't need a Microsoft Account to view the scripts, but you will need to set one up and generate keys in Azure to run the examples. All of the examples use free Azure services.

If you're new to Notebooks, check out the Jupyter Notebook documentation.

If you're new to R, you might want to start with this Introduction to R notebook to get a sense of the language.

You will need:

  1. A Microsoft account. You can use an existing Outlook 365 or Xbox Live account, or create a new one.

  2. A Microsoft Azure subscription. If you don't already have one, you can sign up at https://cda.ms/kT and get $200 in credits to use with paid services. You'll need to provide a credit or debit card, but everything we'll be doing is free to use. If you're a student, you can register at https://cda.ms/kY without a credit card for a $100 credit.

You'll also need a few other things specific to this workshop. Follow the instructions below to set up everything you need.

  1. Visit https://portal.azure.com
  2. Sign in with your Microsoft Account. If you don't have a Microsoft account, use the links above to create one for free.

In Azure, a Resource Group is a collection of services you have created. It groups services together and makes it easy to bulk-delete things later on. We'll create one for this lab.

  1. Visit https://portal.azure.com (and sign in if needed)
  2. Click "Resource Groups" in the left column
  3. Click "+ Add"
    • Resource Group Name: qcon
    • Subscription: there should be just one option
    • Resource Group Location: East US
  4. Click "Create"

A notification will appear in the top right. Click the button "Pin to Dashboard" to pin this resource group to your home page in the Azure portal, as you'll be referring to it frequently.

  1. Visit https://portal.azure.com (and sign in if needed)
  2. Click "+ Create a Resource" (top-left corner)
  3. Click "AI + Cognitive Services"
  4. Click "Computer Vision API"
    • Name: qcon-vision
    • Subscription: there should be just one option
    • Location: East US
    • Pricing Tier: F0 (free, 20 calls per minute)
    • Resource Group: Use existing "qcon" group
  5. Click "Create""
  1. Visit https://portal.azure.com (and sign in if needed)
  2. Click "+ Create a Resource" (top-left corner)
  3. With the "Search the Marketplace" box, search for "Custom Vision Service"
  4. Select "Custom Vision Service (preview)" and click "Create"
    • Name: qcon-customvision
    • Subscription: there should be just one option
    • Location: South Central US
    • Prediction Pricing Tier: F0 (free, 2 transactions per second)
    • Training pricing Tier: F0 (2 projects)
    • Resource Group: Use existing "qcon" group
  5. Click "Create"
  1. Visit https://notebooks.azure.com/davidsmi/libraries/qcon
    • Sign in with your Microsoft account if needed
  2. Click Clone in the toolbar to create a copy of the workshop materials

Download the keys.txt file and fill in the keys listed, copying them from the Azure Portal.

(To download this file, highlight it in the Library view and then click the download icon in the toolbar.)

For the first line of the file, , change it to . For the remaining keys, visit your resource group in the Azure Portal and then:

  1. Click on the API resource for Computer Vision

  2. In the menu, click on "keys"

  3. Click the "copy to clipboard" icon next to KEY 1. (You can ignore KEY 2.)

  4. Paste the key into the entry in keys.txt

  5. Click on the API resource for Custom Vision

  6. In the menu, click on "keys"

  7. Click the "copy to clipboard" icon next to KEY 1. (You can ignore KEY 2.)

  8. Paste the key into the entry in keys.txt

  9. Click on the API resource for Custom Vision Prediction (this resource was created automatically for you).

  10. In the menu, click on "keys"

  11. Click the "copy to clipboard" icon next to KEY 1. (You can ignore KEY 2.)

  12. Paste the key into the entry in keys.txt

Your final file will look similar, but with your own (working) keys.

Once you've done this for all the cognitive services, save the file and upload it to replace the existing file in Azure Notebooks. To upload the modified file, go to the Library, press the "+" (New File) icon, click "From Computer" > "Choose Files", select the file on your hard drive, and click "Upload". A box saying "File Exists, Overwrite?" will appear; confirm to continue.
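For illustration only: if keys.txt stores one service/key pair per line (check the workshop notebooks for the file's actual format and entry names), an R script could read it with something like:

    # Hypothetical keys.txt reader, assuming one "service key" pair per line;
    # the real format and entry names are defined by the workshop notebooks.
    keys <- read.table("keys.txt", stringsAsFactors = FALSE,
                       col.names = c("service", "key"))
    vision_key <- keys$key[keys$service == "vision"]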

The R scripts are provided as Jupyter Notebook files (with the .ipynb extension). You can tackle the files in any order, but we recommend the following sequence:

  1. Explore analyzing images from Wikimedia Commons using the Microsoft Vision API
  2. Create the "Not Hotdog" image recognition application featured in Silicon Valley, using the Custom Vision service (see the sketch after this list)
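The sketch below shows roughly what a "Not Hotdog" prediction call looks like from R, assuming the httr package. The project ID and prediction key are placeholders you'd get from the Custom Vision portal, and the endpoint and API version may differ for your resource.

    # Sketch of a Custom Vision prediction call for "Hot Dog" vs "Not Hot Dog"
    # (placeholder project ID and key; get yours from the Custom Vision portal)
    library(httr)

    project_id     <- "your-project-id"
    prediction_key <- "PASTE-YOUR-PREDICTION-KEY-HERE"
    prediction_url <- paste0(
      "https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/",
      project_id, "/url")

    response <- POST(
      prediction_url,
      add_headers(`Prediction-Key` = prediction_key),
      body = list(Url = "https://example.com/maybe-a-hotdog.jpg"),
      encode = "json"
    )

    preds <- content(response)$predictions
    # Report the tag with the highest probability
    top <- preds[[which.max(sapply(preds, `[[`, "probability"))]]
    top$tagName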

For more examples of using the Cognitive Services APIs from R, take a look at the following blog posts. R code is included in the posts or in linked GitHub repositories.

If you get stuck or just have other questions, you can contact me here:

David Smith

Modified on: Sep 6, 2018
This introduction to R provides a first look at a number of R capabilities, including reading data, processing data, and modeling. It includes dives into extremely useful packages such as dplyr and caret.
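For a quick taste of that workflow, here is a minimal R snippet (not from the notebook itself) that summarizes data with dplyr and fits a simple model, using the built-in mtcars data set:

    # Summarize data with dplyr, then fit a simple linear model
    # (uses the built-in mtcars data set)
    library(dplyr)

    mtcars %>%
      group_by(cyl) %>%
      summarize(mean_mpg = mean(mpg))

    # A basic linear model relating fuel economy to weight
    fit <- lm(mpg ~ wt, data = mtcars)
    summary(fit)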
Modified on: Apr 9, 2018

This repository contains example notebooks demonstrating the Azure Machine Learning Python SDK, which allows you to build, train, deploy, and manage machine learning solutions on Azure. The AML SDK lets you choose between local and cloud compute resources while managing and maintaining the complete data science workflow from the cloud.

Read more detailed instructions on how to set up your environment.

You should always run the Configuration notebook first when setting up a notebook library on a new machine or in a new environment. It configures your notebook library to connect to an Azure Machine Learning workspace, and sets up your workspace and compute to be used by many of the other examples.


The Tutorials folder contains notebooks for the tutorials described in the Azure Machine Learning documentation

The How to use Azure ML folder contains specific examples demonstrating the features of the Azure Machine Learning SDK

  • Training - Examples of how to build models using Azure ML's logging and execution capabilities on local and remote compute targets.
  • Training with Deep Learning - Examples demonstrating how to build deep learning models using estimators and parameter sweeps.
  • Automated Machine Learning - Examples using Automated Machine Learning to automatically generate optimal machine learning pipelines and models.
  • Machine Learning Pipelines - Examples showing how to create and use reusable pipelines for training and batch scoring.
  • Deployment - Examples showing how to deploy and manage machine learning models and solutions.
  • Azure Databricks - Examples showing how to use Azure ML with Azure Databricks.

Visit the following repos to see projects contributed by Azure ML users:

Modified on: Aug 19, 2019

Simple MNIST Sample

Modified on: Feb 20, 2019

Minimal implementation of YOLOv3 in PyTorch.

Joseph Redmon, Ali Farhadi

Abstract: We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that’s pretty swell. It’s a little bigger than last time but more accurate. It’s still fast though, don’t worry. At 320 × 320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 AP50 in 51 ms on a Titan X, compared to 57.5 AP50 in 198 ms by RetinaNet, similar performance but 3.8× faster. As always, all the code is online at https://pjreddie.com/yolo/.

[Paper] [Original Implementation]

Uses pretrained weights to make predictions on images. The table below displays inference times when using images scaled to 256x256 as inputs. The ResNet backbone measurements are taken from the YOLOv3 paper. The marked Darknet-53 measurement shows the inference time of this implementation on my 1080 Ti card.

Evaluates the model on COCO test.

Data augmentation, as well as additional training tricks, remains to be implemented. PRs are welcome!

Modified on: Feb 15, 2019