Language Understanding solution with Cognitive Services

Study notes

A Language Understanding solution uses Natural Language Processing (NLP) to create models able to interpret the semantic meaning of written or spoken language.
Language Service provides various features for understanding human language.

Language service features:
  • Language detection - pre-configured
  • Sentiment analysis and opinion mining - pre-configured
  • Named Entity Recognition (NER) - pre-configured
  • Personally identifiable information (PII) and health information (PHI) detection - pre-configured
  • Summarization - pre-configured
  • Key phrase extraction - pre-configured
  • Entity linking - pre-configured
  • Text analytics for health - pre-configured
  • Custom text classification - learned
  • Orchestration workflow - learned
  • Question answering - learned
  • Conversational language understanding - requires a model to be built for prediction - learned
    Core custom feature.
    Helps users build custom natural language understanding models to predict the overall intent (intent recognition) and extract important information (entity recognition) from incoming utterances.
  • Custom named entity recognition (custom NER) - requires a model to be built for prediction - learned
    Core custom feature.

Language service features fall into two categories:
  • Pre-configured features
  • Learned features. Learned features require building and training a model to correctly predict appropriate labels.
A client application can use each feature to better communicate with users, or use them together to provide more insight into what the user is saying, intending, and asking about.

Building a model:

Everything starts with creating a Language resource (a Cognitive Services multi-service resource works as well):
  • Azure portal
  • New resource -> Language resource
  • Create (you will get keys and an endpoint, as usual)
By default, Azure Cognitive Service for Language comes with several pre-built capabilities such as sentiment analysis, key phrase extraction and pre-built question answering. Some customizable features require additional services, such as Azure Cognitive Search or Blob storage, to be provisioned as well. Select the custom features you want to enable as part of your Language service.

Select the resource, give it a name and select the FREE pricing tier (5,000 transactions per 30 days)

Once done, go to Keys and Endpoints

1. Create model via REST API

  1. create your project
  2. import data
  3. train model
  4. deploy model
  5. query model
All steps are performed asynchronously; for each:
  • Submit a request to the appropriate URI
  • Send a request to get the status of that job. The status is returned in JSON format.
For each call you must authenticate the request by providing a specific header:
Ocp-Apim-Subscription-Key -> your Language resource key (see above)
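The submit-then-poll pattern above can be sketched in a few lines of Python. This is a minimal sketch, not the exact authoring API: the header name is the real one, but the polling helper and status values ("running", "succeeded", "failed") are simplified assumptions about the job JSON.

```python
# Sketch of the asynchronous pattern: authenticate every call with the
# resource key header, then poll the job status until it is terminal.

def auth_headers(key: str) -> dict:
    """Every REST call must carry the Language resource key in this header."""
    return {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }

def poll_until_done(get_status, max_polls: int = 10) -> dict:
    """Call get_status() (a function returning the job JSON) until the
    job reports a terminal state. Simplified status values assumed."""
    for _ in range(max_polls):
        status = get_status()
        if status.get("status") in ("succeeded", "failed"):
            return status
    raise TimeoutError("job did not finish in time")
```

In practice `get_status` would be an HTTP GET (with `auth_headers`) against the operation-location URL returned when the job was submitted, with a short sleep between polls.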

2. Create model using Language Studio
  1. Open Language Studio
    Language Studio - Microsoft Azure
  2. Select the Directory, Subscription, Resource type (Language) and Resource (the one you just created above)
  3. From Create New, select the project type you wish to create and go through the process.

No matter how you create and train the model, the final step is to query it.
To query your model for a prediction, send a POST request to the appropriate URL with the appropriate body.
Every request must be authenticated (via the header above).
Both the body and the result are in JSON format.
Example: detect language.
Request body:
  {
      "kind": "LanguageDetection",
      "parameters": {
          "modelVersion": "latest"
      },
      "analysisInput": {
          "documents": [{
              "id": "1",
              "text": "This is a document written in English."
          }]
      }
  }
Response:
  {
      "kind": "LanguageDetectionResults",
      "results": {
          "documents": [{
              "id": "1",
              "detectedLanguage": {
                  "name": "English",
                  "iso6391Name": "en",
                  "confidenceScore": 1.0
              },
              "warnings": []
          }],
          "errors": [],
          "modelVersion": "String"
      }
  }
Terms in NLP

  • Utterance
    The user input: the phrase a user enters when interacting with the model (via an app that uses your Language Understanding model).
  • Intent
    The task or action the user wants to perform (the meaning of an utterance).
    It is what you want the model to do; the model has to understand it and 'execute' it.
    There is a "None" intent - the default, used when no action is defined for what the user asks.
  • Entity
    Adds specific context to an intent.

  • Define the intents the model must support.
  • Define all possible utterances for every single intent (everything a user may input to request an intent)
    • Capture multiple different examples, or alternative ways of saying the same thing
    • Vary the length of the utterances from short, to medium, to long
    • Vary the location of the noun or subject of the utterance. Place it at the beginning, the end, or somewhere in between
    • Use correct grammar and incorrect grammar in different utterances to offer good training data examples
    • The precision, consistency and completeness of your labeled data are key factors in determining model performance.
      • Precisely: Always label each entity with its right type. Only include what you want extracted; avoid unnecessary data in your labels.
      • Consistently: The same entity should have the same label across all the utterances.
      • Completely: Label all the instances of the entity in all your utterances.
  • Add specific context to utterances using entities.
    Example: Utterance: "What time is it in London?" / Intent: GetTime / Entities: Location (London)
    • Learned entities
      Flexible, use them in most cases. You define a learned component with a suitable name, and then associate words or phrases with it in training utterances.
      When you train your model, it learns to match the appropriate elements in the utterances with the entity.
    • List entities
      Useful when you need an entity with a specific set of possible values - for example, days of the week.
      Ex: DayOfWeek entity that includes the values "Sunday", "Monday", etc
    • Prebuilt entities
      Useful for common types such as numbers, datetimes, and names.
      Ex: when prebuilt components are added, you automatically detect values such as "6" or organizations such as "Microsoft".
      Lets the Language service automatically detect the specified type of entity, without having to train your model with examples of that entity.
      You can have up to five prebuilt components per entity.
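The intents, utterances and entity labels described above come together in the project's training data. The sketch below illustrates how they relate; the field names are illustrative, not the exact CLU import schema (the real schema is documented for the authoring API).

```python
# Illustrative sketch (not the exact import schema): one labeled utterance
# for the GetTime intent, with the entity "London" marked by character span.
project = {
    "intents": ["GetTime", "None"],
    "entities": ["Location"],
    "utterances": [
        {
            "text": "What time is it in London?",
            "intent": "GetTime",
            # label = entity category + character offset/length of "London"
            "entities": [{"category": "Location", "offset": 19, "length": 6}],
        },
    ],
}
```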
Create, train and deploy a model:
  1. Train it to learn intents and entities from sample utterances.
  2. Test it interactively or using a testing dataset with known labels
  3. Deploy a trained model to a public endpoint (users can use it)
  4. Review predictions and iterate on utterances to train your model
The NLP model predicts the user's intent from the user input (natural language).
To use the model in an application you must publish a Language Understanding app for the created model.
Every app needs an endpoint; the endpoint used to query a specific feature varies. A request specifies:
    The endpoint for authenticating your API request (your resource endpoint)
    The version number of the service you want to call. For example, v3.0
    The feature you're submitting the query to. For example, keyPhrases for key phrase detection
Pre-configured features
The Language service provides certain features without any model labeling or training.
Once you create your resource, you can send your data and use the returned results within your app. See the Language service features list at the top of this page.

Learned features
Require you to label data, train, and deploy your model to make it available to use in your application. See the Language service features list at the top of this page.

Consume a model via an App (Process predictions)
  • Use REST APIs
  • Use a programming language-specific SDK.
Parameters to be sent:
  • kind
    Indicates the language feature you're requesting.
    Ex: Conversation for conversational language understanding, EntityRecognition to detect entities.
  • parameters
    Indicates the values for various input parameters.
    Ex: projectName and deploymentName are required for conversational language understanding.
  • analysisInput
    Specifies the input documents or text strings to be analyzed by the Language service.
The result consists of a hierarchy of information that your application must parse. JSON format.
The prediction results include the query utterance, the top (most likely) intent, other potential intents with their respective confidence score, and the entities that were detected.
Each entity includes a category and subcategory (when applicable) in addition to its confidence score (for example, "Edinburgh", which was detected as a location with confidence of 1.0).
The results may also include any other intents that were identified as a possible match, and details about the location of each entity in the utterance string.
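Putting the three parameters together, here is a sketch of a conversational-language-understanding query body and of parsing its prediction. The project name ("FlightBooking") and deployment name ("production") are placeholders, and the sample response is a simplified dict mirroring the result shape described above, not captured service output.

```python
# Build a CLU analyze request body (kind / parameters / analysisInput) and
# extract the top intent and entities from a prediction-shaped response.

def build_clu_request(utterance: str) -> dict:
    return {
        "kind": "Conversation",
        "parameters": {
            "projectName": "FlightBooking",   # placeholder project name
            "deploymentName": "production",   # placeholder deployment name
        },
        "analysisInput": {
            "conversationItem": {"id": "1", "participantId": "user",
                                 "text": utterance},
        },
    }

def top_intent_and_entities(response: dict):
    """Return (topIntent, [(entity category, entity text), ...])."""
    prediction = response["result"]["prediction"]
    entities = [(e["category"], e["text"]) for e in prediction["entities"]]
    return prediction["topIntent"], entities

# Simplified response mirroring the structure described in the text
sample = {
    "result": {
        "prediction": {
            "topIntent": "GetTime",
            "intents": [{"category": "GetTime", "confidenceScore": 0.97}],
            "entities": [{"category": "Location", "text": "Edinburgh",
                          "confidenceScore": 1.0}],
        }
    }
}
```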

The Language Understanding service can be deployed as a container, running on a local Docker host, in an Azure Container Instance (ACI), or in an Azure Kubernetes Service (AKS) cluster.
  1. The container image for the specific Cognitive Services API you want to use is downloaded and deployed to a container host (local Docker server, ACI or AKS).
    Ex. get the image (the path below is the language-detection container; use the image for the feature you need):
    docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/language:latest
  2. Client applications submit data to the endpoint provided by the containerized service, and retrieve results just as they would from a Cognitive Services cloud resource in Azure.
    Ex. run the container (it must be linked to your Azure resource for billing; fill in your own endpoint and key):
    docker run --rm -it -p 5000:5000 \
    --memory 4g \
    --cpus 1 \
    mcr.microsoft.com/azure-cognitive-services/textanalytics/language:latest \
    Eula=accept \
    Billing=<your-resource-endpoint> \
    ApiKey=<your-language-resource-key>

    Ex. query the service: send the same REST requests to http://localhost:5000 that you would send to the cloud endpoint.
  3. Periodically, usage metrics for the containerized service are sent to a Cognitive Services resource in Azure in order to calculate billing for the service.

Hands-On: Build a Language Understanding model

Hands-On: Create a Language Understanding app


Create a Language Understanding solution with Azure Cognitive Services - Training | Microsoft Learn
Quickstart - create a conversations language understanding project - Azure Cognitive Services | Microsoft Learn
Language Studio - Microsoft Azure
Supported prebuilt entity components - Azure Cognitive Services | Microsoft Learn
What is Azure Cognitive Service for Language - Azure Cognitive Services | Microsoft Learn
How to tag utterances in Conversational Language Understanding - Azure Cognitive Services | Microsoft Learn
Custom text classification evaluation metrics - Azure Cognitive Services | Microsoft Learn
Use Azure Cognitive Services Containers on-premises - Azure Cognitive Services | Microsoft Learn
Conversational Language Understanding evaluation metrics - Azure Cognitive Services | Microsoft Learn
Conversational Language Understanding - Azure Cognitive Services | Microsoft Learn
Azure SDK for Python (May 2022) | Azure SDKs
Azure Conversational Language Understanding client library for Python | Microsoft Learn
Developer resources - Language Understanding - Azure | Microsoft Learn