Cognitive service terms

Personalizer
Cognitive service for a decision support solution.
Analyzes a user's real-time behavior, for example online shopping patterns, and then helps your app choose the best content items to show.

Spatial Analysis
Cognitive service for a computer vision solution.
Ingests a live video stream, detects and tracks people, monitors specific regions of interest in the video, and generates an event when a specific trigger occurs.
Can monitor an area in front of the checkout counter and trigger an event when the count of people exceeds a defined number.

QnA Maker
Cognitive service for a Natural Language Processing (NLP) solution.
Helps to build a custom knowledge base to provide a natural conversation layer over your common questions and answers.

Anomaly Detector
Finds data that is an outlier or out of trend in time-series data. It does not analyze content supplied by users for offensive material such as adult, racy, or gory images.

Content Moderator
Cognitive service that identifies content that is potentially offensive, including images that may contain adult, racy, or gory content. It flags such content automatically for a human to review in a portal.

Smart Labeler
Used when training models (object identification).
Tags uploaded images automatically, reducing the manual effort needed to improve the model. You must check and adjust the tags.

NormalizePunctuation
Language Understanding - removes punctuation such as periods, commas, and brackets from your utterances before your model is trained or your app's endpoint queries are predicted. However, it does not eliminate the effect of accented characters.

NormalizeDiacritics
Language Understanding - replace accented characters, also known as diacritics, with regular characters.
Allows you to ignore the effect of diacritics during your app's training and prediction.
Only available for supported languages such as Spanish, Portuguese, Dutch, French, German, and Italian.
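As a hedged sketch, both settings appear in the settings array of an exported LUIS app definition; the exact shape depends on your LUIS JSON schema version, and the fragment is shown here as a Python dict:

  # Fragment of an exported LUIS app definition (abbreviated, hypothetical app).
  luis_app_fragment = {
      "settings": [
          {"name": "NormalizePunctuation", "value": True},
          {"name": "NormalizeDiacritics", "value": True},
      ]
  }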

Phrase list
Language Understanding - list of similar words or phrases that can be used by your model as a domain-specific vocabulary.
Example in the travel industry: Single, Double, Queen, King, and Twin as a phrase list feature, so the app can recognize hotel room type preferences from utterances.
The phrase list feature can improve the quality of your NLU app's understanding of intents and entities.

filterable
Azure Cognitive Search - property that specifies whether the field in the index can be used as a filter to restrict the list of documents returned by a search.
The filterable property is a Boolean value defined on a field in the index.

sortable
Azure Cognitive Search - allows the field to be used for sorting results.
By default, search results are ordered by score.

facetable
Azure Cognitive Search - returns a hit count by category, for example the number of results by test type.

retrievable
Azure Cognitive Search - Boolean; if set to true, the field is included in search results; if set to false, it is not.
It does not control whether the user can search on the field or restrict results by its value.
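For illustration, a minimal sketch of setting these attributes on index fields with the azure-search-documents Python SDK (index and field names are hypothetical; note the Python SDK exposes retrievable=false as hidden=True):

  from azure.search.documents.indexes.models import (
      SearchFieldDataType, SearchIndex, SearchableField, SimpleField,
  )

  index = SearchIndex(
      name="demo-index",
      fields=[
          SimpleField(name="id", type=SearchFieldDataType.String, key=True),
          # Filterable/facetable/sortable: restrict, drill down, and order results.
          SimpleField(name="category", type=SearchFieldDataType.String,
                      filterable=True, facetable=True, sortable=True),
          # Searchable but excluded from results (retrievable = false).
          SearchableField(name="internalNotes", hidden=True),
      ],
  )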

Knowledge store
Azure Cognitive Search - location in Azure Storage where data created by a Cognitive Search enrichment pipeline is stored.
It is used for independent analysis or downstream processing in non-search scenarios like knowledge mining.
It stores content enriched (created/generated) by the skillset:
  • tables (for example, key phrase extraction output)
    table projections require the data to be mapped to the knowledge store using outputFieldMappings with:
    • sourceFieldName
    • targetFieldName
  • blob containers in storageContainer.
When you save enrichments as a table projection, you need to specify:
  • source - path to the projection
  • tableName - name of the table in Azure Table storage
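A minimal sketch of the knowledgeStore section of a skillset definition, shown as a Python dict (the table name, source path, and connection string are hypothetical; the shape follows the Cognitive Search REST API):

  knowledge_store_fragment = {
      "knowledgeStore": {
          # Required when projecting skillset output into a knowledge store.
          "storageConnectionString": "<your-storage-connection-string>",
          "projections": [
              {
                  # Table projection: enriched key phrases go to Azure Table storage.
                  "tables": [
                      {"tableName": "KeyPhrases", "source": "/document/keyPhrasesTable"}
                  ],
                  "objects": [],
                  "files": [],
              }
          ],
      }
  }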

Projections
Enriched documents from Cognitive Services that are stored in a knowledge store.
They enhance and shape the data.
Types of projections:
  • File (images)
  • Object (JSON)
  • Table (dictionary)

Skillset
The enrichment process of the Cognitive Search enrichment pipeline.
Moves a document through a sequence of enrichments that invoke atomic transformations, such as recognizing entities or translating text.
Output:
  • always a search index.
  • can be projections in a knowledge store.
A search index and a knowledge store are mutually exclusive products of the same pipeline.
They are derived from the same inputs but their content is structured, stored, and used in different applications.

encryptionKey
Azure Cognitive Search, enrichment - optional and used to reference an Azure Key Vault for the skillset, not for the knowledge store.

referenceKeyName
Azure Cognitive Search, enrichment - used to relate data across projections.
If it is not specified, then the system will use generatedKeyName

fieldMappings
Property is used to map key fields.
It is optional. By default, the metadata_storage_path property is used.

storageConnectionString
Required when storing the skillset's output data in a knowledge store (not required for the indexer).

cognitiveServices
Defines the Cognitive Services resource to use to enrich the data
Required when defining the skillset (not required for the indexer).
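A sketch of the skillset fragment that attaches a billable Cognitive Services resource (the key is a placeholder):

  skillset_fragment = {
      "cognitiveServices": {
          "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
          "key": "<your-cognitive-services-key>",
      }
  }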

LUISGen
Command-line tool that can generate C# and TypeScript classes for your LUIS intents and entities.

LUDown
Command-line tool that you use to parse .lu files.
If the file contains intents, entities, and patterns, the LUDown tool can generate a LUIS model in JSON format.
If the file contains question and answer pairs, LUDown can generate a knowledge base in JSON format.

Chatdown
Command-line tool that can be used to generate mock conversations between a user and a bot.
Expects input in the .chat file format and generates conversation transcripts in .transcript format that can be consumed by the Bot Framework Emulator.
-> Works with: Bot Framework Emulator and ngrok

ngrok
1. Tool that allows you to expose a web server running on your local computer to the internet.
It helps you locally debug a bot from any channel.
2. It is integrated with the Bot Framework Emulator and allows the emulator to connect to remote endpoints such as a production web app running in Azure.
Enables the Bot Framework Emulator to bypass the firewall (tunnel) on your computer, connect to the Azure bot service, and intercept the messages to and from the bot.
-> Works with: Bot Framework Emulator and Chatdown

Bot Framework Emulator
Tool to debug your bot.
Bot Framework Emulator is a Windows desktop application that can connect to a bot and inspect the messages sent and received by the bot.
With ngrok, it can connect to remote endpoints such as a production web app running in Azure.
Can view conversation transcripts (.transcript files) and can use these transcript files to test the bot.
Conversations between a user and a bot can be mocked up as text files in markdown in the .chat file format.
-> Works with: ngrok and Chatdown

Active learning
If enabled, QnA Maker will analyze user queries and suggest alternative questions that can improve the quality of your knowledge base.
If you approve those suggestions, they will be added as alternative questions to the knowledge base, so that you can re-train your QnA Maker model to serve your customers better.

Chit-chat
Pre-built data sets.
The chit-chat feature will add a predefined personality to your bot to make it more conversational and engaging.

Precise answering
The precise answering feature uses a deep learning model to identify the intent in the customer question and match it with the best candidate answer passage from the
knowledge base.

Managed keys
Encryption keys managed by Microsoft or the customer used to protect QnA Maker's data at rest.

Regular expression type
Uses a regular expression (regex) to search for a pattern. It is used to match fixed patterns in a string, such as numbers with two decimal places (see the sketch after the recognizer types below).

Default recognizer type
It includes LUIS and QnA Maker recognizers.

Custom recognizer type
Allows you to define your own recognizer in JSON format.
It may be possible to perform regular expressions using a custom recognizer in JSON, but this requires additional effort to define and test the JSON to extract the numbers.

Orchestrator recognizer type
Allows you to link other bots to your bot as skills. It may help to find patterns in text strings but with more effort and resources.
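As a sketch, the regular expression recognizer mentioned above as it might appear in an adaptive dialog definition, shown as a Python dict (the intent name and pattern are hypothetical; the pattern matches numbers with two decimal places):

  regex_recognizer_fragment = {
      "recognizer": {
          "$kind": "Microsoft.RegexRecognizer",
          "intents": [
              # Matches e.g. "19.99" anywhere in the utterance.
              {"intent": "GetPrice", "pattern": r"\d+\.\d{2}"},
          ],
      }
  }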

Computer Vision
Only identifies well-known brands

Object Detection
Can locate and identify logos in images. Custom Vision resource in Azure must exist.

Classification project
Custom Vision project type that trains a classification model to analyze and describe images.

Partitions
Cognitive Search - control the distribution of the index across physical storage.
Partitions split data across different computing resources, which improves the performance of slow and large queries.
For example, with three partitions, you divide your index into three slices.

Replicas
Primarily used for load balancing; they assist with the response for all queries under load from multiple users.
Adding a replica will not make an individual query perform faster.
Cognitive Search - Microsoft guarantees 99.9% availability of read-write workloads (queries and indexing) if your Azure Cognitive Search resource has three or more replicas.

Sample labeling tool
Tool for training custom Form Recognizer.

Azure Files
Are fully managed file shares that you can mount in Windows, Linux, or macOS machines.

Custom Vision service
Allows you to train image classification and object detection algorithms that you can use in your image recognition solutions.

Azure Video Analyzer for Media
Accessible at https://www.videoindexer.ai.
With a free trial, you can use up to 600 minutes of free video indexing on the Video Analyzer for Media website, or up to 2,400 minutes when accessing it through the API.
The Content model customization option allows you to manage Person, Brands, and Language models, for example, to add custom faces or exclude certain brands.

endpoint.microsoft.com
Microsoft Intune portal address. Manage your mobile devices, deploy device policies, and monitor them for compliance.

azure-cognitive-services/decision repository
Common storage location for Azure Cognitive Services container images in the Decision domain, for example Anomaly Detector.

azure-cognitive-services/textanalytics repository
Common storage location for Text Analytics container images such as Key Phrase Extraction or Text Language Detection.

food(compact) domain
Allows you to classify photographs of fruit and vegetables.
Compact models are lightweight and can be exported to run on edge devices.

food domain
Will help you classify photographs of fruit and vegetables.
It is not optimized to run on edge devices.
Cannot be exported from the Custom Vision portal for offline use.

retail(compact) domain
Optimized for images that are found in a shopping catalog or shopping website.
Use it for high-precision classification between dresses, pants, shirts, etc.

products on shelves domain
Object detection domain that can detect and classify products on shelves.
It is not optimized to run on edge devices.
It cannot be exported from the Custom Vision portal for offline use.

Adaptive expressions
Used by language generation to evaluate conditions described in language generation templates.

Language generation
Enables your bot to respond with varying text phrases, creating a more natural conversation with the user.

Language understanding
Enables your bot to understand user input naturally and to determine the intent of the user.

Skills
Allow you to call one bot from another and create a seamless bot experience for the user.

Orchestrator
Combines your bot with other bots as skills and determines which bot to use to respond to the user.

Verify API
Allows a person automated entry to the premises when showing their face to the entrance gate camera.
Determines whether the face belongs to that same person.
The Face Verify API compares the person's image against the enrolled database of persons' images, creating a one-to-one mapping between the two images to verify that both belong to the same person.

Detect API
Generates engagement reports using student emotions and head poses over a period of time.
Face detection captures a number of face-related attributes like head pose, gender, age, facial hair, etc.
Admins can use this JSON output from the images to study people's engagement.

Identify API
Identifies persons who are attending a specific event.
Face identification allows admins to compare given event photos against all previous photographs of the same subject, performing a one-to-many mapping.

Group API
Face grouping divides a set of unknown faces into smaller sets of groups based on similarity.
Additionally, it also returns a set of face IDs having no similarities.
A single person can have multiple groups, although each returned group is likely to belong to the same person.
These different groups of the same person are differentiated due to additional factors like expression.
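A minimal sketch of the Detect and Verify operations above, using the azure-cognitiveservices-vision-face Python SDK (endpoint, key, image URL, and the second face ID are placeholders):

  from azure.cognitiveservices.vision.face import FaceClient
  from msrest.authentication import CognitiveServicesCredentials

  face_client = FaceClient("<endpoint>", CognitiveServicesCredentials("<key>"))

  # Detect: returns face IDs plus requested attributes (head pose, emotion, ...).
  faces = face_client.face.detect_with_url(
      "<image-url>", return_face_attributes=["headPose", "emotion"]
  )

  # Verify: one-to-one check that two detected faces belong to the same person.
  result = face_client.face.verify_face_to_face(faces[0].face_id, "<other-face-id>")
  print(result.is_identical, result.confidence)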

Precision
It is about predictions: indicates the proportion of true positives (TP) over the sum of all true positives and false positives (FP).
It uses the formula: TP / (TP + FP)

Recall
Indicates the fraction of actual classifications that were correctly identified.
It uses the formula: TP / (TP + FN)
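The two formulas as a small sketch:

  def precision(tp: int, fp: int) -> float:
      # Of everything predicted positive, what fraction was correct?
      return tp / (tp + fp)

  def recall(tp: int, fn: int) -> float:
      # Of everything actually positive, what fraction was found?
      return tp / (tp + fn)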

Roles
Are added to entities to distinguish between different contexts.
Example: flight origin and flight destination - you can add roles to the prebuilt geographyV2 entity.

Features
Provide hints for interchangeable words. Features act as synonyms for words when training a LUIS app.

Pattern
Patterns are used with entities and roles to extract information from an utterance.

synonym map
Resource that supplements the content of a search index with equivalent terms.
It cannot be used to enable search-as-you-type functionality.

Knowledge store - object or file projections
With object and file projections, you can write the content from source documents, skills output, and enrichments to blob storage.

Bot Framework Composer
Can be published as a web app to Azure App Service.
Can run as a serverless application in Azure Functions.

Skill bot
Bot that can perform tasks for another bot.
A skill manifest is required for bots that are skills.

Skill consumer bot
Bot that can call other bots as skills

skill manifest
JSON file that describes the actions the skill exposes to the skill consumer and the parameters those actions require.

Billable Cognitive Services resource (Cognitive Search)
Must be used in Azure scenarios where you expect high or frequent load.
As an all-in-one resource, Azure Cognitive Services provides access to Computer Vision, Text Analytics, and Text Translation services through the relevant endpoints.
The Computer Vision service in particular provides the OCR capability to identify and extract text from given images.

Image Analysis
Can analyze the image content and generate captions and tags or identify celebrities and landmarks.
For low-volume text extraction, you can use the built-in OCR skill, as it allows the processing of a limited number of documents for free.

built-in Key Phrase Extraction
Can evaluate given input text and return a list of detected key phrases.

Faceted navigation in Azure Cognitive Search
Filtering feature that enables drill-down navigation in your search-enabled application.
Rather than typing your search expression, you can use faceted navigation to filter your search results through the exposed search criteria such as range or counts within your Azure Cognitive Search index.

Brands model
Identify mentions of products, services, and companies.
By using content model customization, you can configure the Brands model to identify Bing suggested brands or any other brands that you add manually.
For example, if Microsoft is mentioned in video or audio content, or if it shows up in visual text in a video, Video Analyzer for Media detects it as a brand in the content.

Person model
Recognize celebrities from video content.
By using content model customization, you can configure the Person model to detect celebrity faces from a database of over one million faces powered by various data sources like the Internet Movie Database (IMDb), Wikipedia, and top LinkedIn influencers.

Language model
Ability to determine industry terms or specific vocabulary.
By using content model customization, you can configure the Language model to add your own vocabulary or industry terms that can be recognized by the Video Analyzer for Media.


cognitiveservices.azure.com domain
Can be used to access the Computer Vision API and generate the required image thumbnails. Examples:
https://MYAPPLOCATION.cognitiveservices.azure.com/vision/v3.1/generateThumbnail
https://MYAPPNAME.api.cognitive.microsoft.com/vision/v3.1/generateThumbnail
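A sketch of calling generateThumbnail with Python requests (resource location, key, and image URL are placeholders; width, height, and smartCropping are documented query parameters):

  import requests

  endpoint = "https://MYAPPLOCATION.cognitiveservices.azure.com"
  response = requests.post(
      f"{endpoint}/vision/v3.1/generateThumbnail",
      params={"width": 100, "height": 100, "smartCropping": True},
      headers={"Ocp-Apim-Subscription-Key": "<key>"},
      json={"url": "<image-url>"},
  )
  with open("thumbnail.jpg", "wb") as f:
      f.write(response.content)  # the response body is the binary thumbnail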

azurewebsites.net
Subdomain reserved for the use of Azure web apps.
This subdomain is assigned to your custom web apps that you deploy in Azure.

azure-api.net
Subdomain reserved for use by Azure API Management instances.
API Management can potentially hide Computer Vision endpoints and act as a frontend interface.

Azure Application Insights
Helps in monitoring your container across multiple parameters like availability, performance, and usage. It is a recommended, but optional, setting when configuring the docker container.
It is not a required setting that must be configured for telemetry support of your container.

Direct Line Speech
Channel that allows users to interact with your bot via voice.
Uses the Cognitive Services speech-to-text service.

Custom commands
Used with voice assistants for more complex task-oriented conversations.
Use speech-to-text to transcribe the user's speech, then take action on the natural language understanding of the text.

Language understanding
Enables your bot to understand user input naturally and to determine the intent of the user.

Language generation
Uses templates to enable your bot to send a variety of simple text messages to users.

Telephony
Channel that allows users to interact with the bot over the phone.
Enables voice over the telephone only, not on other channels.

Indexer
Crawler that extracts searchable text and respective metadata from an external Azure data source.
It populates a search index, mapping between the source data and your Azure Cognitive Search index.
Supported as a data source:
  • Azure Table Storage
  • Azure Data Lake Gen2
  • Azure Cosmos DB

Azure File Storage
Provides file shares in the cloud that are fully managed and accessible through the SMB or NFS protocol.

Azure Data Lake Gen1
Designed for big data analytic workloads and acts as an enterprise-wide hyper-scale repository.

Azure Bastion
Enables secure RDP (Remote Desktop Protocol) or SSH (Secure Shell) connectivity to your VM, without the need for the VM to have a public IP address.

HeroCard
Single large image and one or more buttons with text.
It has an array of card images and an array of card buttons.
The buttons can either return the selected value to the bot or open a URL.

CardCarousel
Collection of cards that allows your user to view horizontally, scroll through, and select from.
The code would need to specify an attachmentLayout of carousel in order to display as a carousel.
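A sketch with the botbuilder Python SDK showing a hero card sent as a carousel (card content is hypothetical):

  from botbuilder.core import CardFactory, MessageFactory
  from botbuilder.schema import CardAction, CardImage, HeroCard

  card = CardFactory.hero_card(HeroCard(
      title="Room types",
      images=[CardImage(url="<image-url>")],
      buttons=[CardAction(type="imBack", title="Book", value="Book a room")],
  ))

  # MessageFactory.carousel sets attachmentLayout to 'carousel' for you.
  reply = MessageFactory.carousel([card, card])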

ReceiptCard
Contains a list of ReceiptItem objects with a button at the end.
You should not use SuggestedActions. SuggestedActions displays a list of suggested actions and uses an array of card actions.

ImBack
Shows the accept button. The imBack activity type sends a message to the bot when the user selects the button, containing the value specified in the parameters.

openUrl
Shows a webpage.
An openUrl activity type opens the URL specified in the value parameter, which in this case is the organization's privacy policy.
You should not use signin. The signin activity type uses OAuth to connect securely with other services.

displayText
Used with the messageBack activity type to display text in the chat. It is not sent to the bot.

Azure Form Recognizer
Service that allows you to analyze and extract text from structured documents like invoices.
Azure Cognitive Search does not provide any built-in skill to apply Form Recognizer's functionality in an AI enrichment pipeline.
For this reason, you need to create a custom skill for it.

Bing Entity Search
Used to describe given geographical locations.
The Bing Entity Search functionality is not available in Azure Cognitive Search as a built-in cognitive skill.
For this reason, you need to build a custom skill and programmatically call the Bing Entity Search API to return descriptions for the given locations.

management.azure.com
Endpoint for the management of Azure services, including the Search service.
It enables the management and creation of keys for the Search service. Here you create the key for your search app - YOUR_COG_SEARCH.

YOUR_COG_SEARCH.search.windows.net
Endpoint for the Search service. The client application will use this to query the indexes.

api.cognitive.microsoft.com
Endpoint is used by the Bing Search services.

Microsoft.Search
Provider for the Search service.
Using this provider, you can regenerate the admin keys or create query keys for the Search service.

Microsoft.CognitiveServices
Provider for generic Cognitive Services.
It can regenerate the primary and secondary keys for the service, but it cannot generate query keys for the Search service.

Microsoft.Authorization
Provider for managing resources in Azure and can be used to define Azure policies and apply locks to resources.


SpeechRecognizer class
Start the speech service.
Can perform speech-to-text processing.

AudioDataStream
Represents an audio data stream when using speech synthesis (text-to-speech).

addPhrase
Adds the phrase ITEM_NAME_THAT_HAS_TO_BE_CONV_TO_TEXT to improve recognition of the product name.

start_continuous_recognition
Starts speech recognition and continues until an end event is raised.
Continuous recognition can easily process audio up to 2 minutes long.

recognize_once / recognize_once_async
These methods only listen to audio for a maximum of 15 seconds.
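A minimal sketch with the Speech SDK for Python showing both recognition modes and addPhrase (key, region, and phrase are placeholders):

  import azure.cognitiveservices.speech as speechsdk

  speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
  recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

  # addPhrase boosts recognition of domain terms such as a product name.
  phrase_list = speechsdk.PhraseListGrammar.from_recognizer(recognizer)
  phrase_list.addPhrase("<product-name>")

  # Single-shot: listens for one utterance, up to ~15 seconds of audio.
  result = recognizer.recognize_once()
  print(result.text)

  # For longer audio, start continuous recognition and stop it after an end event:
  # recognizer.start_continuous_recognition()
  # ...
  # recognizer.stop_continuous_recognition()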


Upload
Method of the IndexDocumentsAction class that you can use to push your data to a search index.
If the document is new, then it is inserted. If the document already exists, then this method updates its values instead.

IndexDocuments
Method of the SearchClient class that you can use to send a batch of upload, merge, or delete actions to your target search index.

Merge
Method of the IndexDocumentsAction class that you can use to update the values of an existing document.
The Merge method will fail if you try to push data into a document that does not yet exist in the search index.

Autocomplete
Method of the SearchClient class that you can use to use the search-as-you-type functionality in your app.
It uses text from your input field to autosuggest query terms from the matching documents in your search index.

GetDocument
Method of the SearchClient class that you can use to retrieve the specific document from your search index. In your case, you need to upload new documents instead.
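A sketch with the azure-search-documents Python SDK, where the upload and merge document actions behave as described above (service, index, key, and documents are placeholders):

  from azure.core.credentials import AzureKeyCredential
  from azure.search.documents import SearchClient

  client = SearchClient("https://<service>.search.windows.net",
                        "<index-name>", AzureKeyCredential("<admin-key>"))

  # Upload: inserts a new document, or updates it if the key already exists.
  client.upload_documents(documents=[{"id": "1", "name": "Widget"}])

  # Merge: updates fields of an existing document; fails if the key is missing.
  client.merge_documents(documents=[{"id": "1", "name": "Widget v2"}])

  # Autocomplete: search-as-you-type (requires a suggester defined on the index).
  suggestions = client.autocomplete(search_text="wid", suggester_name="sg")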

customEvents
Telemetry sent by your bot to Azure Application Insights can be retrieved with a Kusto query only from the customEvents table.
Used to analyze your bot's telemetry.

summarize operator
Produces a table with the data aggregated by specified columns.
This operator can call the count() aggregation function to return a count of the group.

Heartbeat table
Azure Monitor collects heartbeat data from Azure resources like virtual machines (VMs).

StormEvents table
Table in the Azure Data Explorer's sample database that contains information about US storms.

top operator
Returns the first N records sorted by the specified column
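A sketch of running such a query from Python with the azure-monitor-query package (the workspace ID is a placeholder; note that in a workspace-based Application Insights resource the classic customEvents table surfaces as AppEvents, so adjust the table name to your setup):

  from datetime import timedelta
  from azure.identity import DefaultAzureCredential
  from azure.monitor.query import LogsQueryClient

  client = LogsQueryClient(DefaultAzureCredential())

  # summarize aggregates by column; top returns the first N rows by a column.
  query = """
  customEvents
  | summarize count() by name
  | top 10 by count_
  """
  result = client.query_workspace("<workspace-id>", query,
                                  timespan=timedelta(days=7))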

Bot Framework Composer
Windows desktop application that allows you to build bots using a visual designer interface.

Log Analytics workspace
Used to send logs to a repository where they can be enriched with other monitoring logs collected by Azure Monitor to build powerful log queries.
Log Analytics is a flexible tool for log data search and analysis.
In a Log Analytics workspace you can combine your Azure Cognitive Services logs with other logs and metrics data collected by Azure Monitor, use Kusto query language to analyze your log data and also leverage other Azure Monitor capabilities such as alerts and visualizations.

Event Hub
Stream data to external log analytics solutions.
Event Hub can receive, transform and transfer millions of events per second.
With Event Hub, you can stream real-time logs and metrics from your Azure Cognitive Services to external systems such as Security Information and Event Management (SIEM) systems or third-party analytics solutions.

Azure Blob storage
Used to archive logs for audit, static analysis or backup purposes, keeping them in JSON format files.
Blob storage is optimized for storing big volumes of unstructured data.
With Blob storage you can keep your logs in a very granular format by the hour or even minutes, to assist with a potential investigation of specific incidents reported for your Azure Cognitive Services.

PythonSettings.json
A workspace configuration file for Python projects used in the Visual Studio integrated development environment (IDE).

requirements.txt
In Python, you can use requirements.txt to specify the external packages that your Python project requires. You can install them all using the Python pip installer.

Cloning model
Copies the model into a new version.
The cloned version becomes the active version.
Cloning allows you to make changes to the model, such as adding intents, entities, and utterances, and to test them without changing the original version of the model.

Pattern.Any entity
Helps the model understand the importance of the word order.
It is a variable-length placeholder to indicate where the entity starts and ends in a sentence.
It is used when utterances are very similar but refer to different entities and when entities are made up of multiple words.

activity handler
Send a welcome message when a user joins the bot conversation.
An activity handler organizes the conversational logic for a bot. Activity handlers respond to events such as a user joining the bot conversation.

component dialog
Use it to create a reusable set of steps for one or more bots.
A component dialog is a set of dialogs that can call the other dialogs in the set.
The component dialog manages the child dialogs in this set.
A component dialog can be reused in the same bot or in several bots.

waterfall dialog
Use it to create a set of sequential steps for a bot.
A waterfall dialog is a set of dialog steps in a sequence to gather specific information from a user.

prompt dialog
Single individual prompt and user response.
Component dialogs and waterfall dialogs will contain one or more prompt dialogs.

Chat files
Are markdown files. They consist of two parts:
  • header - here you define the participants
  • conversation

Person
Object you add to the Face API to store names and images of faces for identification.
A Person can hold up to 248 face images.

PersonDirectory
Structure into which you add the Persons and their facial images.
PersonDirectory can hold up to 75 million Persons and does not need training for new facial images.

DynamicPersonGroup
Subset of identities from a PersonDirectory that you can filter identification operations on.
Using a DynamicPersonGroup, you can increase the accuracy of facial identification by only verifying faces against the smaller list of people, instead of the entire set of faces in the PersonDirectory.

FaceList
Used for the Find Similar operation, not for identification.
Find Similar is used to find people who look similar to a facial image; it cannot be used to verify that a face belongs to a person.

PersonGroup
Can only hold 1,000 Persons on the free tier and 10,000 Persons on the standard tier.

LargePersonGroup
Can hold up to one million identities.
However, a LargePersonGroup requires training when a new facial image is added.
This training can take 30 minutes, and the face cannot be recognized until training is complete.
This solution is unable to identify new faces immediately.

Direct Line channel
Does not enable voice capability in your bot.
Enables communication between a client app and your bot over the HTTPS protocol.
Does not support network isolation.

Direct Line Speech
Enables voice-in or voice-out capabilities in your bot.
Does not support network isolation.

Direct Line App Service Extension
Ensures that traffic between your bot and client applications never leaves the boundaries of the Azure VNet.

You should use pattern matching or LUIS. The Speech software development kit (SDK) has two ways to recognize intents, described below.

pattern matching
Can be used to recognize intents for simple instructions or specific phrases that the user is instructed to use.

Language Understanding (LUIS)
Model can be integrated using the Speech SDK to recognize more complex intents with natural language.
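A sketch of the LUIS route with the Speech SDK's IntentRecognizer (key, region, app ID, and intent names are placeholders):

  import azure.cognitiveservices.speech as speechsdk

  speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
  intent_recognizer = speechsdk.intent.IntentRecognizer(speech_config=speech_config)

  # Attach a LUIS app and the intents to listen for.
  model = speechsdk.intent.LanguageUnderstandingModel(app_id="<luis-app-id>")
  intent_recognizer.add_intents([(model, "BookFlight"), (model, "CancelFlight")])

  result = intent_recognizer.recognize_once()
  print(result.intent_id, result.text)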

Speech Synthesis Markup Language (SSML)
Adjusts the pitch, pronunciation, speaking rate, and volume of the synthesized speech output from the speech service.
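A sketch of adjusting prosody with SSML and speaking it through the Speech SDK (key, region, and the voice choice are placeholders/assumptions):

  import azure.cognitiveservices.speech as speechsdk

  ssml = """
  <speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
    <voice name='en-US-JennyNeural'>
      <prosody pitch='+10%' rate='0.9' volume='80'>Your order has shipped.</prosody>
    </voice>
  </speak>
  """
  speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
  synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
  synthesizer.speak_ssml_async(ssml).get()  # speaks with the adjusted prosody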

Key phrase extraction
Is part of the Text Analytics API and extracts the main talking points from text.
Key phrase extraction is not integrated with the Speech SDK.

Visemes
Used to lip-sync animations of faces to speech.
Visemes use the position of the lips, jaw, and tongue when making particular sounds.

Scoring profile
Are part of the index definition and boost the relevance score based on the fields that you specify.
To favor new entries, you can use the date the product was added to boost its relevance score in search.
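A sketch of a freshness-based scoring profile in an index definition, shown as a Python dict (profile name, field name, and boost values are hypothetical):

  scoring_profile_fragment = {
      "scoringProfiles": [
          {
              "name": "boost-new-products",
              "functions": [
                  {
                      # Boost documents whose dateAdded is within the last 30 days.
                      "type": "freshness",
                      "fieldName": "dateAdded",
                      "boost": 5,
                      "interpolation": "linear",
                      "freshness": {"boostingDuration": "P30D"},
                  }
              ],
          }
      ]
  }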

Computer Vision
Analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
The endpoint used in curl:
<zone where the Computer Vision resource was created>.api.cognitive.microsoft.com/...
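A sketch of the describe call with Python requests (zone, key, and image URL are placeholders):

  import requests

  endpoint = "https://<zone>.api.cognitive.microsoft.com"
  response = requests.post(
      f"{endpoint}/vision/v3.1/describe",
      params={"maxCandidates": 3},
      headers={"Ocp-Apim-Subscription-Key": "<key>"},
      json={"url": "<image-url>"},
  )
  # Captions come back ordered by confidence, highest first.
  for caption in response.json()["description"]["captions"]:
      print(caption["text"], caption["confidence"])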


