Deep learning models demand significant compute and time to train, and retraining such models from scratch to incorporate newly received data is frequently impractical. Consider the task of recognising an object category without ever having seen an image of that kind of object. If you have read a detailed description of a cat, you may be able to identify one in an image the first time you see it. Computer vision systems now apply this idea through a technique known as "zero-shot learning". Zero-shot learning allows a model to perform a task without receiving any training examples, by inferring what might be in an image using auxiliary information such as text descriptions.
What is Zero-Shot Learning?
Zero-shot learning (ZSL) is a machine learning technique that enables a model to categorise items from previously unseen classes without any explicit training on those classes. This technique is useful for autonomous systems that must be able to recognise and classify novel things on their own.
In ZSL, a model is pre-trained on one set of classes (the seen classes) and then asked to generalise to a different set of classes (the unseen classes) with no additional training. The purpose of ZSL is to apply the model's existing knowledge to the task of classifying these previously unseen classes. ZSL is a branch of transfer learning that entails adapting a model to a new task or class set.
Transfer learning is most commonly used to fine-tune a pre-trained model on a new problem that shares the same feature and label spaces; this is referred to as "homogeneous transfer learning." Zero-Shot Learning, by contrast, is classified as "heterogeneous transfer learning," because the feature and label spaces differ.
Utility of Zero-shot Learning in Vision AI
Zero-Shot Learning is helpful because it can overcome the obstacles and expense of data labelling, a time-consuming and frequently costly procedure. Annotations that require specialised experts in particular disciplines are especially difficult to collect; medical diagnostic data, for example, demands the competence of skilled medical practitioners.
Furthermore, there may not be enough training data for a class (for example, when attempting to diagnose rare defects in items), or the data may be skewed. Both circumstances make it difficult for a model to represent real-world scenarios effectively. Unsupervised learning approaches may also be insufficient for certain tasks, such as distinguishing sub-categories of the same object (for example, cat breeds).
ZSL can help to reduce these issues by allowing a model to conduct classification on novel classes (unseen classes) utilising knowledge gained during training.
Image classification, object detection, object tracking, semantic segmentation, style transfer, and natural language processing are all applications of ZSL. It is especially beneficial when labelled data for novel classes is rare or expensive to gather.
How does Zero-Shot Learning Work?
Data in Zero-Shot Learning is classified into three types:
1. Seen Classes: The classes that were used to train the model.
2. Unseen Classes: The classes that the model must classify without any prior training; no data from these classes was used during training.
3. Auxiliary Information: Descriptions, semantic information, or word embeddings for the unseen classes. Because no labelled instances of the unseen classes are available, this information is essential for solving the Zero-Shot Learning problem.
Zero-Shot Learning involves two stages: training and inference. During training, the model learns from a labelled set of data samples; during inference, it combines this knowledge with the auxiliary information to categorise a new set of classes.
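To make the inference step concrete, here is a minimal sketch of nearest-prototype zero-shot classification in Python (NumPy). The class names, prototype vectors, and image embedding are hypothetical placeholders; in a real system the image embedding would come from a pre-trained visual encoder and the prototypes from auxiliary information such as word embeddings or text descriptions.

```python
import numpy as np

# Hypothetical semantic prototypes (e.g. word embeddings) for each class name.
# "zebra" is an unseen class: no training images, only this description vector.
class_prototypes = {
    "cat":   np.array([0.9, 0.1, 0.3]),
    "dog":   np.array([0.8, 0.2, 0.4]),
    "zebra": np.array([0.1, 0.9, 0.7]),
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def zero_shot_classify(image_embedding, prototypes):
    # Assign the class whose semantic prototype is most similar to the
    # image embedding produced by a pre-trained visual encoder.
    scores = {name: cosine(image_embedding, p) for name, p in prototypes.items()}
    return max(scores, key=scores.get)

# A hypothetical image embedding already mapped into the semantic space.
x = np.array([0.15, 0.85, 0.60])
print(zero_shot_classify(x, class_prototypes))  # -> "zebra"
```

The model never saw a zebra image; it matches the embedding to the unseen class purely through the auxiliary prototype.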
Humans are capable of "Zero-Shot Learning" because we already possess a language knowledge base: we can draw parallels between new, unseen classes and visual concepts we have seen before. Like humans, Zero-Shot Learning depends on this prior knowledge.
In computer vision, this "knowledge" consists of a labelled training set for the seen classes together with auxiliary information about the unseen classes. The two are related in a semantic space, a high-dimensional vector space in which knowledge from the seen classes can be transferred to the unseen classes.
Zero-Shot Learning: AI Models Training Methods
The two most common families of approaches to zero-shot recognition are:
- Classifier-based methods
- Instance-based methods
Classifier-based Methods
Classifier-based approaches typically train a multi-class zero-shot classifier in a one-versus-rest fashion, with a distinct binary classifier for each class. These methods fall into three types according to how the classifiers for the unseen classes are constructed:
Correspondence Methods
These methods build a classifier for the unseen classes by learning a correspondence, in the semantic space, between the binary one-versus-rest classifier for each class and its matching class prototype.
In the semantic space, each class has a single prototype, which can be regarded as the "representation" of that class. In the feature space, each class has a binary one-versus-rest classifier, which can likewise be regarded as a "representation" of that class. Correspondence approaches seek a function that maps between these two kinds of "representations".
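As an illustration of the correspondence idea, the sketch below fits a linear map from class prototypes to one-versus-rest classifier weights on the seen classes, then predicts a classifier for an unseen class from its prototype alone. The dimensions, random data, and closed-form ridge regression are assumptions made for the example, not a specific published method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 seen classes with semantic prototypes (dim 4) and
# the weight vectors of their trained one-versus-rest classifiers (dim 6).
prototypes_seen = rng.normal(size=(5, 4))
classifier_weights_seen = rng.normal(size=(5, 6))

# Correspondence step: fit a linear map M (4 x 6) from prototype space to
# classifier-weight space using ridge regression in closed form.
lam = 0.1
P, W = prototypes_seen, classifier_weights_seen
M = np.linalg.solve(P.T @ P + lam * np.eye(4), P.T @ W)

# For an unseen class, its classifier is predicted from its prototype alone.
prototype_unseen = rng.normal(size=4)
w_unseen = prototype_unseen @ M          # synthesised classifier weights

x = rng.normal(size=6)                   # a test feature vector
score = x @ w_unseen                     # one-versus-rest score for the unseen class
```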
Relationship Methods
Relationship methods build a classifier for the unseen classes from the relationships among classes. Binary one-versus-rest classifiers for the seen classes can be trained with the available data in the feature space.
Relationship approaches then construct a classifier for an unseen class from these trained classifiers and the relationships between the seen and unseen classes, which can be computed from the corresponding prototypes. In other words, these approaches combine the class relationships with the learned binary classifiers of the seen classes to categorise the unseen classes.
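A minimal sketch of this idea follows, assuming cosine similarity between prototypes as the measure of class relatedness and linear one-versus-rest classifiers; real methods derive the combination weights in more sophisticated ways, and all data here are hypothetical.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def unseen_class_score(x, seen_classifiers, seen_prototypes, unseen_prototype):
    # Relationship weights: similarity between the unseen prototype and
    # each seen prototype (one simple heuristic among many).
    sims = np.array([cosine(p, unseen_prototype) for p in seen_prototypes])
    # Scores of the trained seen-class one-versus-rest classifiers.
    seen_scores = np.array([w @ x for w in seen_classifiers])
    # Relationship-weighted combination yields the unseen-class score.
    return sims @ seen_scores

rng = np.random.default_rng(0)
seen_classifiers = rng.normal(size=(5, 6))   # hypothetical trained weights
seen_prototypes = rng.normal(size=(5, 4))
unseen_prototype = rng.normal(size=4)
x = rng.normal(size=6)
print(unseen_class_score(x, seen_classifiers, seen_prototypes, unseen_prototype))
```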
Combination Methods
This category of methods builds a classifier for the unseen classes by combining classifiers for the basic elements from which those classes are composed.
The technique assumes there is a list of basic elements that can be used to construct classes, and that each data point in the seen and unseen classes is a combination of these elements. A cat image, for example, may contain elements such as a tail and fur.
Each dimension of the semantic space is associated with one basic element, and each class prototype encodes the corresponding class's combination of these elements: for each dimension, the prototype takes the value 1 or 0, indicating whether the class possesses the relevant element. These techniques are therefore especially well suited to attribute-style semantic spaces.
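The following sketch illustrates the combination idea in the spirit of attribute-based prediction. The attribute list, class signatures, and predicted probabilities are all hypothetical, and attribute predictions are assumed to be independent for simplicity.

```python
import numpy as np

# Hypothetical binary attribute signatures (1 = class has the element).
# Attribute order: has_tail, has_fur, has_stripes, is_four_legged
prototypes = {
    "cat":   np.array([1, 1, 0, 1]),
    "zebra": np.array([1, 1, 1, 1]),   # unseen: described only by attributes
    "snake": np.array([1, 0, 0, 0]),
}

def classify_by_attributes(attribute_probs, prototypes):
    # attribute_probs: per-attribute probabilities predicted for an image
    # by attribute classifiers trained on the seen classes.
    def match(sig):
        # Probability that exactly this attribute signature holds,
        # assuming independent attribute predictions.
        return np.prod(np.where(sig == 1, attribute_probs, 1 - attribute_probs))
    return max(prototypes, key=lambda name: match(prototypes[name]))

probs = np.array([0.9, 0.8, 0.95, 0.85])        # hypothetical attribute outputs
print(classify_by_attributes(probs, prototypes))  # -> "zebra"
```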
Instance-based Methods
Instance-based approaches to zero-shot learning first obtain labelled instances for the unseen classes and then use them to train a classifier. They fall into three types according to the source of these instances:
Projection Methods
These approaches obtain labelled examples of the unseen classes by projecting both the feature-space instances and the semantic-space prototypes into a common space. The starting point is labelled training examples in the feature space for the seen classes, together with prototypes in the semantic space for both the seen and unseen classes.
Both the feature space and the semantic space are real-valued vector spaces in which instances and prototypes are vectors. From this perspective, a prototype can itself be regarded as a labelled instance (one per class). Projection methods map instances from these two spaces onto a common space, where the unseen-class prototypes serve as labelled examples of the unseen classes.
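A minimal sketch of a projection method follows, under the simplifying assumptions that the semantic space itself serves as the common space and that the projection is learned by ordinary least squares; the features, prototypes, and dimensions are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical seen-class training data: features X (n x d), paired with
# the semantic prototype (dim k) of each instance's class as the target.
n, d, k = 200, 10, 5
X = rng.normal(size=(n, d))
S_targets = rng.normal(size=(n, k))

# Learn a linear projection V (d x k) into the semantic space.
V, *_ = np.linalg.lstsq(X, S_targets, rcond=None)

# Unseen-class prototypes act as labelled instances in the common space:
# classify a test feature by nearest prototype after projection.
unseen_prototypes = rng.normal(size=(3, k))      # one per unseen class
x_test = rng.normal(size=d)
z = x_test @ V
pred = int(np.argmin(np.linalg.norm(unseen_prototypes - z, axis=1)))
print(pred)                                      # index of the predicted unseen class
```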
Instance-borrowing Methods
Instance-borrowing approaches borrow labelled samples for an unseen class from the training instances of related seen classes, relying on class similarity. If we wish to train a classifier for the "tiger" class but have no labelled examples of tigers, we may use labelled examples of the "cat" and "lion" classes as positive instances when training the "tiger" classifier. This is comparable to how people recognise and learn about objects by relating them to similar ones: we may never have seen certain classes, yet we can recognise them from our understanding of related classes.
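A small sketch of the borrowing step, with hypothetical per-class feature arrays and similarity scores; the borrowed positives would then be fed to an ordinary binary classifier for the unseen class.

```python
import numpy as np

def borrow_positives(similarity_to_target, instances_by_class, k=2):
    # Pick the k seen classes most similar to the target unseen class and
    # treat their labelled instances as positives for its classifier.
    donors = sorted(similarity_to_target, key=similarity_to_target.get,
                    reverse=True)[:k]
    return np.vstack([instances_by_class[c] for c in donors])

rng = np.random.default_rng(0)
# Hypothetical labelled feature vectors for three seen classes.
instances_by_class = {name: rng.normal(size=(5, 4))
                      for name in ("cat", "lion", "truck")}
# Hypothetical similarities of each seen class to the unseen class "tiger".
sim_to_tiger = {"cat": 0.8, "lion": 0.9, "truck": 0.05}

positives = borrow_positives(sim_to_tiger, instances_by_class)
# `positives` now holds the cat and lion instances, ready to train a
# binary one-versus-rest classifier for "tiger".
```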
Synthesizing Methods
This category creates labelled examples of the unseen classes by synthesising pseudo-instances using various strategies. Some approaches assume that the examples of each class follow a certain distribution, estimate the distribution parameters of the unseen classes under this assumption, and sample synthetic instances from the result.
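A minimal sketch of a synthesising method, under the common simplifying assumption that class-conditional features are Gaussian; here the unseen class's statistics are estimated as a similarity-weighted blend of related seen classes, which is one simple choice among many, and all data and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical seen-class features, assumed Gaussian in feature space.
cat_feats = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
lion_feats = rng.normal(loc=2.0, scale=1.0, size=(100, 8))

# Estimate the unseen class's distribution from related seen classes,
# e.g. as a similarity-weighted combination of their statistics.
w_cat, w_lion = 0.4, 0.6            # hypothetical similarities to "tiger"
mu = w_cat * cat_feats.mean(axis=0) + w_lion * lion_feats.mean(axis=0)
sigma = w_cat * cat_feats.std(axis=0) + w_lion * lion_feats.std(axis=0)

# Sample pseudo-instances for "tiger" and label them; an ordinary
# supervised classifier can now be trained on these synthetic examples.
pseudo_tiger = rng.normal(loc=mu, scale=sigma, size=(100, 8))
```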
Evaluation Metrics for Zero-Shot Learning
To assess a classifier's performance in zero-shot recognition, we compute the average per-class top-k accuracy. This metric measures the proportion of test samples whose actual class (label) is among the k classes to which the classifier assigns the highest predicted probability.
Suppose we have a five-class classification problem and the classifier outputs the probability distribution [0.40, 0.15, 0.15, 0.20, 0.10] for classes 0 to 4. The top-2 classes are then class-0 and class-3, so if the sample's true label is either class-0 or class-3, the prediction counts as correct.
The average per-class top-k accuracy is obtained by averaging the top-k accuracies of the individual classes: accuracy = (1/C) * Σ_c acc_c, where C is the number of unseen classes and acc_c is the top-k accuracy on class c.
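The metric can be computed as in the following NumPy sketch, with the worked example from the text as a sanity check.

```python
import numpy as np

def average_per_class_topk(probs, labels, k):
    # Indices of the k highest-probability classes for each sample.
    topk = np.argsort(probs, axis=1)[:, -k:]
    hits = np.array([label in row for label, row in zip(labels, topk)])
    # Top-k accuracy per class, then the mean over the classes present.
    per_class = [hits[labels == c].mean() for c in np.unique(labels)]
    return float(np.mean(per_class))

# The five-class example from the text: classes 0 and 3 are the top-2,
# so a sample whose true label is 3 counts as a correct top-2 prediction.
probs = np.array([[0.40, 0.15, 0.15, 0.20, 0.10]])
print(average_per_class_topk(probs, np.array([3]), k=2))  # -> 1.0
```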
Zero-Shot Learning Limitations
Zero-Shot Learning, like any other technique, has its limitations. Here are some of the most common challenges you will encounter when putting Zero-Shot Learning into practice.
Bias
When a model is trained only on data from the seen classes, it tends, during the testing phase, to predict samples from unseen classes as belonging to the seen classes. This bias becomes more pronounced when the model is tested on a mixture of seen and unseen class data.
Domain Shift
The primary purpose of Zero-Shot Learning is to adapt a pre-trained model to classify samples from novel classes as data from these classes becomes available. For example, a model trained with supervised learning to recognise tigers and zebras may be extended on the fly to identify elephants.
The "domain shift" problem, which happens when the distribution of data in the training set (seen classes) and the testing set (which may include samples from both visible and unseen classes), is a typical obstacle in this setting.
As a result, zero-shot learning is inappropriate for making predictions on pictures that are outside of the domain on which the model was trained.
Hubness
The "hubness" problem, which occurs owing to the curse of dimensionality in the closest neighbour search of high-dimensional data, is a frequent obstacle for Zero-Shot Learning. "Hubs" are points that regularly occur in the k-nearest neighbour set of other points.
The hubness problem occurs in Zero-Shot Learning for two reasons:
- Because both the input and semantic data reside in high-dimensional spaces, projecting these high-dimensional vectors onto a lower-dimensional space can cause points to cluster as hubs owing to the reduced variance.
- Ridge regression, a technique popular in Zero-Shot Learning, can also induce hubness. When the nearest-neighbour search is performed in the semantic space, the points in these hubs lie close to the semantic attribute vectors of many classes, so predictions become biased and only a few classes are regularly predicted, degrading performance during the testing phase.
Semantic Loss
Semantic loss is the loss of latent information contained in the seen classes: information that may not have been essential for discriminating between the seen classes, but that may matter when categorising unseen classes at test time. It occurs when the model has learned to focus on certain characteristics of the seen classes and overlooks other, potentially relevant information. For example, a cat/dog classifier may concentrate on features such as facial appearance and body shape while ignoring the fact that both are four-legged animals, knowledge that could help it categorise humans as an unseen class during the test phase.
Zero-Shot Learning: Key Takeaways
Zero-Shot Learning is a Machine Learning paradigm in which a pre-trained model evaluates test data from classes that were not used during training. The model must generalise to new categories without labelled examples of them, relying instead on auxiliary semantic information.
Such learning frameworks reduce the requirement for model retraining.
Furthermore, we do not need to be as concerned about dataset class imbalance. Zero-Shot Learning has been applied in numerous areas of computer vision, including image classification, segmentation, and object detection and tracking, as well as in natural language processing.
However, Zero-Shot Learning frameworks also face the issues described above, and the field continues to be actively researched to extend its capabilities.
Learn more about the SKY ENGINE AI offering
For more information on synthetic data tools, methods, and technology, check out the following resources:
- Telecommunication equipment AI-driven detection using drones
- A talk on Ray tracing for Deep Learning to generate synthetic data at GTC 2020 by Jakub Pietrzak, CTO SKY ENGINE AI (Free registration required)
- Example synthetic data videos generated in SKY ENGINE AI platform to accelerate AI models training
- Presentation on using synthetic data for building 5G network performance optimization platform
- Working example with code for synthetic data generation and AI models training for team-based sport analytics in SKY ENGINE AI platform