CA3002100A1 - Unsupervised domain adaptation with similarity learning for images - Google Patents


Info

Publication number
CA3002100A1
Authority
CA
Canada
Prior art keywords
input image
features
prototype
domain
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CA3002100A
Other languages
French (fr)
Inventor
Pedro Henrique Oliveira Pinheiro
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ServiceNow Canada Inc
Original Assignee
Element AI Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Element AI Inc filed Critical Element AI Inc
Priority to CA3002100A priority Critical patent/CA3002100A1/en
Publication of CA3002100A1 publication Critical patent/CA3002100A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

Systems and methods for addressing the cross-domain issue using a similarity-based classifier convolutional neural network. An input image is passed through a convolutional neural network that extracts its features. These features are compared to features of multiple sets of prototype representations, with each set of prototype representations being extracted from and representing a category of images. The similarity between the features of the input image and the features of the various prototype representations is scored, and the prototype representation whose features are most similar to the features of the input image has its label applied to the input image. The classifier is trained using images from a source domain while the input images are from a target domain. The training for the classifier is such that the classifier will be unable to determine if a specific data point is from the source domain or from the target domain.

Description

UNSUPERVISED DOMAIN ADAPTATION WITH SIMILARITY LEARNING FOR IMAGES
TECHNICAL FIELD
The present invention relates to artificial intelligence. More specifically, the present invention relates to systems and methods for classifying and labelling images based on similarities between input images and known labelled images.
BACKGROUND
Convolutional Neural Network-based methods achieve excellent results in large-scale supervised learning problems, where a lot of labeled data exists. Moreover, the features learned by these networks are quite general and can be used in a variety of vision problems, such as image captioning, object detection, and segmentation.
However, direct transfers of features between different domains do not work very well in practice, as the data distributions of the domains might differ. In computer vision, this problem is sometimes referred to as domain shift. The most commonly used approach to transferring learned features is to further modify them through a process called "fine-tuning": the features are adapted by training the neural network with labeled samples from the new data distribution. In many cases, however, acquiring labeled data can be expensive.
Unsupervised Domain Adaptation deals with the domain shift problem. What is of interest are representations that are invariant to domains with different data distributions. In this scenario, the machine has access to a labeled dataset (called the source domain) and an unlabeled dataset (with a similar but different data distribution, called the target domain), and the objective is to correctly infer the labels of the latter. Most current approaches are based on deep learning methods and consist of two steps: (i) learn features that preserve a low risk on labeled samples (i.e. the source domain) and (ii) make the features from both domains as indistinguishable as possible, so that a classifier trained on the source domain can also be applied to the target domain.
Theoretical studies in domain adaptation suggest that a good cross-domain representation is one in which a system cannot identify from which domain the original input came. Most current approaches to domain adaptation achieve this goal by mapping cross-domain features into a common space using deep learning methods. This is generally achieved by minimizing some measure of domain variance (such as the Maximum Mean Discrepancy (MMD)) or by matching moments of the two distributions.
Another way to deal with the domain adaptation problem is to make use of adversarial training. In this scenario, the domain adaptation problem is cast as a minimax game between a domain classifier and feature learning. A neural network learns features that are, at the same time, as discriminative as possible (in the source domain) and as indistinguishable as possible (between the domains). In general, the classifier used in the source domain is a simple fully-connected network followed by a softmax over the categories (as in standard supervised learning). While this is, in principle, a good idea (given that the representations are trained to be indistinguishable), it leaves the shared representation vulnerable to contamination by noise that is correlated with the underlying shared distribution.
From the above, there is therefore a need for systems and methods that provide solutions to the above issues while mitigating the shortcomings of the prior art.
SUMMARY
The present invention relates to systems and methods for addressing the cross-domain issue using a similarity-based classifier convolutional neural network. An input image is passed through a convolutional neural network that extracts its features. These features are then compared to the features of multiple sets of prototype representations, with each set of prototype representations being extracted from and representing a category of images. The similarity between the features of the input image and the features of the various prototype representations is scored, and the prototype representation whose features are most similar to the features of the input image will have its label applied to the input image. The classifier is trained using images from a source domain and the input images are from a target domain. The training for the classifier is such that the classifier will be unable to determine if a specific data point is from the source domain or from the target domain.
In one aspect, the present invention provides a method for assigning labels to unlabeled input images, the method comprising:
a) receiving an unlabeled input image;
b) comparing features of said input image to features of a plurality of prototype representations, each of said plurality of prototype representations being representative of a category of images and each prototype representation being associated with a specific label;
c) determining a similarity between features of said input image and features of each of said plurality of prototype representations;
d) determining which of said plurality of prototype representations is most similar to said input image;
e) associating said input image with a specific label associated with a prototype representation which is most similar to said input image;
wherein:
- features of said input image are extracted prior to step b) by way of a convolutional neural network; and
- said input image and images in categories of images represented by said prototype representations are in different domains.
In another aspect, the present invention provides a system for determining an input image's content, the system comprising:
- a first convolutional neural network for receiving said input image and extracting features of said input image;
- a database of pre-extracted features of prototype representations, each prototype representation being representative of a category of images;
- a feature comparison block for receiving said features of said input image from said first convolutional neural network and for receiving said features of said prototype representations from said database, said feature comparison block also being for comparing features of said input image and features of said prototype representations to determine which of said prototype representations is most similar to said input image;
wherein:
- said system outputs an indication of which of said prototype representations is most similar to said input image.
BRIEF DESCRIPTION OF THE DRAWINGS
The embodiments of the present invention will now be described by reference to the following figures, in which identical reference numerals in different figures indicate identical elements and in which:
FIGURE 1 is a schematic diagram detailing one aspect of the present invention;
FIGURE 2 is a block diagram illustrating another aspect of the present invention; and
FIGURE 3 is a flowchart detailing the steps in a method according to another aspect of the present invention.
DETAILED DESCRIPTION
Referring to Figure 1, a schematic diagram illustrating one aspect of the present invention is presented. In the system 10, an input image 20 is passed through a first convolutional neural network 30 that extracts the input image's features. These features are then compared, using a comparison block 40, with the extracted features of multiple prototype representations 50A, 50B, 50C. These prototype representations 50A ... 50C were previously passed through second convolutional neural networks 60A ... 60C to extract their features. The comparison block 40 determines which of the prototype representations 50A ... 50C, based on the extracted features, is most similar to the input image 20. The label for the prototype representation that is most similar to the input image 20 is then assigned to the input image 20.
It should be clear that the content of each of the different prototype representations represents a different class or category of items or subjects. Thus, it can be seen from Figure 1 that, as examples, the three categories represented by the illustrated prototype representations are: airplanes (prototype representation 50A), bicycles (prototype representation 50B), and trucks (prototype representation 50C).
The prototype representations are, preferably, extracted from multiple images of the category that the prototype representation is representing.
From Figure 1, it should be clear that the input image 20 is most similar to the prototype representation 50C (i.e. a prototype representation of a truck) and that the output of the system is a labeled input image with its label being the same as the label for prototype representation 50C.
To train the various neural networks used in the system 10, a fully connected discriminator neural network 70 is also used.
The first convolutional neural network, f(·), is trained using source domain images and target domain images, the desired goal being that the output of this first neural network should not be classifiable as being from either the source domain or the target domain. As noted above, the source domain and the target domain are, of course, different domains. During training, the output of f(·) is passed through the discriminator network and through a classifier network that determines probabilities as to whether the output is from the source domain or the target domain. Once the discriminator network and the classifier network are unable to determine whether the output of f(·) is from the source or the target domain, the network f(·) is considered trained.
It should be clear that the present invention proposes a different way to perform classification while keeping the adversarial domain-confusion component. The present invention, in one embodiment, uses a similarity-based classifier in which each input image is compared to a set of prototypes (or centroids). The label associated with the prototype representation that best matches the query or input image is given to that query or input image. For clarity, these prototypes are vector representations that are representative of each category that appears in the dataset. These prototypes are learned at the same time as the image embeddings, and the whole system can be trained efficiently end-to-end using backpropagation.
Tests have shown that the present invention is more robust than the prior art, especially when applied to domain shift between two datasets. Results on two important large-scale domain adaptation datasets (Office-31, which contains images of office objects in three different domains, and VisDA, a large-scale dataset focused on the simulation-to-reality shift) have shown that the present invention is both viable and efficient.
It should be clear that the present invention may be used for multiple purposes. More specifically, the present invention may be used to train systems using synthetic data (e.g. using 3D rendered images or game engine based images). One use of the present invention would be to use a virtually unlimited number of labeled (and synthetic) images to train the invention's system and then to adapt the system of the invention to handle natural images. Indeed, in certain areas of research, such as robotics vision or reinforcement learning, acquiring training samples can be very expensive. Training in a synthetic (i.e. artificially generated) domain and then transferring the learned features to real-world environments can be one solution to alleviate the high cost of acquiring and labeling training data.
For clarity, in the domain adaptation problem, there is access to labeled images $X_s = \{(x_i^s, y_i^s)\}_{i=0}^{N_s}$ drawn from a source domain distribution $p_s(x,y)$ and to target images $X_t = \{x_i^t\}_{i=0}^{N_t}$ drawn from a target distribution $p_t(x,y)$. In the unsupervised setting, there is no information about the labels on the target domain.
In the present invention, the problem of unsupervised domain adaptation is addressed using a similarity-based classifier. As can be seen from Figure 1, one aspect of the invention uses two different components: (i) the domain-confusion component (i.e. the discriminator), which forces the features of both domains, $f(X_s)$ and $f(X_t)$, to be as indistinguishable as possible, and (ii) a classifier based on a set of prototypes $p_c$, one for each category $c \in \{1, 2, \ldots, C\}$. Both components are trained jointly and in an end-to-end fashion.
This approach is based on the assumption that there exists an embedding for each category such that all the points of the category cluster around it, independently of their domain. Inference is then performed on a test image by simply finding the most semantically similar prototype.
The classifier network used in the present invention is composed of C different prototypes, with one prototype per category. Each prototype represents a general embedding for a category, incorporating all of the category's variations. It is assumed that there exists a representation space in which all samples of a given category can be clustered around its corresponding prototype.
Each prototype is represented by an m-dimensional vector $p_c \in \mathbb{R}^m$, parametrized by a convolutional neural network $g(\cdot)$ with trainable parameters $\theta_g$. The prototypes are computed as the average representation of all source samples belonging to the category $c$:

$$p_c = \frac{1}{|X_c|} \sum_{x_i^s \in X_c} g(x_i^s) \qquad (1)$$

where $X_c$ is the set of all images in the source domain labeled with category $c$. Similarly, the input images (from either domain) are represented as an n-dimensional vector $f_i = f(x_i) \in \mathbb{R}^n$, produced by passing the input image through a convolutional neural network $f(\cdot)$ parametrized by $\theta_f$.
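By way of illustration only, the following is a minimal sketch of the prototype computation of Equation 1 in Python with PyTorch (an assumed framework; the invention is not limited to any particular framework, and names such as g_net and images_by_class are hypothetical). During training the prototypes are approximated from random subsets with gradients flowing through g; the no-grad version below corresponds to computing them a priori at inference time:

```python
import torch

def compute_prototypes(g_net, images_by_class):
    """Approximate Equation 1: one prototype per category, computed as the
    mean g(.) embedding of that category's source images."""
    prototypes = []
    for imgs in images_by_class:        # imgs: tensor of shape (N_c, 3, H, W)
        with torch.no_grad():
            emb = g_net(imgs)           # (N_c, m) category embeddings
        prototypes.append(emb.mean(dim=0))
    return torch.stack(prototypes)      # (C, m), row c is prototype p_c
```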
By leveraging the powerful representations of convolutional neural networks, the present invention uses a simple model that can predict which of the prototypes (and therefore which category) best describes a given input. For this purpose, a similarity metric between images and prototypes is learned. The similarity between an input image $x_i$ and prototype $p_c$ is defined simply as a bilinear operation:

$$h(x_i, p_c) = f_i^T S p_c \qquad (2)$$

with $S \in \mathbb{R}^{n \times m}$ being the trainable parameters. $S$ is an unconstrained bilinear similarity operator, and it does not have to be positive or symmetric. In Figure 1, this similarity metric is represented by the comparison block 40.
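A corresponding sketch of the bilinear similarity of Equation 2, again illustrative only and assuming PyTorch, follows:

```python
import torch
import torch.nn as nn

class BilinearSimilarity(nn.Module):
    """h(x_i, p_c) = f_i^T S p_c (Equation 2), with S an unconstrained
    trainable n x m operator (not necessarily positive or symmetric)."""
    def __init__(self, n, m):
        super().__init__()
        self.S = nn.Parameter(0.01 * torch.randn(n, m))

    def forward(self, f_i, prototypes):
        # f_i: (B, n) input embeddings; prototypes: (C, m), rows are p_c
        return f_i @ self.S @ prototypes.t()   # (B, C) similarity scores
```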
Regarding the convolutional neural networks $f(\cdot)$ and $g(\cdot)$, these networks do not share the same parameters. This is particularly important in the domain adaptation scenario, in which the representations (from the input and from the prototypes) have different roles in the classification. On the one hand, the input features $f_i$ should be domain invariant while simultaneously matching one of the prototypes $p_c$. On the other hand, each embedding prototype should be as close as possible to the source domain images that represent its category. In the case of single-domain classification, it would make sense to use the same network for $f$ and $g$ to reduce the capacity of the system, since there is no shift in the domain. In that case, the model would be similar to Siamese Networks.
It should be clear that the neural networks used are trained to discriminate the target prototype $p_c$ from all other prototypes $p_k$ (with $k \neq c$), given a labeled image. The output of the network is interpreted as class conditional probabilities by applying a softmax function over the bilinear operator:

$$P_\Theta(c \mid x_i, \{p_k\}) = \frac{e^{h(x_i, p_c)}}{\sum_k e^{h(x_i, p_k)}} \qquad (3)$$

where $\Theta = \{\theta_f, \theta_g, S\}$ represents the set of all trainable parameters of the system. Learning is achieved by minimizing the negative log-likelihood (with respect to $\Theta$) over all labeled samples $(x_i, y_i) \in X_s$:

$$\mathcal{L}_{class}(\Theta) = -\sum_{(x_i, y_i) \in X_s} \left[ h(x_i, p_{y_i}) - \log \sum_k e^{h(x_i, p_k)} \right] + \gamma R \qquad (4)$$

where $R$ is a regularization term that encourages the prototypes to encode different aspects of each category. At each training iteration, the prototypes are approximated by choosing a random subset of examples for each category.
The regularizer is modeled as a soft orthogonality constraint. Let $P_p$ be a matrix whose rows are the prototypes; the regularization term is then written as:

$$R = \| P_p P_p^T - I \|_F^2 \qquad (5)$$

where $\| \cdot \|_F^2$ is the squared Frobenius norm and $I$ is the identity matrix.
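The classification loss of Equations 3 to 5 may be sketched as follows (illustrative only; the built-in cross-entropy averages over the batch rather than summing, which differs from Equation 4 only by a constant factor):

```python
import torch
import torch.nn.functional as F

def classification_loss(scores, labels, prototypes, gamma=0.1):
    """Equations 3-5 in one step: softmax over the bilinear scores, negative
    log-likelihood on labeled source samples, plus the soft orthogonality
    regularizer on the prototype matrix."""
    nll = F.cross_entropy(scores, labels)    # Eqs. 3-4 (mean over the batch)
    P = prototypes                           # (C, m), rows are prototypes
    gram = P @ P.t()                         # pairwise prototype similarities
    eye = torch.eye(P.size(0), device=P.device)
    reg = ((gram - eye) ** 2).sum()          # squared Frobenius norm (Eq. 5)
    return nll + gamma * reg
```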
To train the system, the classifier is trained on the source domain (labeled samples) and is then applied to the target domain (unlabeled samples). To achieve this goal, the system learns features that maximize the domain confusion while preserving a low risk on the source domain.
The domain-invariant component is responsible for minimizing the distance between the empirical source and target feature representation distributions, $f(X_s)$ and $f(X_t)$. Assuming this is the case, the classifier trained on the source feature representation can thus be directly applied to the target representation.
Domain confusion is achieved using a domain discriminator $D$, parametrized by $\theta_d$. The discriminator classifies whether a data point is drawn from the source or the target domain, and it is optimized following a standard classification loss:

$$\mathcal{L}_{disc}(\theta_f, \theta_d) = -\sum_{i=0}^{N_s} \log D(f(x_i^s)) - \sum_{i=0}^{N_t} \log\left(1 - D(f(x_i^t))\right) \qquad (6)$$

Domain confusion is achieved by applying the Reverse Gradient (RevGrad) algorithm, which optimizes the features to maximize the discriminator loss directly. Reference may be made to the article "Domain-adversarial training of neural networks" in the Journal of Machine Learning Research, 2016 for more details.
The contents of this article are hereby incorporated herein by reference.
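The Reverse Gradient trick can be sketched as a custom autograd function (a standard construction, shown here for illustration and not as the only possible implementation):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Reverse Gradient (RevGrad): identity on the forward pass, gradient
    multiplied by -lambda on the backward pass, so minimizing the
    discriminator loss through this layer maximizes domain confusion."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=0.5):
    return GradReverse.apply(x, lam)

# During training the discriminator receives grad_reverse(f(x)), so the
# single loss of Equation 6 trains D normally while pushing the features
# of f toward domain invariance.
```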
The system is trained to jointly maximize the domain confusion (between source and target) and to infer the correct category on the source (labeled) samples through the similarity-based classifier described above. The final goal is therefore to optimize the following minimax objective:

$$\min_{\theta_f, \theta_g, S} \; \max_{\theta_d} \; \mathcal{L}_{class}(\theta_f, \theta_g, S) - \lambda \mathcal{L}_{disc}(\theta_f, \theta_d) \qquad (7)$$

where $\lambda$ is a balance parameter between the two losses. The objective is optimized using stochastic gradient descent with momentum.
At inference time, the prototypes are computed a priori, following Equation 1, and stored in memory. The similarity between a target domain test image and each prototype is computed, and the label that best matches the query is output.
In one implementation, the parameters of networks f and g are initialized with a ResNet-50 that was pre-trained to perform classification on the ImageNet dataset, and the classification layer is removed. The discriminator network and the bilinear classifier are initialized randomly from a uniform distribution.
Parameters used are as follows: the balance parameter $\lambda = 0.5$ and the regularization coefficient $\gamma = 0.1$ (it was observed that the system is robust to this hyperparameter). A learning rate of $10^{-5}$ is used, with a weight decay of $10^{-5}$ and a momentum of 0.99. Since the similarity matrix and the discriminator are trained from scratch, their learning rates are set to be 10 times that of the other layers.
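One possible way of setting up such an optimizer, sketched with hypothetical module names drawn from the earlier sketches, is:

```python
import torch

def make_optimizer(f_net, g_net, similarity, discriminator):
    """Mirror the described schedule: base learning rate 1e-5 (weight decay
    1e-5, momentum 0.99) for the pre-trained networks f and g, and 10x that
    rate for the similarity operator and the discriminator, which are
    trained from scratch."""
    return torch.optim.SGD(
        [
            {"params": list(f_net.parameters()) + list(g_net.parameters()),
             "lr": 1e-5},
            {"params": (list(similarity.parameters())
                        + list(discriminator.parameters())),
             "lr": 1e-4},
        ],
        momentum=0.99,
        weight_decay=1e-5,
    )
```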
During training, the images (from both domains) are re-sized such that the shorter dimension is of size 300 pixels and a patch of 224 x 224 is randomly sampled. Each mini-batch has 32 images from each domain. At each training iteration, the prototypes are approximated by picking one random sample for each class. It was noticed that the training converges with this design choice and that it is more efficient.
The discriminator is a simple fully-connected network. It contains two layers, each of dimension 1024 and ReLU non-linearity, followed by the domain classifier. It receives, as input, the output of network f and outputs the probability of which domain the input comes from (this probability is, again, modeled by a softmax). It should be clear that a softmax function is a function that highlights the largest values and suppresses values which are significantly below the maximum value.
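For illustration, such a discriminator might be sketched as follows (the function name and default width are assumptions consistent with the description above):

```python
import torch.nn as nn

def make_discriminator(n, hidden=1024):
    """Two fully connected layers of width 1024 with ReLU non-linearities,
    followed by a two-way domain classifier (softmax applied in the loss)."""
    return nn.Sequential(
        nn.Linear(n, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, 2),
    )
```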
The bilinear operation is parametrized with a low-rank approximation, $S = U^T V$ (with $U \in \mathbb{R}^{\tilde{m} \times n}$, $V \in \mathbb{R}^{\tilde{m} \times m}$, $\tilde{m} = 512$), and Equation 2 thus becomes:

$$h(x_i, p_c) = (U f_i)^T (V p_c) \qquad (8)$$

This parametrization brings multiple benefits. It allows for control of the system capacity. As well, it allows the system to be trivially implemented in any modern deep learning framework, and the system benefits from an efficient implementation. Finally, the parametrization also provides fast inference: the right side of $h$ is independent of the input image, so it can be computed only once and stored in memory.
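A sketch of the low-rank parametrization of Equation 8, with the prototype projections exposed for caching (the class name and the reconstructed dimensions are assumptions):

```python
import torch
import torch.nn as nn

class LowRankBilinear(nn.Module):
    """Equation 8: S = U^T V, so h(x_i, p_c) = (U f_i)^T (V p_c). The
    projected prototypes V p_c are input-independent and can be cached."""
    def __init__(self, n, m, rank=512):
        super().__init__()
        self.U = nn.Linear(n, rank, bias=False)
        self.V = nn.Linear(m, rank, bias=False)

    def forward(self, f_i, prototypes):
        # f_i: (B, n); prototypes: (C, m) -> (B, C) similarity scores
        return self.U(f_i) @ self.V(prototypes).t()

    def project_prototypes(self, prototypes):
        # Precompute V p_c once at inference time and store it in memory.
        with torch.no_grad():
            return self.V(prototypes)               # (C, rank)
```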
At inference time, the shorter dimension of the test image is resized to 300 pixels, as in the training stage. The model is applied densely at every location, resulting in an output with spatial dimensions bigger than 1. The output is averaged over the spatial dimensions to produce a one-dimensional vector. Because the similarity measure is a bilinear operation, the averaging can be seen as an ensemble over different spatial locations of the test image.
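Dense inference with spatial averaging might be sketched as follows (assuming, hypothetically, that the feature network can return its pre-pooling feature map):

```python
import torch

def dense_scores(f_net, sim, prototypes, image):
    """Dense inference: apply f convolutionally over the resized test image,
    score every spatial location against the prototypes, and average the
    scores (an ensemble over spatial locations)."""
    fmap = f_net(image)                        # (1, n, H', W') feature map
    feats = fmap.flatten(2).squeeze(0).t()     # (H'*W', n) local features
    scores = sim(feats, prototypes)            # (H'*W', C) per location
    return scores.mean(dim=0)                  # (C,) averaged scores
```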
It should be clear that, preferably, the input images and the images used to produce the prototype representations are converted to a common resolution prior to being passed through the relevant convolutional neural networks $f(\cdot)$ and $g(\cdot)$. Extracting features from images of different resolutions may produce less than useful features.
Referring to Figure 2, a block diagram of another aspect of the invention is illustrated. This aspect of the invention relates to a system for identifying the content of an input image based on prototype representations representative of categories of image content. This system 100 has an input image 110 being fed into a pretrained and fully connected convolutional neural network 120 that extracts the features of the input image 110. A database 130 contains the features of different prototype representations, and these features are to be compared to the features from the input image 110. As can be imagined, the features of the different prototype representations (with each prototype representation being representative of a different category or class of image content) were previously extracted using a different convolutional neural network (not shown) and were stored in the database. These prototype features are sent to a comparator block 140 along with the features from the input image 110. The output of the comparator block 140 is an indication as to which prototype representation is most similar to the input image based on their features. Or, in one implementation, the output would be a label for the input image, this label being the label of the prototype representation which was most similar to the input image.
It should be noted that the system in Figure 2 may be implemented using any suitable data processing system. As such, for example, if an input image contains an image of a mode of transportation, the system can be used to determine what type of mode of transportation is in the input image. Prototype representations representative of a car, a truck, a bicycle, a train, an airplane, etc. can be used and, once the similarity of the input image to the prototype representations has been determined, this can indicate the content of the input image. Of course, other images and other uses of the system can be contemplated.
The system in Figure 2 is thus used to implement a method whose steps are detailed in Figure 3. Referring to Figure 3, the method begins at step 200, that of receiving an input image.
This input image is then passed through a convolutional neural network in step 210 to extract a representation representative of the input image's features. Concurrently or soon after the features have been extracted, a database is then queried for one or more prototype representations of different classes or categories of image content (step 220). These prototype representations are retrieved from the database and compared (step 230) with the extracted features from the input image. Of the retrieved representations, the one most similar to the extracted features of the input image is selected (step 240). The logic then determines if there are more representations to be compared to the features from the input image (step 250). If there are more representations, then the logic loops back to step 230 to retrieve these as yet uncompared representations. If, however, no more representations are to be compared to the extracted features, then the logic moves to step 260, that of assigning a label to the input image, the label being the label of the representation that is most similar to the features of the input image.
The embodiments of the invention may be executed by a computer processor or similar device programmed in the manner of method steps, or may be executed by an electronic system which is provided with means for executing these steps. Similarly, an electronic memory means such as computer diskettes, CD-ROMs, Random Access Memory (RAM), Read Only Memory (ROM) or similar computer software storage media known in the art, may be programmed to execute such method steps. As well, electronic signals representing these method steps may also be transmitted via a communication network.
Embodiments of the invention may be implemented in any conventional computer programming language. For example, preferred embodiments may be implemented in a procedural programming language (e.g. "C") or an object-oriented language (e.g. "C++", "Java", "PHP", "Python" or "C#"). Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.
Embodiments can be implemented as a computer program product for use with a computer system. Such implementations may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or electrical communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques). The series of computer instructions embodies all or part of the functionality previously described herein. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink-wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server over a network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention may be implemented as entirely hardware, or entirely software (e.g., a computer program product).
A person understanding this invention may now conceive of alternative structures and embodiments or variations of the above all of which are intended to fall within the scope of the invention as defined in the claims that follow.

Claims (11)

We claim:
1. A method for assigning labels to unlabeled input images, the method comprising:
a) receiving an unlabeled input image;
b) comparing features of said input image to features of a plurality of prototype representations, each of said plurality of prototype representations being representative of a category of images and each prototype representation being associated with a specific label;
c) determining a similarity between features of said input image and features of each of said plurality of prototype representations;
d) determining which of said plurality of prototype representations is most similar to said input image;
e) associating said input image with a specific label associated with a prototype representation which is most similar to said input image;
wherein features of said input image are extracted prior to step b) by way of a first convolutional neural network; and said input image and images in categories of images represented by said prototype representations are in different domains.
2. The method according to claim 1, wherein said images in said categories of images are artificially generated images.
3. The method according to claim 1, wherein each of said prototype representations comprises an m-dimensional vector parametrized by a second convolutional neural network with trainable parameters.
4. The method according to claim 3, wherein each of said prototype representations is computed by an average representation of all source images belonging to a category c:

$$p_c = \frac{1}{|X_c|} \sum_{x_i \in X_c} g(x_i)$$

where $X_c$ is a set of all images in said source domain labeled with category c.
5. The method according to claim 1, wherein said input image is represented by an n-dimensional vector created by passing said input image through said first convolutional neural network.
6. The method according to claim 1, wherein a similarity between features in said input image and features in said prototype representations is determined using a bilinear operation.
7. The method according to claim 1, wherein said first convolutional neural network is trained using a domain discriminator, said domain discriminator being for classifying whether a data point is from a source domain or a target domain, said first neural network being trained to thereby maximize an amount of data points for which said domain discriminator is unable to determine if said data points are from said source domain or said target domain.
8. A system for determining an input image's content, the system comprising:
a first convolutional neural network for receiving said input image and extracting features of said input image;
a database of pre-extracted features of prototype representations, each prototype representation being representative of a category of images;
a feature comparison block for receiving said features of said input image from said first convolutional neural network and for receiving said features of said prototype representations from said database, said feature comparison block also being for comparing features of said input image and features of said prototype representations to determine which of said prototype representations is most similar to said input image;
wherein said system outputs an indication of which of said prototype representations is most similar to said input image.
9. The system according to claim 8, wherein said first convolutional neural network is trained using a domain discriminator, said domain discriminator being for classifying whether a data point is from a source domain or a target domain, said first neural network being trained to thereby maximize an amount of data points for which said domain discriminator is unable to determine if said data points are from said source domain or said target domain.
10. The system according to claim 8, wherein features of said input image are represented by an n-dimensional vector created by passing said input image through said first convolutional neural network.
11. The system according to claim 8, wherein features of each of said prototype representations are represented by an m-dimensional vector parametrized by a second convolutional neural network with trainable parameters.
CA3002100A 2018-04-18 2018-04-18 Unsupervised domain adaptation with similarity learning for images Pending CA3002100A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA3002100A CA3002100A1 (en) 2018-04-18 2018-04-18 Unsupervised domain adaptation with similarity learning for images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA3002100A CA3002100A1 (en) 2018-04-18 2018-04-18 Unsupervised domain adaptation with similarity learning for images

Publications (1)

Publication Number Publication Date
CA3002100A1 true CA3002100A1 (en) 2019-10-18

Family

ID=68235777

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3002100A Pending CA3002100A1 (en) 2018-04-18 2018-04-18 Unsupervised domain adaptation with similarity learning for images

Country Status (1)

Country Link
CA (1) CA3002100A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110880019A (en) * 2019-10-30 2020-03-13 北京中科研究院 Method for adaptively training target domain classification model through unsupervised domain
CN110880019B (en) * 2019-10-30 2022-07-12 北京中科研究院 Method for adaptively training target domain classification model through unsupervised domain
CN116128876A (en) * 2023-04-04 2023-05-16 中南大学 Medical image classification method and system based on heterogeneous domain
CN116543237A (en) * 2023-06-27 2023-08-04 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Image classification method, system, equipment and medium for non-supervision domain adaptation of passive domain
CN116543237B (en) * 2023-06-27 2023-11-28 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Image classification method, system, equipment and medium for non-supervision domain adaptation of passive domain


Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20220929
