WO2020191706A1 - Active learning automatic image annotation system and method - Google Patents

Active learning automatic image annotation system and method

Info

Publication number
WO2020191706A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
attribute
input image
attributes
similar images
Prior art date
Application number
PCT/CN2019/080062
Other languages
English (en)
Chinese (zh)
Inventor
倪伟定
林仕胜
杜坚民
蔡一帆
蔡日星
Original Assignee
香港纺织及成衣研发中心有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 香港纺织及成衣研发中心有限公司 filed Critical 香港纺织及成衣研发中心有限公司
Priority to PCT/CN2019/080062 priority Critical patent/WO2020191706A1/fr
Publication of WO2020191706A1 publication Critical patent/WO2020191706A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing

Definitions

  • The invention relates to the field of image annotation. More specifically, the present invention relates to an active learning automatic image annotation system and method.
  • Tags on images are usually entered manually. This approach is costly and extremely time-consuming, especially for large and continuously growing image databases.
  • Patent document US7529732B2 provides an image retrieval system and method with semantic and feature relevance feedback. This technique broadly belongs to manually provided relevance feedback.
  • The image retrieval system performs keyword-based and content-based image retrieval, monitors user feedback, and uses that feedback to refine searches and to train itself for future queries.
  • Patent document US7627556B2 likewise provides a technique in which relevant annotations on images are supplied manually.
  • This document discloses semi-automatic labeling of multimedia objects.
  • Based on user feedback on the relevance of objects retrieved by keyword-based and content-based searches, the system automatically tags objects with semantically related keywords and/or updates the relevance between keywords and objects. As the retrieval-feedback-labeling cycle repeats, annotation coverage and the accuracy of future searches continue to improve.
  • US patent document US8204842 discloses a system and method for image annotation and multimodal image retrieval using a probabilistic semantic model that includes at least one joint probability distribution.
  • A Bayesian framework is used for image annotation and text-to-image retrieval.
  • US patent document US7274822B2 discloses face tagging for photo management: for faces whose features are similar to those in a training database, a probability model can be trained by mapping the facial features to the corresponding individual names, after which the model can label a face with a name.
  • Patent document WO2009152390A2 discloses automatic image annotation using semantic distance learning, in which an association probability is estimated for each cluster of images, specifying the probability that a new image is semantically associated with that cluster. Cluster-specific probabilistic annotations for the new image are generated from the manual annotations of the images in each cluster, and the association probabilities and cluster-specific annotations of all clusters are combined to produce the final annotation of the new image.
  • US patent document US8594468B2 discloses a statistical method for large-scale image annotation.
  • That annotation technique compiles visual features and text information from multiple images, hashes the visual features, and clusters the images by their hash values.
  • It labels images by applying a statistical language model constructed from the clustered images.
  • Chinese patent document CN103473275A discloses an automatic image annotation method and system using multi-feature fusion.
  • The method represents image content with multiple feature types, introduces feature signatures composed of those features, and combines them with the K-means clustering algorithm to obtain a multi-feature-fusion image semantic statistical model for automatic image annotation.
  • An active learning automatic image annotation method includes the following steps. Step S1: provide an input image. Step S2: extract the visual features of the input image and use a classifier to obtain the classification attributes of the input image. Step S3: use the visual features of the input image to find similar images in a general image database, and obtain the internal attributes of the similar images from the corresponding general description database. Step S4: concurrently with step S2, search the Internet for images similar to the input image. Step S5: extract the visual features of the similar images. Step S6: compare the visual features of the similar images obtained in step S5 with those of the input image. Step S7: if the compared similarity is higher than a predetermined threshold, obtain the external attributes of the similar images from the Internet. Step S8: integrate the classification attributes, internal attributes, and external attributes to obtain the final annotation of the input image.
  • Step S8 further includes: if there is a conflict between the internal and external attributes, comparing the similarity of the similar images from the general image database with the similarity of the similar images from the Internet, and using the attributes of the highest-scoring similar image as the final annotation of the input image; or, if neither internal nor external attributes were obtained, using the classification attributes as the final annotation of the image.
  • The active learning automatic image annotation method further includes step S9: the user deletes inappropriate attributes from the final annotation or manually adds other attributes.
  • the visual features include binary hash codes and deep features obtained through a model of a convolutional neural network.
  • Step S3 further includes the following steps: calculating the Hamming distance between the binary hash code of the input image and the binary hash codes of the images in the general image database; if the Hamming distance is below a threshold, placing the image into a candidate pool as a candidate image; and comparing the deep features of the candidate images using cosine similarity.
  • the active learning automatic image annotation method is suitable for the clothing industry.
  • An active learning automatic image annotation system includes: an image input module configured to provide an input image; a feature extraction module configured to extract the visual features of the input image and obtain its classification attributes, and further configured to receive, from the external attribute retrieval module, similar images found on the Internet and extract their visual features; a CBIR-based labeling module configured to receive the visual features of the input image from the feature extraction module, use them to find similar images in a general image database, and obtain the internal attributes of the similar images from the corresponding general description database, the CBIR-based labeling module being further configured to compare the visual features of the similar images from the Internet with those of the input image; an external attribute retrieval module configured to receive the input image from the image input module and, while the feature extraction module extracts the visual features of the input image, search the Internet for similar images, the external attribute retrieval module being further configured to obtain the external attributes of the similar images from the Internet if the compared similarity is higher than the predetermined threshold; and an integration and post-processing module configured to integrate the classification attributes, internal attributes, and external attributes to obtain the final annotation of the input image.
  • The integration and post-processing module is further configured to: if there is a conflict between the internal and external attributes, compare the similarity of the similar images from the general image database with the similarity of the similar images from the Internet, and use the attributes of the highest-scoring similar image as the final annotation of the input image; or, if neither internal nor external attributes were obtained, use the classification attributes as the final annotation of the image.
  • The active learning automatic image annotation system further includes a human-computer interaction module configured to allow the user to delete inappropriate attributes from the final annotation or manually add other attributes.
  • the visual features include binary hash codes and deep features obtained through a model of a convolutional neural network.
  • The CBIR-based labeling module is further configured to: calculate the Hamming distance between the binary hash code of the input image and the binary hash codes of the images in the general image database; if the Hamming distance is below the threshold, place the image into a candidate pool as a candidate image; and compare the deep features of the candidate images using cosine similarity.
  • the active learning automatic image labeling system is suitable for the clothing industry.
  • A computer device includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; when executing the program, the processor implements the following steps. Step S1: provide an input image. Step S2: extract the visual features of the input image and use a classifier to obtain its classification attributes. Step S3: use the visual features of the input image to find similar images in the general image database, and obtain the internal attributes of the similar images from the corresponding general description database. Step S4: concurrently with step S2, search the Internet for images similar to the input image. Step S5: extract the visual features of the similar images. Step S6: compare the visual features of the similar images obtained in step S5 with those of the input image. Step S7: if the compared similarity is higher than the predetermined threshold, obtain the external attributes of the similar images from the Internet. Step S8: integrate the classification attributes, internal attributes, and external attributes to obtain the final annotation of the input image.
  • A computer-readable storage medium has computer instructions stored thereon which, when executed by a processor, implement the steps of the above active learning automatic image annotation method.
  • Because the present invention searches the Internet for the latest information while the user uploads images for query, there is no need to manually update the system database. The invention therefore saves the time and labor of adding to and updating the image and description databases. In addition, it does not wait for any manual update but annotates directly with the latest information from the Internet, thereby avoiding the use of outdated information and keeping the database information current.
  • Fig. 1 shows a flowchart of an active learning automatic image annotation method according to an embodiment of the present invention;
  • Fig. 2 shows a structural block diagram of an active learning automatic image annotation system according to an embodiment of the present invention;
  • Fig. 3 shows a convolutional neural network (CNN)-based model according to an embodiment of the present invention;
  • Fig. 4 shows a precision-recall curve according to an embodiment of the present invention;
  • Fig. 5 shows a schematic diagram of a human-computer interaction interface according to an embodiment of the present invention; and
  • Fig. 6 shows an example application of the active learning automatic image annotation system and method of the present invention in the field of clothing.
  • The present invention relates to an active learning method and system that uses content-based image retrieval (CBIR) for automatic clothing image annotation. By querying a structured image database, it can automatically integrate image content and text mining to assign tags to fashion images, and its database can be updated with the latest information from the Internet.
  • Fig. 1 shows a flowchart of an active learning automatic image labeling method according to an embodiment of the present invention.
  • The active learning automatic image annotation method includes the following steps: providing an input image (step S1); extracting the visual features of the input image and using a classifier to obtain its classification attributes (step S2); and using the visual features of the input image to find similar images in the general image database and obtaining the internal attributes of those similar images from the corresponding general description database (step S3).
  • Concurrently with step S2, the method searches the Internet for images similar to the input image (step S4); extracts the visual features of those similar images (step S5); compares them with the visual features of the input image (step S6); and, if the compared similarity is higher than the predetermined threshold, obtains the external attributes of the similar images from the Internet (step S7).
  • In step S8, the classification attributes, internal attributes, and external attributes are integrated to obtain the final annotation of the input image.
  • The active learning automatic image annotation method of the present invention may further include: the user deletes inappropriate attributes from the final annotation or manually adds other attributes (step S9).
  • In an embodiment, step S8 further includes: if there is a conflict between the internal and external attributes, comparing the similarity of the similar images from the general image database with the similarity of the similar images from the Internet, and using the attributes of the highest-scoring similar image as the final annotation of the input image; or, if neither internal nor external attributes were obtained, using the classification attributes as the final annotation of the image.
  • the visual features include binary hash codes and deep features obtained through a model of a convolutional neural network.
  • In an embodiment, step S3 also includes the following steps: calculating the Hamming distance between the binary hash code of the input image and the binary hash codes of the images in the general image database; if the Hamming distance is below the threshold, placing the image into a candidate pool as a candidate image; and comparing the deep features of the candidate images using cosine similarity, as in the sketch below.
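  • The following Python sketch illustrates this two-stage lookup. It is a minimal sketch only: the function names, the 12-bit Hamming threshold, and the (image_id, hash_code, deep_feature) database layout are assumptions for illustration, not details taken from the patent.

```python
import numpy as np

def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits between two binary hash codes."""
    return int(np.count_nonzero(h1 != h2))

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two deep-feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_similar(query_hash, query_feat, database, hamming_threshold=12, top_k=5):
    """Two-stage CBIR lookup of step S3.

    Stage 1: keep database images whose hash code is within
    hamming_threshold bits of the query (the candidate pool).
    Stage 2: re-rank the candidate pool by cosine similarity of deep features.
    """
    candidates = [
        (img_id, feat)
        for img_id, h, feat in database
        if hamming_distance(query_hash, h) <= hamming_threshold
    ]
    ranked = sorted(
        ((img_id, cosine_similarity(query_feat, feat)) for img_id, feat in candidates),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked[:top_k]
```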
  • Fig. 2 shows a structural block diagram of an active learning automatic image labeling system according to an embodiment of the present invention.
  • The active learning automatic image annotation system of the present invention obtains the internal attributes of an image from the general description database through the cooperation of the following modules: an image input module, which provides the input image; a feature extraction module, which extracts the visual features of the input image and uses a classifier to obtain its classification attributes; and a CBIR-based labeling module, which receives the visual features of the input image from the feature extraction module, uses them to find similar images in the general image database, and obtains the internal attributes of the similar images from the corresponding general description database.
  • The feature extraction module can also receive similar images found on the Internet and extract their visual features.
  • The CBIR-based labeling module can also compare the visual features of similar images from the Internet with those of the input image.
  • the active learning automatic image annotation system of the present invention also includes an external attribute retrieval module.
  • The external attribute retrieval module receives the input image from the image input module and, while the feature extraction module extracts the visual features of the input image, searches the Internet for similar images of the input image.
  • the external attribute retrieval module is also configured to obtain the external attributes of similar images on the Internet if the similarity compared by the CBIR-based labeling module is higher than the predetermined threshold.
  • The active learning automatic image annotation system of the present invention can also obtain the external attributes of the image from the Internet through the feature extraction module, the CBIR-based labeling module, and the external attribute retrieval module.
  • The active learning automatic image annotation system of the present invention also includes an integration and post-processing module configured to integrate the classification attributes, internal attributes, and external attributes to obtain the final annotation of the input image.
  • the active learning automatic image labeling system of the present invention may also include a human-computer interaction module.
  • The human-computer interaction module allows users to delete inappropriate attributes from the final annotation or manually add other attributes, further improving the coverage and accuracy of the image annotation.
  • The feature extraction module extracts the visual features of the image and transmits them to the CBIR (content-based image retrieval)-based labeling module.
  • The CBIR-based labeling module queries the general image database to obtain similar images, then returns the descriptions or labels of those similar images stored in the general description database.
  • The external attribute retrieval module searches the Internet for similar images. These similar images are sent to the feature extraction module for feature extraction and compared with the input image in the CBIR-based labeling module. If the similarity is high, the external attribute retrieval module fetches the text of the websites hosting these similar images; after text mining and analysis, some attributes are recommended as output.
  • The integration and post-processing module integrates the output of the external attribute retrieval module with the attributes retrieved from the general description database based on the content of the input image, producing the final annotation of the clothing object in the input image. The final attributes of the image are stored in the general description database.
  • the input image and similar images retrieved from the Internet are stored in a general image database, and their hash codes and deep features are generated by the feature extraction module.
  • attributes can be exported or displayed for users to view. Users can delete inappropriate attributes, or manually add more attributes.
  • Image and description databases are developed automatically or semi-automatically with new and updated attributes.
  • The feature extraction module uses deep learning methods to extract visual features from the input image and obtain the classification attributes of the input image.
  • the visual features are binary hash codes and deep features obtained through a model based on a convolutional neural network.
  • Fig. 3 shows a model based on a convolutional neural network according to an embodiment of the present invention.
  • The binary hash code is the combined binarized output of a hidden layer added after the last convolutional layer.
  • The output of the hidden layer is denoted Out(H).
  • The activation features extracted by the convolutional layer are rectified (maximized), whitened by PCA (principal component analysis, used mainly for dimensionality reduction), sum-aggregated, and normalized to obtain the deep features of the different models.
  • The deep features of all classifiers are cascaded (concatenated); a sketch of this pipeline follows.
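  • As a rough illustration of this feature pipeline, the following Python sketch binarizes the hidden-layer output into a hash code and builds a deep feature by rectifying, PCA-whitening, sum-aggregating, and L2-normalizing the convolutional activations. All names are hypothetical, and the PCA parameters are assumed to be pre-computed, with components pre-scaled so that projection performs whitening; the exact ordering inside the patent's model may differ.

```python
import numpy as np

def binary_hash(hidden_out: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize the hidden-layer output Out(H) into a binary hash code."""
    return (hidden_out >= threshold).astype(np.uint8)

def deep_feature(conv_maps: np.ndarray,
                 pca_mean: np.ndarray,
                 pca_components: np.ndarray) -> np.ndarray:
    """Build one model's deep feature from its last conv-layer activations.

    conv_maps: (C, H, W) activations; pca_components: (k, C).
    """
    c, h, w = conv_maps.shape
    descriptors = np.maximum(conv_maps, 0).reshape(c, h * w).T  # (H*W, C), rectified
    whitened = (descriptors - pca_mean) @ pca_components.T      # PCA whitening
    feat = whitened.sum(axis=0)                                 # sum-aggregation
    return feat / (np.linalg.norm(feat) + 1e-12)                # L2 normalization

# Deep features of all classifier models are then cascaded (concatenated):
# feature = np.concatenate([deep_feature(m, mu, W) for m, mu, W in models])
```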
  • More attributes, namely the internal attributes, are obtained by using the CBIR technology.
  • The attributes, tags, and descriptions associated with the top k similar images are retrieved from the general description database and returned to the integration and post-processing module.
  • The cosine similarity between two deep-feature vectors A and B is $\cos(A, B) = \frac{\sum_i A_i B_i}{\sqrt{\sum_i A_i^2}\,\sqrt{\sum_i B_i^2}}$, where $A_i$ and $B_i$ represent the components of vectors A and B, respectively.
  • The selection of attributes is determined by the final similarity.
  • The final similarity is the sum of the Hamming-distance output value and the cosine-similarity output value.
  • The Hamming-distance output ranges from 0 to 1, and the cosine-similarity output also ranges from 0 to 1, so the final similarity ranges from 0 to 2; a minimal sketch of this score follows.
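  • The sketch below assumes the Hamming output is mapped to [0, 1] as 1 - d/n, where d is the Hamming distance and n the code length, and that deep features are non-negative (post-rectification) so the cosine similarity also lies in [0, 1]; the patent states only that both outputs lie in [0, 1].

```python
def final_similarity(hamming_dist: int, code_length: int, cosine_sim: float) -> float:
    """Final similarity in [0, 2]: Hamming term plus cosine term."""
    hamming_sim = 1.0 - hamming_dist / code_length  # 1.0 means identical hash codes
    return hamming_sim + cosine_sim
```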
  • The CBIR-based labeling module also compares the input image with similar images obtained from the Internet. If the similarity is higher than the predetermined threshold, the external attribute retrieval module is triggered to mine attributes from the content of the web pages.
  • The threshold can be determined by a precision-recall metric.
  • Precision (P) measures the relevance of the results, and recall (R) measures how many of the correct, relevant results are returned.
  • Fig. 4 shows a precision-recall (PR) curve according to an embodiment of the present invention, which shows the balance between precision and recall for different thresholds.
  • a high area under the curve (AUC) represents a high recall rate and a high precision rate.
  • A high precision rate corresponds to a low false-positive (Fp) rate, and a high recall rate corresponds to a low false-negative (Fn) rate.
  • Precision (P) is determined by the formula $P = \frac{T_p}{T_p + F_p}$, where $T_p$ is the number of true positives and $F_p$ is the number of false positives.
  • Recall (R) is determined by the formula $R = \frac{T_p}{T_p + F_n}$, where $T_p$ is the number of true positives and $F_n$ is the number of false negatives.
  • The present invention can find the threshold that gives the best precision-recall behavior (the highest area under the PR curve); one illustrative way to pick such an operating point is sketched below.
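  • The patent does not spell out the selection procedure, so the following Python sketch takes one common reading: sweep candidate thresholds on a labeled validation set and keep the one with the best precision/recall trade-off, scored here by F1. The function and variable names are hypothetical.

```python
import numpy as np

def best_threshold(scores, labels, thresholds):
    """Sweep thresholds and keep the one maximizing F1 = 2PR/(P+R).

    scores: final similarities for validation pairs.
    labels: 1 where the pair is truly similar, else 0.
    """
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    best_t, best_f1 = None, -1.0
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))   # true positives
        fp = np.sum(pred & (labels == 0))   # false positives
        fn = np.sum(~pred & (labels == 1))  # false negatives
        p = tp / (tp + fp) if tp + fp else 0.0  # precision P = Tp / (Tp + Fp)
        r = tp / (tp + fn) if tp + fn else 0.0  # recall    R = Tp / (Tp + Fn)
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t
```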
  • The general image database stores training images. It is initially constructed from images obtained from known, reliable sources such as e-commerce sites, and it continues to be built using query images and similar images obtained from the Internet. In addition, the visual features of the images are stored.
  • the general description database stores the descriptions, attributes or tags of related training images.
  • The general description database is likewise initially constructed from known, reliable sources such as e-commerce sites, and it continues to be built using the tags and descriptions of query images and similar images obtained from the Internet. If the user keeps some recommended tags and deletes others, the database updates the related images accordingly.
  • The external attribute retrieval module uploads the input image and sends a request to a search engine on the Internet to retrieve the top k similar images.
  • The images should come from predefined reliable sources and fall within a predefined time window, to ensure that they are relevant and not out of date.
  • These similar images are passed to the feature extraction module and the CBIR-based labeling module to determine whether the similarity is high. If it is, attributes, namely the external attributes, are mined from the content of the websites hosting the similar images. The mined attributes are classified (e.g., color, pattern) and fed to the integration and post-processing module; a sketch of such a mining pass follows this item.
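  • The following Python sketch shows what such a text-mining pass might look like: match the hosting page's text against a small controlled vocabulary grouped by category. The vocabulary and the whole-word matching rule are illustrative assumptions; the patent does not specify them.

```python
import re

# Hypothetical controlled vocabulary; the patent classifies mined
# attributes into categories such as color and pattern.
ATTRIBUTE_VOCAB = {
    "color":   {"red", "blue", "black", "white", "green"},
    "pattern": {"striped", "floral", "plaid", "solid", "polka dot"},
}

def mine_attributes(page_text: str) -> dict:
    """Keep vocabulary terms that occur as whole words in the page text."""
    text = page_text.lower()
    return {
        category: sorted(
            term for term in vocab
            if re.search(rf"\b{re.escape(term)}\b", text)
        )
        for category, vocab in ATTRIBUTE_VOCAB.items()
    }

# Example: mine_attributes("A blue striped shirt ...")
# -> {"color": ["blue"], "pattern": ["striped"]}
```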
  • The integration and post-processing module integrates the classification attributes from the classifier in the feature extraction module, the internal attributes from the CBIR-based labeling module, and the external attributes from the external attribute retrieval module to obtain the final annotation of the image.
  • The internal and external attributes can be integrated so that both label the input image. If there is any conflict between the attributes of the similar images returned from the general description database (internal attributes) and those of the similar images returned from the Internet (external attributes), the attributes of whichever similar image has the higher similarity to the input image are used to annotate it.
  • The input image can also be labeled with the classification attributes obtained by the classifier of the feature extraction module; a sketch of this integration rule follows.
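  • Read together, the integration rule might be sketched as follows, where internal and external map each attribute category to an (attribute, similarity) pair and classification_attrs maps category to attribute; these data shapes are assumptions for illustration.

```python
def integrate(classification_attrs: dict, internal: dict, external: dict) -> dict:
    """Merge attributes per step S8.

    On a per-category conflict between internal and external attributes,
    keep the attribute whose source image scored the higher similarity.
    If neither source returned anything, fall back to the classifier output.
    """
    if not internal and not external:
        return dict(classification_attrs)
    final = {}
    for category in set(internal) | set(external):
        candidates = [src[category] for src in (internal, external) if category in src]
        final[category] = max(candidates, key=lambda pair: pair[1])[0]
    # Classification attributes fill in any categories not covered above.
    for category, attr in classification_attrs.items():
        final.setdefault(category, attr)
    return final
```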
  • Fig. 5 shows a schematic diagram of a human-computer interaction interface according to an embodiment of the present invention.
  • The output attributes of the image allow users to query, view, delete, and add attributes.
  • Modifications can be made by a single user or by multiple users through majority voting (the number of users involved is configurable), and they can override the original output of the integration and post-processing module. Modifications are fed back to the general description database. Simple graphical interfaces are provided for users and institutions.
  • Fig. 6 shows an example of the application of the active learning automatic image annotation system and method of the present invention in the field of clothing.
  • The annotations obtained by the method and system of the present invention can be applied in the following applications/systems for sorting and searching clothing images of interest in a large database:
  • The present invention can periodically capture and analyze data from different predetermined data sources, so that users can receive updated photos/pictures through a system/application that has adopted the present invention.
  • the system will show the matching objects classified by style to the user.
  • The present invention also provides a computer device that includes a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processor executes the program, it implements the steps of the active learning automatic image annotation method shown in Fig. 1.
  • the present invention also provides a computer-readable storage medium on which computer instructions are stored. When the computer instructions are executed by a processor, each step of the active learning automatic image labeling method as shown in FIG. 1 is realized.
  • the present invention uses an active learning mechanism.
  • the present invention can retrieve the latest information and images from the Internet to update and enrich the system data.
  • The invention can also mine attributes from text. Even without relevant keywords, image tags can be mined from all relevant text content stored in the general description database or obtained from the Internet.
  • The present invention uses human-machine collaboration to modify labels. If multiple users suggest changes to the annotations provided by the system, the changes can be fed back to the system and the final changes voted on.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an active learning automatic image annotation system and method. The method comprises: step S1, providing an input image; step S2, extracting visual features of the input image and obtaining a classification attribute; step S3, using the visual features to search for similar images in a universal image database, and obtaining an internal attribute from the universal description database; step S4, concurrently with step S2, searching the Internet for images similar to the input image; step S5, extracting visual features of the similar images; step S6, comparing the visual features of the similar images obtained in step S5 with those of the input image; step S7, if the compared similarity is higher than a predefined threshold, obtaining an external attribute from the Internet; and step S8, integrating the classification attribute, the internal attribute, and the external attribute to obtain a final annotation for the input image. The present invention saves time and labor in database updating and ensures that the database information is current.
PCT/CN2019/080062 2019-03-28 2019-03-28 Active learning automatic image annotation system and method WO2020191706A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/080062 WO2020191706A1 (fr) 2019-03-28 2019-03-28 Active learning automatic image annotation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/080062 WO2020191706A1 (fr) 2019-03-28 2019-03-28 Active learning automatic image annotation system and method

Publications (1)

Publication Number Publication Date
WO2020191706A1 true WO2020191706A1 (fr) 2020-10-01

Family

ID=72610799

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/080062 WO2020191706A1 (fr) 2019-03-28 2019-03-28 Active learning automatic image annotation system and method

Country Status (1)

Country Link
WO (1) WO2020191706A1 (fr)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542067A (zh) * 2012-01-06 2012-07-04 上海交通大学 Automatic image semantic annotation method based on scale learning and associated label propagation
CN102902821A (zh) * 2012-11-01 2013-01-30 北京邮电大学 Method and device for high-level semantic image annotation and retrieval based on network hot topics
CN105701502A (zh) * 2016-01-06 2016-06-22 福州大学 Automatic image annotation method based on Monte Carlo data balancing
CN108897778A (zh) * 2018-06-04 2018-11-27 四川创意信息技术股份有限公司 Image annotation method based on multi-source big data analysis

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699261A (zh) * 2020-12-28 2021-04-23 大连工业大学 Automatic clothing image generation system and method

Similar Documents

Publication Publication Date Title
US10650188B2 (en) Constructing a narrative based on a collection of images
Jing et al. Visual search at pinterest
Wang et al. Annotating images by mining image search results
Wu et al. Tag completion for image retrieval
JP4108961B2 (ja) Image retrieval system and method
US8150170B2 (en) Statistical approach to large-scale image annotation
US20070286528A1 (en) System and Method for Searching a Multimedia Database using a Pictorial Language
US8606780B2 (en) Image re-rank based on image annotations
WO2020056977A1 (fr) Method and device for selectively pushing knowledge points, and computer-readable storage medium
Lee et al. MAP-based image tag recommendation using a visual folksonomy
CN105426529A (zh) 基于用户搜索意图定位的图像检索方法及系统
Wang et al. Duplicate-search-based image annotation using web-scale data
Zhang et al. On-the-fly table generation
US10650191B1 (en) Document term extraction based on multiple metrics
US20050038805A1 (en) Knowledge Discovery Appartus and Method
Long et al. Relevance ranking for vertical search engines
González et al. NMF-based multimodal image indexing for querying by visual example
WO2020191706A1 (fr) Active learning automatic image annotation system and method
CN111753861B (zh) Active learning automatic image annotation system and method
Liu et al. Clustering-based topical Web crawling using CFu-tree guided by link-context
Yu et al. A Multi-Directional Search technique for image annotation propagation
Barnard et al. Recognition as translating images into text
US8875007B2 (en) Creating and modifying an image wiki page
Gilbert et al. A picture is worth a thousand tags: automatic web based image tag expansion
TWI725568B (zh) Information processing system, information processing method, and non-transitory computer-readable recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19921979

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19921979

Country of ref document: EP

Kind code of ref document: A1
