CN106886783B - Image retrieval method and system based on regional characteristics - Google Patents

Image retrieval method and system based on regional characteristics

Info

Publication number
CN106886783B
CN106886783B (application CN201710048176.8A)
Authority
CN
China
Prior art keywords
image
features
visual
visual words
retrieval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710048176.8A
Other languages
Chinese (zh)
Other versions
CN106886783A (en)
Inventor
王生进
刘紫琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201710048176.8A priority Critical patent/CN106886783B/en
Publication of CN106886783A publication Critical patent/CN106886783A/en
Application granted granted Critical
Publication of CN106886783B publication Critical patent/CN106886783B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Abstract

The invention provides an image retrieval method and system based on regional features. The method comprises the following steps: S1, dividing an input image to be retrieved into a plurality of sub-region images; S2, extracting a region feature from each sub-region image and quantizing each region feature into a corresponding visual word; and S3, traversing the visual words corresponding to the region features, retrieving the number of those visual words contained in each template image from a database inverted list, and taking the template image containing the most visual words as the retrieval result image. By partitioning the image, extracting the region feature of each sub-region image, and matching on region features, the method extracts far fewer features than existing local-feature retrieval does, which improves image retrieval efficiency and gives better generality and scalability.

Description

Image retrieval method and system based on regional characteristics
Technical Field
The present invention relates to the field of image retrieval technologies, and in particular, to an image retrieval method and system based on regional features.
Background
Content-based image search is a key technology for the deep utilization of massive data in the big-data era and a research hotspot in computer vision and multimedia, with important research significance and practical value in daily life. Surveillance cameras are now deployed throughout public spaces; image search can locate a specific target in massive amounts of surveillance video and provide key clues for police investigations. Image search also makes daily life more intelligent and convenient: a user can photograph a favorite garment or article at any time and then search an online shopping mall for related goods from the picture.
The most popular framework for image search is the bag-of-words model based on local invariant features. The invariance of local features handles complex situations such as occlusion and viewpoint change well, so the model is widely used in image recognition, but local features are not applicable in every situation. Local keypoints are typically detected at edges or corners and capture the detailed characteristics of rigid objects in an image, such as buildings and printed patterns; for rigid objects, the local-feature bag-of-words model therefore achieves good performance. However, if the image content has smooth texture, as with sculptures and mollusks, few keypoints can be detected; keypoint-based local features then express the image content poorly, and retrieval performance on such pictures suffers. Moreover, because each image contains thousands of local features, querying is slow when the number of images is large, and when the database is very large, for example hundreds of millions of pictures, the memory overhead of the inverted list becomes enormous, which limits the scalability of the local-feature bag-of-words model.
Disclosure of Invention
The present invention provides a method and system for region feature based image retrieval that overcomes, or at least partially solves, the above-mentioned problems.
According to an aspect of the present invention, there is provided an image retrieval method based on region features, including:
s1, dividing the input image to be retrieved into a plurality of subarea images;
s2, extracting the area characteristic of each sub-area image, and quantizing each area characteristic into a corresponding visual word;
and S3, traversing the visual words corresponding to the regional features, searching the number of the visual words contained in each template image in a database inverted list, and taking the template image containing the most visual words as a search result image.
According to another aspect of the present invention, there is also provided an image retrieval system based on region features, including:
the image dividing module is used for dividing an input image to be retrieved into a plurality of subarea images;
the extraction and quantization module is used for extracting the regional characteristics of each sub-region image and quantizing each regional characteristic into a corresponding visual word;
and the retrieval determining module is used for traversing the visual words corresponding to the regional features, retrieving the number of the visual words contained in each template image from the inverted list of the database, and determining the template image containing the most visual words as the retrieval result image.
The invention has the following beneficial effects. The image is partitioned, the region feature of each sub-region image is extracted, and the retrieval result is obtained by matching region features. Compared with existing retrieval by local features, the number of extracted region features is greatly reduced, so image retrieval efficiency is improved and generality and scalability are better. Compared with global features, the retrieval accuracy of the method is higher while retrieval efficiency is not much different. The method thus combines the advantages of global and local features, maintaining good retrieval performance and high retrieval efficiency during image retrieval.
Drawings
FIG. 1 is a flowchart of an image retrieval method based on region features according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image retrieval system based on region features according to another embodiment of the present invention;
FIG. 3 is a block diagram illustrating an embodiment of an extraction quantization module;
fig. 4 is a schematic diagram of a specific block diagram of a retrieval determining module according to another embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Referring to fig. 1, an image retrieval method based on regional features according to an embodiment of the present invention includes:
and S1, dividing the input image to be retrieved into a plurality of subarea images.
In this embodiment, the image to be queried is first divided into a plurality of sub-region images; specifically, the image to be retrieved is uniformly divided using a superpixel segmentation method. In practical application, the number of sub-region images must balance retrieval accuracy against retrieval speed; it is usually 20.
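The division in step S1 can be sketched as follows. The patent uses a superpixel segmentation method; as a simplified stand-in, this sketch divides the image into a uniform 4 x 5 grid of bounding boxes, which likewise yields the usual 20 sub-regions. The function name and grid shape are illustrative assumptions, not from the patent.

```python
def divide_into_regions(height, width, n_rows=4, n_cols=5):
    """Divide an image of the given size into n_rows * n_cols sub-region
    bounding boxes (top, left, bottom, right).

    A superpixel method such as SLIC would follow image content instead of
    a fixed grid; this uniform grid is a simplified stand-in.
    """
    regions = []
    for r in range(n_rows):
        for c in range(n_cols):
            top = r * height // n_rows
            bottom = (r + 1) * height // n_rows
            left = c * width // n_cols
            right = (c + 1) * width // n_cols
            regions.append((top, left, bottom, right))
    return regions

boxes = divide_into_regions(600, 800)
print(len(boxes))  # 20 sub-regions
```

In a real implementation the grid would be replaced by a superpixel segmentation routine that returns a label map rather than rectangles.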
And S2, extracting the area characteristics of each subarea image and quantizing each area characteristic into a corresponding visual word.
After the image to be retrieved is divided in step S1, this step extracts the region feature of each sub-region image. There are many types of region features; in this embodiment they mainly comprise deep learning features and texture features, so this step mainly extracts the deep learning feature and the texture feature of each sub-region image. Specifically, a Siamese network is trained and used to extract the deep learning feature of each sub-region image, a deep-learning-feature codebook is generated from the extracted features, and the texture feature of each sub-region image of the image to be retrieved is extracted.
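Quantizing a region feature into a visual word is, in the usual bag-of-words formulation, a nearest-centroid assignment against the codebook. The sketch below works under that assumption; the function name, the squared-Euclidean distance, and the toy 2-D codebook are illustrative, since the patent does not specify the distance measure.

```python
def quantize(feature, codebook):
    """Assign a feature vector to the index of its nearest codebook
    centroid; that index serves as the visual-word id."""
    def sq_dist(a, b):
        # squared Euclidean distance between two equal-length vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(feature, codebook[i]))

# Toy codebook of three 2-D centroids, e.g. from k-means over training features.
codebook = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
print(quantize([0.9, 1.2], codebook))  # nearest to centroid 1
```

Because many features map to the same centroid, this reproduces the many-to-one relationship between deep learning features and visual words described above.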
Each extracted deep learning feature is then quantized into a corresponding visual word. The relationship between deep learning features and visual words is many-to-one, i.e. a plurality of deep learning features may correspond to one visual word, and the correspondence may be stored in a database. Before retrieval, each template image in the standard database is uniformly divided into a plurality of sub-region images by the same superpixel segmentation method, the deep learning feature and the texture feature of each sub-region image of each template image are extracted, and each extracted deep learning feature is quantized into a corresponding visual word. The visual words, the template-image identifiers corresponding to the visual words, and the texture features of the sub-region images are stored in an inverted list of the database, indexed by visual word.
In this embodiment, the region feature of each sub-region image of the image to be retrieved is extracted, and each sub-region image corresponds to one region feature. Therefore, if the image to be retrieved is divided into 20 sub-region images, 20 region features are extracted for it. Compared with extracting the many local features of an image, this greatly reduces the number of features, and the memory overhead of the database inverted list is accordingly greatly reduced.
And S3, traversing the visual words corresponding to the region features, retrieving the number of those visual words contained in each template image from the database inverted list, and taking the template image containing the most visual words as the retrieval result image.
Step S2 extracts the deep learning feature and the texture feature of each sub-region image of the image to be retrieved and quantizes each deep learning feature into a corresponding visual word, yielding a set of visual words for the image to be retrieved. Each of these visual words is looked up in the inverted list of the database, traversing the set once, and the number of query visual words contained in each template image is recorded; likewise, the texture features of the sub-region images of the image to be retrieved are traversed once in the inverted list, and the number of query texture features contained in each template image is recorded. From the number of visual words and the number of texture features contained in each template image, a score is computed in a predetermined way, and the template image with the largest score is determined to be the retrieval result image. Specifically, in this embodiment a weighted calculation is performed on the two counts using a preset weight coefficient for visual words and a preset weight coefficient for texture features, and the template image with the largest weighted value is determined to be the final retrieval result image.
The image retrieval process of this embodiment is illustrated with a specific example. Suppose an image to be retrieved is divided into 20 sub-region images by the superpixel segmentation method, and the deep learning feature and texture feature of each are extracted, giving 20 deep learning features and 20 texture features. Each deep learning feature is quantized into a corresponding visual word; since several deep learning features may correspond to the same visual word, the 20 features may be quantized into, say, 15 visual words, so the image to be retrieved corresponds to 15 visual words and 20 texture features. Suppose the inverted list of the database covers 100 numbered template images, each with its own visual words and texture features. Each of the 15 query visual words is traversed once in the inverted list; after all are traversed, suppose template image 1 contains 8 of the query visual words, template image 2 contains 13, template image 3 contains 11, and so on, giving the number of query visual words contained in every template image in the inverted list. Similarly, traversing the texture features of each sub-region image of the image to be retrieved in the inverted list gives, for each template image, the number of query texture features it contains.
Once the visual-word count and texture-feature count of every template image in the inverted list are known, a weighted calculation is performed on the two counts using the preset weight coefficient of the visual words and the preset weight coefficient of the texture features, and the template image with the largest weighted value is determined to be the final retrieval result image.
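The count-and-score procedure above can be sketched as follows. The weight values and the exact way a texture "match" is decided are illustrative assumptions, since the patent only states that the two counts are combined with preset weight coefficients.

```python
from collections import defaultdict

def score_templates(query_words, query_textures, inverted,
                    w_word=0.7, w_texture=0.3):
    """Return the template image with the highest weighted score.

    `inverted` maps visual word -> [(template_image_id, texture_feature)].
    A template scores one word hit per query visual word it contains, and
    one texture hit when that region's texture matches a query texture.
    The weights 0.7/0.3 are illustrative, not from the patent.
    """
    word_hits = defaultdict(int)
    texture_hits = defaultdict(int)
    for word in query_words:                   # one pass over the query words
        for image_id, texture in inverted.get(word, []):
            word_hits[image_id] += 1
            if texture in query_textures:
                texture_hits[image_id] += 1
    scores = {img: w_word * word_hits[img] + w_texture * texture_hits[img]
              for img in word_hits}
    return max(scores, key=scores.get) if scores else None

inverted = {1: [("template_A", "t1"), ("template_B", "t2")],
            2: [("template_A", "t3")]}
best = score_templates([1, 2], {"t1"}, inverted)
print(best)  # template_A: 2 word hits and 1 texture hit
```

Only templates sharing at least one visual word with the query are ever scored, which is the efficiency benefit of the inverted list.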
In this embodiment retrieval is performed via region features, and relatively few region features are extracted from the image to be retrieved, so retrieval is faster than when local features are used. Region features are also easier to extract from images whose content has smooth texture, so the method is especially suitable for matching and retrieving such images, overcoming the weakness that local features are unsuitable for smooth-texture retrieval. In addition, because multiple region features are extracted from the image to be retrieved, retrieval accuracy is higher than with global-feature retrieval. The method of this embodiment therefore combines the advantages of global and local features, maintains good retrieval performance and high retrieval speed during retrieval, and has wider applicability than either local-feature or global-feature image retrieval.
In addition, the deep learning feature and the texture feature of the image to be retrieved are extracted and combined, and retrieving the image with these two kinds of region feature can improve the accuracy of image retrieval.
Referring to fig. 2, an image retrieval system based on regional features according to another embodiment of the present invention mainly includes an image dividing module 21, an extraction and quantization module 22, a retrieval determining module 23, and a storage module 24.
The image dividing module 21 is configured to divide an input image to be retrieved into a plurality of sub-region images;
the extraction and quantization module 22 is configured to extract a region feature of each sub-region image, and quantize each region feature into a corresponding visual word;
and the retrieval determining module 23 is configured to traverse the visual word corresponding to each region feature, retrieve the number of the visual words contained in each template image from the inverted list of the database, and determine the template image containing the largest number of the visual words as the retrieval result image.
The image dividing module 21 is specifically configured to uniformly divide an image to be retrieved into a plurality of sub-region images according to a super-pixel segmentation method.
Referring to fig. 3, the extraction and quantization module 22 specifically includes an extraction unit 221 and a quantization unit 222, where the extraction unit 221 is configured to extract, for each uniformly divided sub-region image, a deep learning feature and a texture feature of each sub-region image, so as to obtain a plurality of deep learning features and a plurality of texture features;
the quantizing unit 222 is configured to quantize each deep learning feature into a corresponding visual word, so as to obtain a visual word corresponding to each sub-region image of the image to be retrieved.
The image dividing module 21 is further configured to:
dividing each template image in the database into a plurality of subarea images;
the extraction quantization module 22 is further configured to:
extracting the deep learning feature and the texture feature of each subregion image, and quantizing the deep learning feature corresponding to each subregion image into corresponding visual words;
and the storage module 24 is configured to store the visual words, the template image identifiers corresponding to the visual words, and the texture features of the sub-region images in an inverted list of the database, indexed by visual word.
Referring to fig. 4, the retrieval determining module 23 specifically includes a first retrieval subunit 231, a second retrieval subunit 232, a calculating unit 233, and a determining unit 234;
a first retrieving subunit 231, configured to retrieve, from the inverted table of the database, the number of visual words included in each template image for the visual word corresponding to each sub-region image of the image to be retrieved;
the second retrieval subunit 232 is configured to retrieve, from the inverted list of the database, the number of query texture features contained in each template image according to the texture feature corresponding to each sub-region image of the image to be retrieved;
a calculating unit 233, configured to calculate, according to the number of visual words and the number of texture features included in each template image, corresponding numerical values in a predetermined calculation manner;
a determining unit 234, configured to determine the template image with the largest value as the search result image.
Wherein, the calculating unit 233 is used for:
and according to the weight coefficient of a preset visual word and the weight coefficient of a preset texture feature, carrying out weight calculation on the number of the visual words and the number of the texture feature to obtain a numerical value after weight calculation.
According to the image retrieval method and system based on the regional characteristics, the images are partitioned, the regional characteristics of each regional image are extracted, the retrieval result is obtained through the matching of the regional characteristics, compared with the existing retrieval method through the local characteristics, the number of the extracted regional characteristics is greatly reduced compared with the number of the local characteristics, the image retrieval efficiency is improved, the universality and the expansibility are better, compared with the global characteristics, the retrieval accuracy of the method is higher, and the retrieval efficiency is not greatly different from the global characteristics; meanwhile, the deep learning characteristic and the texture characteristic of the image to be retrieved are extracted, the deep learning characteristic and the texture characteristic are combined, the image is retrieved by utilizing the two region characteristics, and the accuracy of image retrieval can be improved.
Finally, the above is only a preferred embodiment of the present application and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (6)

1. An image retrieval method based on regional features is characterized by comprising the following steps:
s1, dividing the input image to be retrieved into a plurality of subarea images;
s2, extracting the area characteristic of each sub-area image, and quantizing each area characteristic into a corresponding visual word;
s3, traversing the visual words corresponding to each regional characteristic, searching the number of the visual words contained in each template image in a database inverted list, and taking the template image containing the most visual words as a search result image;
the step S2 specifically includes:
s21, aiming at each uniformly divided sub-region image, extracting the deep learning feature and the texture feature of each sub-region image to obtain a plurality of deep learning features and a plurality of texture features;
s22, quantizing each deep learning feature into a corresponding visual word to obtain a visual word corresponding to each subregion image of the image to be retrieved, wherein the relationship between the deep learning feature and the visual word is as follows: the plurality of deep learning features correspond to a visual word;
the step S3 specifically includes:
s31, aiming at the visual words corresponding to each sub-region image of the image to be retrieved, retrieving the number of the visual words contained in each template image from the inverted list of the database;
s32, retrieving the number of each template image containing the texture features from the inverted list of the database according to the texture features corresponding to each character region image of the image to be retrieved;
s33, calculating corresponding numerical values according to the number of visual words and the number of texture features contained in each template image and a preset calculation mode;
s34, determining the template image with the largest numerical value as a retrieval result image;
the step S33 specifically includes:
and according to the weight coefficient of a preset visual word and the weight coefficient of a preset texture feature, carrying out weight calculation on the number of the visual words and the number of the texture feature to obtain a numerical value after weight calculation.
2. The image retrieval method based on regional features as claimed in claim 1, wherein the step S1 specifically includes:
and uniformly dividing the image to be retrieved into a plurality of subarea images according to a superpixel segmentation method.
3. The image retrieval method based on regional features of claim 2, wherein the step S1 is preceded by:
dividing each template image in the database into a plurality of sub-region images, and extracting the deep learning characteristic and the texture characteristic of each sub-region image;
quantizing the deep learning features corresponding to each subregion image into corresponding visual words;
and storing the visual words, the template image identifications corresponding to the visual words and the texture features of the sub-region images in an inverted list of a database according to the visual words as indexes.
4. An image retrieval system based on regional features, comprising:
the image dividing module is used for dividing an input image to be retrieved into a plurality of subarea images;
the extraction and quantization module is used for extracting the regional characteristics of each sub-region image and quantizing each regional characteristic into a corresponding visual word;
the retrieval determining module is used for traversing the visual words corresponding to the regional features, retrieving the number of the visual words contained in each template image from the inverted list of the database, and determining the template image containing the most visual words as a retrieval result image;
the retrieval determining module specifically comprises:
the first retrieval subunit is used for retrieving the number of visual words contained in each template image from an inverted list of a database aiming at the visual words corresponding to each sub-area image of the image to be retrieved;
the second retrieval subunit is used for retrieving, from the inverted list of the database, the number of query texture features contained in each template image according to the texture features corresponding to each sub-region image of the image to be retrieved;
the calculating unit is used for calculating corresponding numerical values according to a preset calculating mode according to the number of the visual words and the number of the texture features contained in each template image;
a determining unit, configured to determine a template image with a largest numerical value as a retrieval result image;
the extraction and quantization module specifically comprises:
the extraction unit is used for extracting the deep learning features and the texture features of each sub-region image to obtain a plurality of deep learning features and a plurality of texture features;
a quantizing unit, configured to quantize each deep learning feature into a corresponding visual word, to obtain a visual word corresponding to each sub-region image of the image to be retrieved, where a relationship between the deep learning feature and the visual word is: the plurality of deep learning features correspond to a visual word;
the computing unit is specifically configured to:
and according to the weight coefficient of a preset visual word and the weight coefficient of a preset texture feature, carrying out weight calculation on the number of the visual words and the number of the texture feature to obtain a numerical value after weight calculation.
5. The image retrieval system based on regional features of claim 4, wherein the image partitioning module is specifically configured to:
and uniformly dividing the image to be retrieved into a plurality of subarea images according to a superpixel segmentation method.
6. The regional feature based image retrieval system of claim 5, wherein the image partitioning module is further configured to:
dividing each template image in the database into a plurality of subarea images;
the extraction quantization module is further configured to:
extracting the deep learning feature and the texture feature of each subregion image, and quantizing the deep learning feature corresponding to each subregion image into corresponding visual words;
further comprising:
and the storage module is used for storing the visual words, the template image identifications corresponding to the visual words and the texture features of the sub-region images into an inverted list of the database according to the visual words as indexes.
CN201710048176.8A 2017-01-20 2017-01-20 Image retrieval method and system based on regional characteristics Active CN106886783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710048176.8A CN106886783B (en) 2017-01-20 2017-01-20 Image retrieval method and system based on regional characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710048176.8A CN106886783B (en) 2017-01-20 2017-01-20 Image retrieval method and system based on regional characteristics

Publications (2)

Publication Number Publication Date
CN106886783A CN106886783A (en) 2017-06-23
CN106886783B true CN106886783B (en) 2020-11-10

Family

ID=59175958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710048176.8A Active CN106886783B (en) 2017-01-20 2017-01-20 Image retrieval method and system based on regional characteristics

Country Status (1)

Country Link
CN (1) CN106886783B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956195B (en) * 2019-10-11 2023-06-02 平安科技(深圳)有限公司 Image matching method, device, computer equipment and storage medium
CN112488049B (en) * 2020-12-16 2021-08-24 哈尔滨市科佳通用机电股份有限公司 Fault identification method for foreign matter clamped between traction motor and shaft of motor train unit

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344928B (en) * 2007-07-12 2013-04-17 佳能株式会社 Method and apparatus for confirming image area and classifying image
US8781255B2 (en) * 2011-09-17 2014-07-15 Adobe Systems Incorporated Methods and apparatus for visual search
CN104252625A (en) * 2013-06-28 2014-12-31 河海大学 Sample adaptive multi-feature weighted remote sensing image method
US9424484B2 (en) * 2014-07-18 2016-08-23 Adobe Systems Incorporated Feature interpolation
CN105468596B (en) * 2014-08-12 2019-06-18 腾讯科技(深圳)有限公司 Picture retrieval method and device
CN105447026A (en) * 2014-08-27 2016-03-30 南京理工大学常熟研究院有限公司 Web information extraction method based on minimum weight communication determining set in multi-view image
US9405965B2 (en) * 2014-11-07 2016-08-02 Noblis, Inc. Vector-based face recognition algorithm and image search system
GB201511334D0 (en) * 2015-06-29 2015-08-12 Nokia Technologies Oy A method, apparatus, computer and system for image analysis
CN106021250A (en) * 2015-09-16 2016-10-12 展视网(北京)科技有限公司 Image semantic information retrieval method based on keyword
CN105426533B (en) * 2015-12-17 2019-07-19 电子科技大学 A kind of image search method merging space constraint information
CN105631471A (en) * 2015-12-23 2016-06-01 西安电子科技大学 Aurora sequence classification method with fusion of single frame feature and dynamic texture model
CN105760507B (en) * 2016-02-23 2019-05-03 复旦大学 Cross-module state topic relativity modeling method based on deep learning

Also Published As

Publication number Publication date
CN106886783A (en) 2017-06-23

Similar Documents

Publication Publication Date Title
US11048966B2 (en) Method and device for comparing similarities of high dimensional features of images
CN104376003B (en) A kind of video retrieval method and device
US8577131B1 (en) Systems and methods for visual object matching
US20130216143A1 (en) Systems, circuits, and methods for efficient hierarchical object recognition based on clustered invariant features
CN110503076B (en) Video classification method, device, equipment and medium based on artificial intelligence
CN106897295B (en) Hadoop-based power transmission line monitoring video distributed retrieval method
CN103678702A (en) Video duplicate removal method and device
CN110427517B (en) Picture searching video method and device based on scene dictionary tree and computer readable storage medium
US10489681B2 (en) Method of clustering digital images, corresponding system, apparatus and computer program product
CN104951562B (en) A kind of image search method based on VLAD dual adaptions
CN102236714A (en) Extensible markup language (XML)-based interactive application multimedia information retrieval method
CN111382620B (en) Video tag adding method, computer storage medium and electronic device
CN104486585A (en) Method and system for managing urban mass surveillance video based on GIS
CN113515656A (en) Multi-view target identification and retrieval method and device based on incremental learning
CN106886783B (en) Image retrieval method and system based on regional characteristics
EP3096243A1 (en) Methods, systems and apparatus for automatic video query expansion
CN105989063B (en) Video retrieval method and device
CN104850600A (en) Method and device for searching images containing faces
CN111666263A (en) Method for realizing heterogeneous data management in data lake environment
CN104866818A (en) Method and device for searching pictures including human faces
Juan et al. Content-based video retrieval system research
WO2017143979A1 (en) Image search method and device
JPWO2021145030A5 (en)
CN112069331A (en) Data processing method, data retrieval method, data processing device, data retrieval device, data processing equipment and storage medium
CN111382287A (en) Picture searching method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant