CN106844785A - Saliency segmentation-based content-based image retrieval method


Info

Publication number
CN106844785A
Authority
CN
China
Prior art keywords
image
feature
training
foreground target
visual vocabulary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710151860.9A
Other languages
Chinese (zh)
Inventor
白琮
陈佳楠
黄玲
郝鹏翼
陈胜勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2017-03-15
Filing date: 2017-03-15
Publication date: 2017-06-13
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201710151860.9A, filed 2017-03-15
Publication of CN106844785A on 2017-06-13
Current legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/237 Lexical tools
    • G06F40/242 Dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

A saliency segmentation-based content-based image retrieval method comprises a training process and a testing process. The training process comprises the following steps: step 1, establish a visual vocabulary dictionary: analyze each image in the training image set in turn and establish a visual vocabulary dictionary for subsequent retrieval; step 2, segment the image by using its visual saliency features to obtain a foreground target image and a background region image; step 3, extract color features and texture features of the image from the foreground target image and the background region image respectively; step 4, on the basis of step 1, count the visual vocabulary distribution histogram of each picture in the database. The testing process comprises step 5: on the basis of steps 1 to 4, retrieval of a test image is realized. The invention provides a content-based image retrieval method based on saliency segmentation that effectively reduces the semantic gap and has higher accuracy.

Description

A content-based image retrieval method based on saliency segmentation
Technical field
The present invention relates to an image retrieval method, and more particularly to an image retrieval method based on visual saliency segmentation, belonging to the field of content-based image retrieval.
Background art
With the development of multimedia technology, the number of digital images has grown geometrically. How to quickly and accurately find the resources a user needs among vast image collections has become a pressing problem.
Content-based image retrieval (CBIR) technology allows a user to submit one picture and find other pictures with the same or similar content; it represents the mainstream development trend of image retrieval technology.
Josef Sivic et al. proposed the bag-of-words model in 2006. Its core idea is to detect keypoints over the entire image, extract local features at these keypoints, and then quantize the local features into "visual words". Each image can thus be expressed as a feature vector over the visual words. Mature text-retrieval techniques, such as scalable indexing and fast search, can then be borrowed to complete the retrieval.
The bag-of-words model is simple and can draw on mature text-retrieval techniques, but it also has certain defects. An image may contain both a foreground target and a background; if feature extraction is performed on the whole image, the main target may not be distinct enough, and effects such as overly salient secondary-target features can greatly degrade the retrieval result.
Summary of the invention
In order to overcome the semantic gap and the poor accuracy of existing image retrieval methods, the invention provides a content-based image retrieval method based on saliency segmentation that effectively reduces the semantic gap and has higher accuracy.
The technical solution adopted by the present invention to solve the technical problem is as follows:
A content-based image retrieval method based on saliency segmentation, the image retrieval method comprising a training process and a testing process;
The training process comprises the following steps:
Step 1, establish a visual vocabulary dictionary: analyze and process each image in the training image set in turn, and establish a visual vocabulary dictionary for subsequent retrieval;
Step 2, using the visual saliency features of the image, segment the image to obtain its foreground target image and background region image;
Step 3, extract the color features and texture features of the image from the foreground target image and the background region image respectively;
Step 4, on the basis of step 1, count the visual vocabulary distribution histogram of each picture in the database;
The testing process includes step 5: on the basis of steps 1 to 4, the test image is retrieved as follows:
Step 5.1: segment the test image by the method of step 2 to obtain its foreground target image and background region image;
Step 5.2: extract the color features and texture features of the test image's foreground target image and background region image by the method of step 3;
Step 5.3: count the visual vocabulary statistical histograms of the test image's foreground target image and background region image by the method of step 4;
Step 5.4: compute a distance metric between each feature of the test image obtained in steps 5.1, 5.2, and 5.3 and the corresponding feature of each image in the training image set;
Step 5.5: accumulate the distance metrics of every feature obtained in step 5.4 to obtain a total distance, denoted D1, D2, ... Di, where i is the number of images in the training image set;
Step 5.6: sort the training images in ascending order of D1, D2, ... Di and output them.
Through the above steps, retrieval of the test image is realized.
Further, in step 1, the steps of establishing the visual vocabulary dictionary are:
Step 1.1: convert the color images into grayscale images;
Step 1.2: obtain the feature vector representation set of each image in the training image set from the grayscale images of step 1.1, denoted F1, F2, ... Fm, where each Fi represents the feature vector representation set of one image and m is the number of images in the training image set; the feature vectors are multidimensional, with dimension N, N > 2;
The method for obtaining the feature vector representation set of each image in the training image set is the SIFT algorithm;
Step 1.3: cluster the feature vectors in the feature representation sets obtained in step 1.2 to obtain the representative feature vector of each feature category, yielding K categories and K representative feature vectors, where K is a positive integer;
The clustering method is the K-means clustering algorithm;
The representative feature vector of each feature category is the centroid of all feature vectors clustered into that category;
Step 1.4: obtain the visual vocabulary dictionary required for subsequent retrieval; the visual vocabulary dictionary consists of the representative feature vectors of the feature categories obtained in step 1.3.
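For illustration, a minimal sketch of steps 1.1 to 1.4 follows, assuming OpenCV's SIFT implementation and scikit-learn's KMeans as stand-ins for the SIFT and K-means algorithms named above; the function name and the default K = 30 (taken from the embodiment below) are illustrative assumptions, not part of the patent.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_visual_vocabulary(image_paths, k=30):
    """Steps 1.1-1.4: cluster SIFT descriptors of grayscale training images."""
    sift = cv2.SIFT_create()
    descriptors = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)    # step 1.1: grayscale
        _, desc = sift.detectAndCompute(gray, None)      # step 1.2: 128-D SIFT vectors
        if desc is not None:
            descriptors.append(desc)
    stacked = np.vstack(descriptors)
    kmeans = KMeans(n_clusters=k, n_init=10).fit(stacked)  # step 1.3: K categories
    return kmeans.cluster_centers_   # step 1.4: dictionary = K centroid vectors
```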
Further, in step 2, the segmentation steps for a single image are:
Step 2.1: compute the saliency map of the image; the saliency map is computed using a histogram comparison method based on regional correlation;
Step 2.2: using the saliency map obtained in step 2.1, run the image segmentation algorithm iteratively to obtain the foreground target image and the background region image of the image; the image segmentation algorithm is GrabCut.
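A sketch of this segmentation step is given below. OpenCV does not provide the histogram comparison saliency method named in step 2.1, so its spectral-residual saliency is used here as a stand-in; the GrabCut call itself matches step 2.2. The mean-saliency mask threshold and the function name are illustrative choices of this sketch.

```python
import cv2
import numpy as np

def saliency_segment(image, grabcut_iters=5):
    """Step 2: saliency map -> iterated GrabCut -> foreground/background images."""
    # Step 2.1 (stand-in): a saliency map with values in [0, 1].
    ok, sal = cv2.saliency.StaticSaliencySpectralResidual_create().computeSaliency(image)
    # Seed GrabCut: salient pixels as probable foreground, the rest probable background.
    mask = np.where(sal > sal.mean(), cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    # Step 2.2: run GrabCut iteratively from the saliency-derived mask.
    cv2.grabCut(image, mask, None, bgd_model, fgd_model,
                grabcut_iters, cv2.GC_INIT_WITH_MASK)
    fg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
    foreground = image * fg[:, :, None]        # foreground target image
    background = image * (1 - fg)[:, :, None]  # background region image
    return foreground, background
```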
In step 3, the feature extraction steps for a single image are:
Step 3.1: convert the color space of the foreground target image and the background region image to the HSV color space;
Step 3.2: extract color features from the H and S channels of the foreground target image by computing the statistical histograms of the H and S values;
Step 3.3: extract color features from the H and S channels of the background region image by computing the statistical histograms of the H and S values;
Step 3.4: extract texture features on the V channel of the foreground target image using the local binary pattern algorithm;
Step 3.5: extract texture features on the V channel of the background region image using the local binary pattern algorithm.
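The following sketch illustrates steps 3.1 to 3.5 for one image (it is run once on the foreground target image and once on the background region image). The bin counts (64 for H, 16 for S, 256 for LBP) are taken from the embodiment below; scikit-image's local_binary_pattern stands in for the LBP step, and the function name is an assumption.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def color_texture_features(bgr_image):
    """Steps 3.1-3.5: H/S color histograms and a V-channel LBP histogram."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)           # step 3.1
    h_hist = cv2.calcHist([hsv], [0], None, [64], [0, 180])    # H histogram (OpenCV H is 0-179)
    s_hist = cv2.calcHist([hsv], [1], None, [16], [0, 256])    # S histogram, 16 bins
    lbp = local_binary_pattern(hsv[:, :, 2], P=8, R=1)         # steps 3.4/3.5: V-channel LBP
    lbp_hist, _ = np.histogram(lbp, bins=256, range=(0, 256))  # 256-D texture feature
    return h_hist.ravel(), s_hist.ravel(), lbp_hist.astype(np.float64)
```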
In step 4, the steps for establishing the feature representation of a single image are:
Step 4.1: convert the foreground target image and the background region image obtained in step 2 into grayscale images;
Step 4.2: on the grayscale images obtained in step 4.1, using the visual vocabulary dictionary obtained in step 1, count the statistical histograms of the visual vocabulary for the foreground target image and the background region image respectively.
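A minimal sketch of steps 4.1 and 4.2 follows, assuming a dictionary of centroid vectors such as the one produced by the step-1 sketch above; assigning each SIFT descriptor to its nearest centroid is the usual bag-of-words quantization and is an assumption of this sketch, as is the function name.

```python
import cv2
import numpy as np

def visual_word_histogram(bgr_image, dictionary):
    """Steps 4.1-4.2: statistical histogram of visual words for one image."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)  # step 4.1
    _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
    dictionary = np.asarray(dictionary)
    hist = np.zeros(len(dictionary))
    if desc is not None:
        # Assign every descriptor to its nearest dictionary centroid and count.
        dists = np.linalg.norm(desc[:, None, :] - dictionary[None, :, :], axis=2)
        for w in dists.argmin(axis=1):
            hist[w] += 1
    return hist  # step 4.2: visual vocabulary statistical histogram
```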
In step 5.4, the distance metric used is the cosine distance.
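The cosine distance of step 5.4 can be written out explicitly. This small helper (the name and the epsilon guard against zero vectors are illustrative) computes 1 - (a·b)/(||a|| ||b||), so identical feature vectors have distance 0.

```python
import numpy as np

def cosine_distance(a, b, eps=1e-12):
    """Cosine distance between two feature vectors, as used in step 5.4."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
```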
By means of a visual saliency model, the present invention segments an image into a foreground target image and a background region image, then extracts global features such as color and texture and local features such as SIFT on these two classes of images respectively, and finally uses these features for image retrieval. This retrieval method represents the semantic content of the query image more accurately and is an effective attempt to reduce the semantic gap in retrieval.
The beneficial effects of the present invention are mainly as follows: the method combines an image saliency segmentation algorithm to segment a whole picture into a foreground target and a background, fuses the image features of the foreground target and the background, and completes retrieval by means of the bag-of-words model. Compared with the traditional bag-of-words retrieval algorithm, this method represents the semantic content of the query image more accurately and can effectively reduce the semantic gap.
Brief description of the drawings
Fig. 1 is the flow chart of the content-based image retrieval method based on saliency segmentation.
Specific embodiment
The invention will be further described below in conjunction with the accompanying drawings.
Referring to Fig. 1, a content-based image retrieval method based on saliency segmentation is described. The pictures in this embodiment are divided into 100 classes with 100 pictures per class. In each class, 20 pictures are randomly selected for training and the remaining 80 pictures are used for testing. The training images are processed with the content-based image retrieval method based on saliency segmentation and the test images are retrieved with it; the structural framework is shown in Fig. 1, and the specific operation steps include a training process and a testing process.
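For illustration only, the 20/80 class-wise random split can be sketched as follows; the function name, the dictionary-of-paths layout, and the fixed seed are assumptions of this sketch, not part of the patent.

```python
import random

def split_dataset(images_by_class, n_train=20, seed=0):
    """Randomly pick 20 training pictures per class; the remaining 80 are for testing."""
    rng = random.Random(seed)
    train, test = [], []
    for cls, paths in images_by_class.items():
        shuffled = rng.sample(paths, len(paths))  # class-wise random order
        train += shuffled[:n_train]
        test += shuffled[n_train:]
    return train, test
```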
The training process includes steps 1 to 4, specifically:
Step 1: establish a visual vocabulary dictionary.
Each image in the training image set is analyzed and processed in turn to establish a visual vocabulary dictionary for subsequent retrieval. The specific steps for establishing the visual vocabulary dictionary are:
Step 1.1: convert the color images into grayscale images.
Step 1.2: obtain the feature vector representation set of each image in the training image set from the grayscale images of step 1.1, denoted F1, F2, ... Fm, where each Fi represents the feature vector representation set of one image and m = 2000 is the number of images in the training image set; the feature vectors are 128-dimensional.
The method for obtaining the feature representation set of each image in the training image set is the SIFT (Scale Invariant Feature Transform) algorithm.
Step 1.3: cluster the feature vectors in the feature representation sets obtained in step 1.2 to obtain the representative feature vector of each feature category, yielding 30 categories and 30 representative feature vectors.
The clustering method is the K-means clustering algorithm.
The representative feature vector of each feature category is the centroid of all feature vectors clustered into that category.
Step 1.4: obtain the visual vocabulary dictionary required for subsequent retrieval. The visual vocabulary dictionary consists of the representative feature vectors of the feature categories obtained in step 1.3.
Step 2: using the visual saliency features of the image, segment it to obtain the foreground target image and the background region image. The specific steps for a single image are:
Step 2.1: compute the saliency map of the image. The saliency map is computed using a histogram comparison method based on regional correlation.
Step 2.2: using the saliency map obtained in step 2.1, run the image segmentation algorithm iteratively to obtain the foreground target image and the background region image of the image. The image segmentation algorithm is GrabCut.
Step 3: extract the color features and texture features of the image from the foreground target image and the background region image respectively. The specific steps for a single image are:
Step 3.1: convert the color space of the foreground target image and the background region image to the HSV color space.
Step 3.2: extract color features from the H and S channels of the foreground target image by computing the statistical histograms of the H and S values. The H-channel histogram has 64 dimensions and the S-channel histogram has 16 dimensions.
Step 3.3: extract color features from the H and S channels of the background region image by computing the statistical histograms of the H and S values. The H-channel histogram has 64 dimensions and the S-channel histogram has 16 dimensions.
Step 3.4: extract texture features on the V channel of the foreground target image using the local binary pattern (Local Binary Pattern) algorithm. The texture features have 256 dimensions.
Step 3.5: extract texture features on the V channel of the background region image using the local binary pattern (Local Binary Pattern) algorithm. The texture features have 256 dimensions.
Step 4: on the basis of step 1, count the visual vocabulary distribution histogram of each picture in the database. For a single image, the specific steps for establishing the feature representation are:
Step 4.1: convert the foreground target image and the background region image obtained in step 2 into grayscale images.
Step 4.2: on the grayscale images obtained in step 4.1, using the visual vocabulary dictionary obtained in step 1, count the statistical histograms of occurrences of the words of the visual vocabulary dictionary in the foreground target image and the background region image respectively.
The testing process includes step 5.
Step 5: on the basis of steps 1 to 4, retrieve the test image.
Step 5.1: segment the test image by the method of step 2 to obtain its foreground target image and background region image.
Step 5.2: extract the color features and texture features of the test image's foreground target image and background region image by the method of step 3.
Step 5.3: count the visual vocabulary statistical histograms of the test image's foreground target image and background region image by the method of step 4.
Step 5.4: compute a distance metric between each feature of the test image obtained in steps 5.1, 5.2, and 5.3 and the corresponding feature of each image in the training image set; the resulting distances are denoted DS, DHC, DSC, and DLBP respectively. The distance metric used is the cosine distance.
Step 5.5: the distance metrics of the features obtained in step 5.4 are weighted and accumulated to obtain the total distance, denoted D1, D2, ... Di, where i is the number of images in the training image set and Di = DS + DHC + DSC + 3*DLBP.
Step 5.6: sort the training images in ascending order of D1, D2, ... Di and output them.
Through the above steps, retrieval of the test image is realized.
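Steps 5.4 to 5.6 with the embodiment's weighting Di = DS + DHC + DSC + 3*DLBP can be sketched as follows. The feature-dictionary keys, the function name, and the cosine_distance helper from the earlier sketch are assumptions of this illustration, not names from the patent.

```python
# Weights from step 5.5: Di = DS + DHC + DSC + 3 * DLBP. For brevity this sketch
# keeps one vector per feature type; the patent computes each one for both the
# foreground target image and the background region image.
WEIGHTS = {"bow": 1.0, "h_hist": 1.0, "s_hist": 1.0, "lbp": 3.0}

def rank_training_images(query_feats, training_feats):
    """Steps 5.4-5.6: cosine distances, weighted fusion, ascending sort."""
    totals = []
    for idx, feats in enumerate(training_feats):   # one feature dict per image
        d_i = sum(w * cosine_distance(query_feats[name], feats[name])
                  for name, w in WEIGHTS.items())  # step 5.5: total distance Di
        totals.append((d_i, idx))
    totals.sort()                                  # step 5.6: ascending order of Di
    return [idx for _, idx in totals]
```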
The above specific description further explains the purpose, technical solution, and beneficial effects of the invention. It should be understood that the above is only a specific embodiment of the present invention, used to explain the present invention and not to limit its protection scope; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (6)

1. A content-based image retrieval method based on saliency segmentation, characterized in that: the image retrieval method includes a training process and a testing process;
The training process comprises the following steps:
Step 1, establish a visual vocabulary dictionary: analyze and process each image in the training image set in turn, and establish a visual vocabulary dictionary for subsequent retrieval;
Step 2, using the visual saliency features of the image, segment the image to obtain its foreground target image and background region image;
Step 3, extract the color features and texture features of the image from the foreground target image and the background region image respectively;
Step 4, on the basis of step 1, count the visual vocabulary distribution histogram of each picture in the database;
The testing process includes step 5: on the basis of steps 1 to 4, the test image is retrieved as follows:
Step 5.1: segment the test image by the method of step 2 to obtain its foreground target image and background region image;
Step 5.2: extract the color features and texture features of the test image's foreground target image and background region image by the method of step 3;
Step 5.3: count the visual vocabulary statistical histograms of the test image's foreground target image and background region image by the method of step 4;
Step 5.4: compute a distance metric between each feature of the test image obtained in steps 5.1, 5.2, and 5.3 and the corresponding feature of each image in the training image set;
Step 5.5: accumulate the distance metrics of every feature obtained in step 5.4 to obtain a total distance, denoted D1, D2, ... Di, where i is the number of images in the training image set;
Step 5.6: sort the training images in ascending order of D1, D2, ... Di and output them.
2. The content-based image retrieval method based on saliency segmentation according to claim 1, characterized in that: in step 1, the steps of establishing the visual vocabulary dictionary are:
Step 1.1: convert the color images into grayscale images;
Step 1.2: obtain the feature vector representation set of each image in the training image set from the grayscale images of step 1.1, denoted F1, F2, ... Fm, where each Fi represents the feature vector representation set of one image and m is the number of images in the training image set; the feature vectors are multidimensional, with dimension N, N > 2;
The method for obtaining the feature vector representation set of each image in the training image set is the SIFT algorithm;
Step 1.3: cluster the feature vectors in the feature representation sets obtained in step 1.2 to obtain the representative feature vector of each feature category, yielding K categories and K representative feature vectors, where K is a positive integer;
The clustering method is the K-means clustering algorithm;
The representative feature vector of each feature category is the centroid of all feature vectors clustered into that category;
Step 1.4: obtain the visual vocabulary dictionary required for subsequent retrieval; the visual vocabulary dictionary consists of the representative feature vectors of the feature categories obtained in step 1.3.
3. The content-based image retrieval method based on saliency segmentation according to claim 1 or 2, characterized in that: in step 2, the segmentation steps for a single image are:
Step 2.1: compute the saliency map of the image; the saliency map is computed using a histogram comparison method based on regional correlation;
Step 2.2: using the saliency map obtained in step 2.1, run the image segmentation algorithm iteratively to obtain the foreground target image and the background region image of the image; the image segmentation algorithm is GrabCut.
4. The content-based image retrieval method based on saliency segmentation according to claim 1 or 2, characterized in that: in step 3, the feature extraction steps for a single image are:
Step 3.1: convert the color space of the foreground target image and the background region image to the HSV color space;
Step 3.2: extract color features from the H and S channels of the foreground target image by computing the statistical histograms of the H and S values;
Step 3.3: extract color features from the H and S channels of the background region image by computing the statistical histograms of the H and S values;
Step 3.4: extract texture features on the V channel of the foreground target image using the local binary pattern algorithm;
Step 3.5: extract texture features on the V channel of the background region image using the local binary pattern algorithm.
5. The content-based image retrieval method based on saliency segmentation according to claim 1 or 2, characterized in that: in step 4, the steps for establishing the feature representation of a single image are:
Step 4.1: convert the foreground target image and the background region image obtained in step 2 into grayscale images;
Step 4.2: on the grayscale images obtained in step 4.1, using the visual vocabulary dictionary obtained in step 1, count the statistical histograms of the visual vocabulary for the foreground target image and the background region image respectively.
6. The content-based image retrieval method based on saliency segmentation according to claim 1 or 2, characterized in that: in step 5.4, the distance metric used is the cosine distance.
CN201710151860.9A 2017-03-15 2017-03-15 Saliency segmentation-based content-based image retrieval method Pending CN106844785A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710151860.9A CN106844785A (en) 2017-03-15 2017-03-15 Saliency segmentation-based content-based image retrieval method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710151860.9A CN106844785A (en) 2017-03-15 2017-03-15 Saliency segmentation-based content-based image retrieval method

Publications (1)

Publication Number Publication Date
CN106844785A 2017-06-13

Family

ID=59144602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710151860.9A Pending CN106844785A (en) 2017-03-15 2017-03-15 Saliency segmentation-based content-based image retrieval method

Country Status (1)

Country Link
CN (1) CN106844785A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020265A (en) * 2012-12-25 2013-04-03 深圳先进技术研究院 Image retrieval method and system
CN103064985A (en) * 2013-01-28 2013-04-24 北京理工大学 Priori knowledge based image retrieval method
CN105469096A (en) * 2015-11-18 2016-04-06 南京大学 Feature bag image retrieval method based on Hash binary code
CN106126572A (en) * 2016-06-17 2016-11-16 中国科学院自动化研究所 Image search method based on area validation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
汪启伟: "Research on image histogram features and their applications" (图像直方图特征及其应用研究), China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107357834A (en) * 2017-06-22 2017-11-17 浙江工业大学 Image retrieval method based on visual saliency fusion
CN108334627A (en) * 2018-02-12 2018-07-27 北京百度网讯科技有限公司 Search method, device, and computer equipment for new media content
CN110276281A (en) * 2019-06-10 2019-09-24 浙江工业大学 Screenshot image and text recognition and extraction method and system for mobile terminals
CN111415357A (en) * 2020-03-19 2020-07-14 长光卫星技术有限公司 Portable shadow extraction method based on color image
CN111415357B (en) * 2020-03-19 2023-04-07 长光卫星技术股份有限公司 Portable shadow extraction method based on color image
CN115131589A (en) * 2022-08-31 2022-09-30 天津艺点意创科技有限公司 Image generation method for intelligent design of Internet literary works
CN115131589B (en) * 2022-08-31 2022-11-22 天津艺点意创科技有限公司 Image generation method for intelligent design of Internet literary works
CN117874278A (en) * 2024-03-11 2024-04-12 盛视科技股份有限公司 Image retrieval method and system based on multi-region feature combination
CN117874278B (en) * 2024-03-11 2024-05-28 盛视科技股份有限公司 Image retrieval method and system based on multi-region feature combination

Similar Documents

Publication Publication Date Title
CN106844785A (en) Saliency segmentation-based content-based image retrieval method
CN104199931B (en) Consistent semantic extraction method for trademark images and trademark retrieval method
CN107357834A (en) Image retrieval method based on visual saliency fusion
CN109409384A (en) Image recognition method, device, medium, and equipment based on fine-grained images
CN107103326A (en) Co-saliency detection method based on superpixel clustering
CN106021603A (en) Garment image retrieval method based on segmentation and feature matching
Wang et al. Improving human action recognition by non-action classification
CN103279738B (en) Automatic identification method and system for vehicle logo
Wang et al. Cattle face recognition method based on parameter transfer and deep learning
Tian et al. Natural scene text detection with MC–MR candidate extraction and coarse-to-fine filtering
CN107330027A (en) Weakly supervised deep station-logo detection method
CN105678244B (en) Near-duplicate video retrieval method based on an improved edit distance
CN113112498B (en) Grape leaf spot identification method based on a fine-grained generative adversarial network
Wang et al. S3D: Scalable pedestrian detection via score scale surface discrimination
CN104156690A (en) Gesture recognition method based on image space pyramid bag of features
CN111507413A (en) City management case image recognition method based on dictionary learning
CN103985130A (en) Image significance analysis method for complex texture images
CN108268875A (en) Automatic image semantic annotation method and device based on data smoothing
CN114842507A (en) Reinforced pedestrian attribute identification method based on group optimization reward
Xu et al. A robust hierarchical detection method for scene text based on convolutional neural networks
CN108805183A (en) Image classification method fusing locally aggregated descriptors and local uniform encoding
CN109145140A (en) Image retrieval method and system based on hand-drawn outline matching
CN106066887B (en) Fast retrieval and analysis method for advertisement image sequences
CN104537392A (en) Object detection method based on discriminative semantic component learning
CN110363164A (en) Unified video analysis method based on LSTM temporal consistency

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170613