CN102495865A - Image annotation method combined with image internal space relation and visual symbiosis relation - Google Patents


Info

Publication number
CN102495865A
CN102495865A (application CN2011103827351A / CN201110382735A; granted as CN102495865B)
Authority
CN
China
Prior art keywords
image
symbiosis
vision
relation
Dirichlet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103827351A
Other languages
Chinese (zh)
Other versions
CN102495865B (en)
Inventor
郭乔进 (GUO Qiaojin)
李宁 (LI Ning)
丁轶 (DING Yi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN 201110382735 priority Critical patent/CN102495865B/en
Publication of CN102495865A publication Critical patent/CN102495865A/en
Application granted granted Critical
Publication of CN102495865B publication Critical patent/CN102495865B/en
Legal status: Expired - Fee Related; anticipated expiration.


Abstract

The invention discloses an image annotation method that combines intra-image spatial relations with visual co-occurrence relations. The method consists of image segmentation, feature extraction and an annotation algorithm, and comprises the following steps: first, segment the image into a number of regions using an over-segmentation method; second, extract the visual features of each region; and finally, build an image annotation classification model using context information such as the spatial position relations and the visual co-occurrence relations among the regions of an image. The advantage of the method is high annotation accuracy: by making full and effective use of two different kinds of context information within the image, the spatial position relations and the visual co-occurrence relations, the accuracy of image annotation can be improved.

Description

Image annotation method combining intra-image spatial relations and visual co-occurrence relations
Technical field
The present invention relates to an image annotation algorithm, and in particular to an image annotation method based on context information that combines intra-image spatial relations with visual co-occurrence relations. It belongs to the technical field of image processing.
Background technology
With the development of the Internet and digital imaging technology, image data is growing massively, which poses a great challenge to the organization, analysis, retrieval and management of images. Interest in the semantic concepts contained in images has reached an unprecedented scale, so an image management method that matches human perception and cognition and is based on understanding semantic concepts is urgently needed. By establishing a mapping between low-level visual features and high-level semantics, image annotation can, to a certain extent, solve the "semantic gap" problem in image retrieval. Image annotation can be divided into manual annotation and automatic annotation. Manual annotation is the most effective approach, but it is also very time-consuming and labor-intensive; at present many websites and organizations therefore encourage web users to annotate the images they provide. However, with the sharp increase in the number of images, manual annotation alone cannot meet the demand, which has also motivated research on automatic image annotation methods.
Hanbury et al. divide image annotation into three forms according to the form of the labels: keyword-based annotation, ontology-based annotation and natural-language annotation. Among current research on image annotation, keyword-based annotation is studied the most. Region-level annotation is a common form of keyword-based annotation, in which context information such as spatial relations and co-occurrence relations is widely used and effectively improves annotation accuracy. Its main pipeline consists of three parts: image segmentation, feature extraction and the annotation algorithm. Conventional image annotation methods, however, cannot exploit several different kinds of context information in an image at the same time.
Summary of the invention
Object of the invention: in view of the problems and shortcomings of the prior art, the present invention provides an image annotation method combining intra-image spatial relations and visual co-occurrence relations that can exploit several different kinds of context information in an image simultaneously.
Technical scheme: an image annotation method combining intra-image spatial relations and visual co-occurrence relations comprises the following steps:
(1) segment each image into a number of regions using an over-segmentation method; the goal of this segmentation is to separate the different objects in the image into different regions, so that each segmented region contains objects of only a single class;
(2) for each region in the image, extract feature information such as color, texture, shape and spatial position, forming a set of continuous-valued feature vectors;
(3) cluster all continuous-valued feature vectors with K-means to obtain K cluster centers, which form the vocabulary V;
(4) quantize the feature vector of each region with the vocabulary V, obtaining the visual keyword W of each pixel;
(5) for the spatial layout of all segmented regions in the image, consider the spatial relations between adjacent regions and build a first-order Markov network model;
(6) from the visual keywords of all regions in the image, compute the visual keyword histogram, and model the co-occurrence relations between visual keywords with latent Dirichlet allocation (LDA);
(7) combining the spatial relations and the co-occurrence relations between the regions in the image, build a probabilistic graphical model that couples the first-order Markov network with the latent Dirichlet allocation;
(8) on a manually annotated image data set, perform segmentation, feature extraction, quantization, model construction and training according to steps (1) to (7), obtaining a set of model parameters;
(9) for an unannotated image, initialize the model with the trained parameters and annotate each segmented region according to the extracted features and visual keywords.
The method of the present invention comprises four parts: image segmentation; feature extraction and quantization; construction of the first-order Markov network model and the latent Dirichlet allocation model; and training of the model parameters and classification of unannotated images. Steps (1) to (4) describe image segmentation, extraction of the visual features and construction of the visual vocabulary; steps (5) to (7) describe the construction of the first-order Markov network model and the latent Dirichlet allocation model and the combination of the two probabilistic graphical models; steps (8) to (9) describe how training and classification for image annotation are carried out on the combined probabilistic graphical model.
Beneficial effects: compared with the prior art, the notable advantage of the method of the present invention is that it can effectively combine two different kinds of context information in the image, spatial position information and visual co-occurrence relations, to improve the accuracy of image annotation. The spatial layout of adjacent regions in the image is described by a first-order Markov network model, the visual co-occurrence relations in the image are modeled by latent Dirichlet allocation, the two different probabilistic graphical models are combined accordingly, and the parameters are optimized on a training data set, thereby achieving better image annotation accuracy.
Description of drawings
Fig. 1 is the latent Dirichlet allocation model in the embodiment of the invention;
Fig. 2 is the first-order Markov network model in the embodiment of the invention;
Fig. 3 is the structure combining the first-order Markov network and the latent Dirichlet allocation in the embodiment of the invention;
Fig. 4 is the classification model combining the first-order Markov network model and the latent Dirichlet allocation model in the embodiment of the invention: the latent Dirichlet Markov network model.
Embodiment
The present invention is further illustrated below in conjunction with the accompanying drawings and a specific embodiment. It should be understood that the embodiment is only intended to illustrate the present invention and not to limit its scope; after reading the present disclosure, modifications of various equivalent forms by those skilled in the art all fall within the scope defined by the appended claims of this application.
When the latent Dirichlet allocation model is used to detect, recognize and locate targets in an image, the spatial position information between the regions of the image is ignored; moreover, the visual features of each region must be quantized, so only discretized visual keywords can be handled. A first-order Markov network model, on the other hand, can handle continuous-valued visual features (or feature vectors) and, at the same time, the spatial relations between regions. The present invention therefore proposes an image annotation model combining the latent Dirichlet allocation model and the first-order Markov network model: the topic information generated by the latent Dirichlet allocation model describes the visual co-occurrence relations in the image and thereby improves the classification performance of the first-order Markov network model; at the same time, when the latent Dirichlet allocation model generates the topic information of each region, not only the keyword corresponding to each node but also the corresponding class information is taken into account, which further improves the accuracy of image annotation.
The detailed procedure of the method of the invention comprises the following steps:
Step (1): segment each image into a number of regions using an over-segmentation method; the goal of this segmentation is to separate the different objects in the image into different regions, so that each segmented region contains objects of only a single class.
Step (2): for each region in the image, extract feature information such as color, texture, shape and spatial position, forming a set of continuous-valued feature vectors.
Step (3): cluster all continuous-valued feature vectors with K-means to obtain K cluster centers, which form the vocabulary V. For the 27-dimensional continuous-valued feature vectors of all pixels in the current image, K-means clustering yields K cluster centers $(c_1, c_2, \ldots, c_K)$, which form the visual vocabulary $V = \{c_1, c_2, \ldots, c_K\}$.
Step (4): quantize each continuous-valued feature vector $H_i$ with the vocabulary V; for the continuous-valued feature vector $H_i$ of each pixel, select from the vocabulary the visual keyword with the smallest Euclidean distance,

$$w_i = \arg\min_{c_k \in V} \lVert H_i - c_k \rVert,$$

so that the original image is converted into an image composed of the visual keyword of each pixel.
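Steps (3) and (4), building the visual vocabulary by K-means and quantizing each feature vector to its nearest cluster center, can be sketched as follows. This is an illustrative sketch only: the tiny 2-D feature vectors, the choice K = 2, and the helper names `build_vocabulary` and `quantize` are hypothetical stand-ins for the 27-dimensional pixel features used by the method.

```python
import math
import random

def build_vocabulary(features, k, iters=20, seed=0):
    """Lloyd's K-means: cluster feature vectors into k centers (the vocabulary V)."""
    rng = random.Random(seed)
    centers = rng.sample(features, k)
    for _ in range(iters):
        # Assign each feature vector to its nearest center.
        groups = [[] for _ in range(k)]
        for f in features:
            j = min(range(k), key=lambda j: math.dist(f, centers[j]))
            groups[j].append(f)
        # Recompute each center as the mean of its group (keep old center if empty).
        centers = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return centers

def quantize(feature, vocabulary):
    """Step (4): index of the visual keyword with the smallest Euclidean distance."""
    return min(range(len(vocabulary)),
               key=lambda j: math.dist(feature, vocabulary[j]))

# Two well-separated blobs of 2-D "features" (the method uses 27-D pixel features).
features = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1), (5.0, 5.1), (5.1, 5.0), (4.9, 5.2)]
V = build_vocabulary(features, k=2)
words = [quantize(f, V) for f in features]
print(words)  # the first three features share one keyword, the last three the other
```

After quantization, each region (here, each toy feature) is represented by a discrete keyword index, which is exactly the form of input the latent Dirichlet allocation step requires.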
Step (5): for the spatial layout of all segmented regions in the image, consider the spatial relations between adjacent regions and build a first-order Markov network model, as shown in Fig. 3. The first-order Markov network model was likewise first used for labeling in natural language; to process image data, a two-dimensional first-order Markov network is required, whose structure is shown in Fig. 2, where the nodes represent the visual features and the classes of the regions of the image. The probability of a label sequence c of an image is:

$$P(c \mid x) = \frac{\exp\Bigl(\sum_{i \in N} u\,F(x_i, c_i) + \sum_{(i,j) \in E} v\,F(x_i, x_j, c_i, c_j)\Bigr)}{Z(x)} \qquad (1)$$

$$Z(x) = \sum_{c} \exp\Bigl(\sum_{i \in N} u\,F(x_i, c_i) + \sum_{(i,j) \in E} v\,F(x_i, x_j, c_i, c_j)\Bigr) \qquad (2)$$

where N and E denote the set of nodes and the set of edges of the first-order Markov network respectively, u is the weight of the visual features of each node, v is the weight of the edge features, $F(x_i, c_i)$ is the feature of the current node, and $F(x_i, x_j, c_i, c_j)$ is the feature of the current edge.
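On a small graph, equations (1) and (2) can be checked by brute-force enumeration of all labelings. The sketch below is not the patented implementation: the 3-node chain, the two-class label set, the weights u and v, and the particular feature functions `F_node` (a confidence term) and `F_edge` (a label-smoothness term) are invented for illustration.

```python
import itertools
import math

N = [0, 1, 2]                  # nodes of the toy first-order Markov network
E = [(0, 1), (1, 2)]           # edges between adjacent regions
x = [0.9, 0.8, 0.1]            # observed visual feature of each node
u, v = 1.0, 0.5                # node-feature and edge-feature weights

def F_node(xi, ci):
    # Higher score when a large feature value goes with class 1.
    return xi if ci == 1 else 1.0 - xi

def F_edge(xi, xj, ci, cj):
    # Smoothness: adjacent regions prefer equal labels.
    return 1.0 if ci == cj else 0.0

def score(c):
    # The exponent of equation (1).
    s = sum(u * F_node(x[i], c[i]) for i in N)
    s += sum(v * F_edge(x[i], x[j], c[i], c[j]) for i, j in E)
    return s

# Z(x) of equation (2): sum over every possible labeling c.
Z = sum(math.exp(score(c)) for c in itertools.product([0, 1], repeat=len(N)))
P = {c: math.exp(score(c)) / Z for c in itertools.product([0, 1], repeat=len(N))}

best = max(P, key=P.get)
print(best)                        # → (1, 1, 0)
print(round(sum(P.values()), 10))  # → 1.0
```

The strong features of the first two nodes plus the smoothness edge term pull the labeling toward (1, 1, 0), which is the behavior the model's combination of node and edge potentials is meant to capture.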
Step (6): from the visual keywords of all regions in the image, compute the visual keyword histogram, and model the co-occurrence relations between visual keywords with latent Dirichlet allocation. Latent Dirichlet allocation is a commonly used topic model in natural language processing; by collecting statistics such as keyword frequencies in documents, it obtains the topic information $P(w_n \mid z_k, d)$ of different keywords in different documents. When latent Dirichlet allocation is used to process image data, the image first needs to be divided into blocks, features are then extracted from each block, and the features are quantized into a number of keywords. Suppose the vocabulary contains N keywords $w_n$, $n = 1, \ldots, N$; $z_k$, $k = 1, \ldots, K$ denotes the k-th topic, where K is the number of topics; and $d = 1, \ldots, D$ indexes the D documents. The probabilistic graph structure of latent Dirichlet allocation is shown in Fig. 1, where α is a K-dimensional vector, $P(\theta \mid \alpha)$ follows a Dirichlet distribution, $P(z \mid \theta)$ follows a multinomial distribution, and $\beta_{kn} = P(w = n \mid z = k)$. $P(w_n \mid z_k, d)$ denotes the probability that in document d a keyword with topic $z_k$ is $w_n$. The number of topics K is determined manually from prior knowledge of the segmented regions; a latent Dirichlet allocation model with K topics is trained on the corpus C, thereby obtaining the probability that each pixel in each region belongs to the different topics.
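As a hedged illustration of how a trained latent Dirichlet allocation supplies topic information per visual keyword, the sketch below assumes the topic-word probabilities `beta` and the document topic mixture `theta` are already trained (the patent trains them on the corpus C); all numbers are invented.

```python
# beta[k][n] = P(w = n | z = k), theta[k] = P(z = k) for one document (image).
K, Nw = 2, 3                       # number of topics, vocabulary size
beta = [[0.7, 0.2, 0.1],           # topic 0 favors keyword 0
        [0.1, 0.2, 0.7]]           # topic 1 favors keyword 2
theta = [0.5, 0.5]                 # document-level topic mixture

def keyword_histogram(words, nw):
    """Step (6): count how often each visual keyword occurs in the image."""
    h = [0] * nw
    for w in words:
        h[w] += 1
    return h

def topic_posterior(w):
    """P(z = k | w) ∝ P(w | z = k) P(z = k), by Bayes' rule."""
    joint = [beta[k][w] * theta[k] for k in range(K)]
    z = sum(joint)
    return [j / z for j in joint]

words = [0, 0, 2, 1, 0, 2]         # visual keywords of the regions of one image
print(keyword_histogram(words, Nw))   # → [3, 1, 2]
print(topic_posterior(0))             # keyword 0 points strongly to topic 0
```

The per-region topic probabilities produced this way are exactly the quantities the combined model of step (7) consumes.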
Step (7): combining the spatial relations and the co-occurrence relations between the regions in the image, build a probabilistic graphical model that couples the first-order Markov network with the latent Dirichlet allocation. Here the first-order Markov network model handles the spatial relations and the continuous-valued features (or feature vectors), while at the same time the class of each node is influenced by the topic information. The conditional probability formula of the latent Dirichlet Markov network is:

$$p(c \mid z, x, u, v) = \frac{\exp\Bigl(\sum_{k=1}^{K} \sum_{i} \delta(z_i = k)\,u_k\,F(x_i, c_i) + \sum_{ij} v\,F(x_i, x_j, c_i, c_j)\Bigr)}{Z(u, v, x, z)} \qquad (3)$$

$$\delta(x - a) = \begin{cases} 1 & \text{if } x = a \\ 0 & \text{otherwise} \end{cases} \qquad (4)$$
As can be seen from Fig. 4, compared with the latent Dirichlet allocation model, when the topic information of each region is generated, not only the visual keyword of each region but also the class label of each region is used, and the class labels of the regions stand in spatial relations to one another, which remedies the inability of latent Dirichlet allocation to exploit spatial position information. Compared with the first-order Markov network model, when classifying each region, not only the visual features of the current node and its neighboring nodes are considered, but the topic information of the current region also assists the classification. Combining the advantages of the two models thus improves the image annotation results.
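A minimal numeric instance of equations (3) and (4): the topic assignment $z_i$ selects, through the indicator δ, which per-topic weight $u_k$ applies to region i. The graph, features, weights and topic assignments below are invented for illustration and are not taken from the patent.

```python
import itertools
import math

K = 2
N = [0, 1]                  # two regions
E = [(0, 1)]                # one adjacency edge
x = [0.9, 0.2]              # observed visual features
z = [0, 1]                  # topic of each region (from the LDA step)
u = [2.0, 0.5]              # one node weight per topic, as in equation (3)
v = 0.5

def F_node(xi, ci):
    return xi if ci == 1 else 1.0 - xi

def F_edge(xi, xj, ci, cj):
    return 1.0 if ci == cj else 0.0

def score(c):
    # delta(z_i = k) in equation (3) simply picks the weight u[z[i]].
    s = sum(u[z[i]] * F_node(x[i], c[i]) for i in N)
    s += sum(v * F_edge(x[i], x[j], c[i], c[j]) for i, j in E)
    return s

labelings = list(itertools.product([0, 1], repeat=len(N)))
Z = sum(math.exp(score(c)) for c in labelings)
P = {c: math.exp(score(c)) / Z for c in labelings}
print(max(P, key=P.get))    # → (1, 1)
```

Because region 0 carries the large topic-0 weight, its confident feature dominates, and the smoothness edge then pulls region 1 to the same label.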
Step (8): using a manually annotated image data set, perform segmentation, feature extraction, quantization, model construction and training according to the steps above, obtaining a set of model parameters. The goal of training the latent Dirichlet Markov network is to find the parameters $\{\alpha, \beta, s\} = \arg\max_{\{\alpha, \beta, s\}} \log P(D \mid \alpha, \beta, s)$. To this end, we first build an expanded first-order Markov network model, and then solve for the parameters using variational methods and the expanded first-order Markov network. The conditional probability distribution of the expanded first-order Markov network model is:

$$P(c \mid x, \varphi, s) = \frac{\exp\Bigl(\sum_{i} \sum_{k=1}^{K} \varphi_{ik}\,u_k\,F(x_i, c_i) + \sum_{ij} v\,F(x_i, x_j, c_i, c_j)\Bigr)}{Z(x, \varphi, s)} \qquad (5)$$

where $\varphi_{ik} = P(z_i = k)$ is the probability that node i corresponds to the k-th topic. It can be seen that the expanded first-order Markov network uses the topic probabilities to expand the node features of the first-order Markov network model.
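The difference between equation (3) and the expanded model of equation (5) lies only in the node potential: the hard indicator $\delta(z_i = k)$ is replaced by the soft topic probabilities $\varphi_{ik}$, so every topic's weight contributes in proportion to its probability. A small sketch with invented weights and probabilities:

```python
u = [2.0, 0.5]                    # per-topic node weights, as in equation (3)
phi = [[0.9, 0.1], [0.3, 0.7]]    # phi[i][k] = P(z_i = k) for two regions

def F_node(xi, ci):
    return xi if ci == 1 else 1.0 - xi

def node_potential_soft(i, xi, ci):
    """Equation (5) node potential: sum_k phi_ik * u_k * F(x_i, c_i)."""
    return sum(phi[i][k] * u[k] for k in range(len(u))) * F_node(xi, ci)

def node_potential_hard(zi, xi, ci):
    """Equation (3) node potential: u_{z_i} * F(x_i, c_i)."""
    return u[zi] * F_node(xi, ci)

# With phi concentrated on topic 0, the soft potential approaches the hard one.
print(round(node_potential_soft(0, 0.9, 1), 3))   # (0.9*2.0 + 0.1*0.5) * 0.9
print(round(node_potential_hard(0, 0.9, 1), 3))   # 2.0 * 0.9
```

This soft weighting is what makes the expanded model differentiable in the topic probabilities, which is what the variational training step exploits.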
The training steps of the latent Dirichlet Markov network are shown in the following table:
(The training-procedure table appears as an image in the original document.)

Claims (2)

1. An image annotation method combining intra-image spatial relations and visual co-occurrence relations, characterized in that it comprises the following steps:
(1) segment each image into a number of regions using an over-segmentation method, separating the different objects in the image into different regions so that each segmented region contains objects of only a single class;
(2) for each region in the image, extract color, texture, shape and spatial position feature information, forming a set of continuous-valued feature vectors;
(3) cluster all continuous-valued feature vectors with K-means to obtain K cluster centers, which form the vocabulary V;
(4) quantize the feature vector of each region with the vocabulary V, obtaining the visual keyword W of each pixel;
(5) for the spatial layout of all segmented regions in the image, consider the spatial relations between adjacent regions and build a first-order Markov network model;
(6) from the visual keywords of all regions in the image, compute the visual keyword histogram, and model the co-occurrence relations between visual keywords with latent Dirichlet allocation;
(7) combining the spatial relations and the co-occurrence relations between the regions in the image, build a probabilistic graphical model that couples the first-order Markov network with the latent Dirichlet allocation;
(8) on a manually annotated image data set, perform segmentation, feature extraction, quantization, model construction and training according to steps (1) to (7), obtaining a set of model parameters;
(9) for an unannotated image, initialize the model with the trained parameters and annotate each segmented region according to the extracted features and visual keywords.
2. The image annotation method combining intra-image spatial relations and visual co-occurrence relations according to claim 1, characterized in that: in step (6), when latent Dirichlet allocation is used to process the image data, the image first needs to be divided into blocks, features are then extracted from each block and quantized into a number of keywords; latent Dirichlet allocation handles the visual co-occurrence context information in the image, while the first-order Markov network model models the spatial position information between the regions in the image.
CN 201110382735 2011-11-28 2011-11-28 Image annotation method combined with image internal space relation and visual symbiosis relation Expired - Fee Related CN102495865B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110382735 CN102495865B (en) 2011-11-28 2011-11-28 Image annotation method combined with image internal space relation and visual symbiosis relation


Publications (2)

Publication Number Publication Date
CN102495865A true CN102495865A (en) 2012-06-13
CN102495865B CN102495865B (en) 2013-08-07

Family

ID=46187690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110382735 Expired - Fee Related CN102495865B (en) 2011-11-28 2011-11-28 Image annotation method combined with image internal space relation and visual symbiosis relation

Country Status (1)

Country Link
CN (1) CN102495865B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799614A (en) * 2012-06-14 2012-11-28 北京大学 Image search method based on space symbiosis of visual words
CN103197983A (en) * 2013-04-22 2013-07-10 东南大学 Service component reliability online time sequence predicting method based on probability graph model
CN103700094A (en) * 2013-12-09 2014-04-02 中国科学院深圳先进技术研究院 Interactive collaborative shape segmentation method and device based on label propagation
CN103902965A (en) * 2012-12-29 2014-07-02 深圳先进技术研究院 Spatial co-occurrence image representing method and application thereof in image classification and recognition
CN104484347A (en) * 2014-11-28 2015-04-01 浙江大学 Geographic information based hierarchical visual feature extracting method
CN105740891A (en) * 2016-01-27 2016-07-06 北京工业大学 Target detection method based on multilevel characteristic extraction and context model
CN106778812A (en) * 2016-11-10 2017-05-31 百度在线网络技术(北京)有限公司 Cluster realizing method and device
CN107122801A (en) * 2017-05-02 2017-09-01 北京小米移动软件有限公司 The method and apparatus of image classification
CN107967494A (en) * 2017-12-20 2018-04-27 华东理工大学 A kind of image-region mask method of view-based access control model semantic relation figure
CN108229491A (en) * 2017-02-28 2018-06-29 北京市商汤科技开发有限公司 The method, apparatus and equipment of detection object relationship from picture
CN109145936A (en) * 2018-06-20 2019-01-04 北京达佳互联信息技术有限公司 A kind of model optimization method and device
CN113449755A (en) * 2020-03-26 2021-09-28 阿里巴巴集团控股有限公司 Data processing method, model training method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009282980A (en) * 2008-05-20 2009-12-03 Ricoh Co Ltd Method and apparatus for image learning, automatic notation, and retrieving
CN101877064A (en) * 2009-04-30 2010-11-03 索尼株式会社 Image classification method and image classification device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009282980A (en) * 2008-05-20 2009-12-03 Ricoh Co Ltd Method and apparatus for image learning, automatic notation, and retrieving
CN101877064A (en) * 2009-04-30 2010-11-03 索尼株式会社 Image classification method and image classification device

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
GUO QIAOJIN ET AL.: "Image Annotation with Multiple Quantization", 2011 Sixth International Conference on Image and Graphics, 15 August 2011, pages 631-635 *
GUO QIAOJIN ET AL.: "Supervised LDA for Image Annotation", 2011 IEEE International Conference on SMC, 12 October 2011, pages 471-476 *
YU XIANG ET AL.: "Semantic Context Modeling with Maximal Margin Conditional Random Fields for Automatic Image Annotation", 2010 IEEE Conference on Computer Vision and Pattern Recognition, 31 December 2010, pages 3368-3375 *
GUO QIAOJIN, DING YI, LI NING: "A survey of keyword-based image annotation" (基于关键词的图像标注综述), Computer Engineering and Applications (计算机工程与应用), vol. 47, no. 30, 31 October 2011, pages 155-158 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799614A (en) * 2012-06-14 2012-11-28 北京大学 Image search method based on space symbiosis of visual words
CN103902965A (en) * 2012-12-29 2014-07-02 深圳先进技术研究院 Spatial co-occurrence image representing method and application thereof in image classification and recognition
CN103197983B (en) * 2013-04-22 2015-04-29 东南大学 Service component reliability online time sequence predicting method based on probability graph model
CN103197983A (en) * 2013-04-22 2013-07-10 东南大学 Service component reliability online time sequence predicting method based on probability graph model
CN103700094A (en) * 2013-12-09 2014-04-02 中国科学院深圳先进技术研究院 Interactive collaborative shape segmentation method and device based on label propagation
CN104484347B (en) * 2014-11-28 2018-06-05 浙江大学 A kind of stratification Visual Feature Retrieval Process method based on geography information
CN104484347A (en) * 2014-11-28 2015-04-01 浙江大学 Geographic information based hierarchical visual feature extracting method
CN105740891A (en) * 2016-01-27 2016-07-06 北京工业大学 Target detection method based on multilevel characteristic extraction and context model
CN105740891B (en) * 2016-01-27 2019-10-08 北京工业大学 Target detection based on multi level feature selection and context model
CN106778812A (en) * 2016-11-10 2017-05-31 百度在线网络技术(北京)有限公司 Cluster realizing method and device
CN106778812B (en) * 2016-11-10 2020-06-19 百度在线网络技术(北京)有限公司 Clustering implementation method and device
CN108229491A (en) * 2017-02-28 2018-06-29 北京市商汤科技开发有限公司 The method, apparatus and equipment of detection object relationship from picture
CN108229491B (en) * 2017-02-28 2021-04-13 北京市商汤科技开发有限公司 Method, device and equipment for detecting object relation from picture
CN107122801B (en) * 2017-05-02 2020-03-03 北京小米移动软件有限公司 Image classification method and device
CN107122801A (en) * 2017-05-02 2017-09-01 北京小米移动软件有限公司 The method and apparatus of image classification
CN107967494A (en) * 2017-12-20 2018-04-27 华东理工大学 A kind of image-region mask method of view-based access control model semantic relation figure
CN107967494B (en) * 2017-12-20 2020-12-11 华东理工大学 Image region labeling method based on visual semantic relation graph
CN109145936B (en) * 2018-06-20 2019-07-09 北京达佳互联信息技术有限公司 A kind of model optimization method and device
CN109145936A (en) * 2018-06-20 2019-01-04 北京达佳互联信息技术有限公司 A kind of model optimization method and device
CN113449755A (en) * 2020-03-26 2021-09-28 阿里巴巴集团控股有限公司 Data processing method, model training method, device, equipment and storage medium
CN113449755B (en) * 2020-03-26 2022-12-02 阿里巴巴集团控股有限公司 Data processing method, model training method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN102495865B (en) 2013-08-07

Similar Documents

Publication Publication Date Title
CN102495865B (en) Image annotation method combined with image internal space relation and visual symbiosis relation
CN104199972B (en) A kind of name entity relation extraction and construction method based on deep learning
CN101923653B (en) Multilevel content description-based image classification method
Yadollahpour et al. Discriminative re-ranking of diverse segmentations
Guillaumin et al. Multimodal semi-supervised learning for image classification
WO2018010365A1 (en) Cross-media search method
CN102819836B (en) Method and system for image segmentation
CN102637199B (en) Image marking method based on semi-supervised subject modeling
Ahn et al. Face and hair region labeling using semi-supervised spectral clustering-based multiple segmentations
CN108763192B (en) Entity relation extraction method and device for text processing
Dong et al. An adult image detection algorithm based on Bag-of-Visual-Words and text information
CN102945372B (en) Classifying method based on multi-label constraint support vector machine
Wang et al. Unsupervised segmentation of greenhouse plant images based on modified Latent Dirichlet Allocation
Kim et al. Image segmentation using consensus from hierarchical segmentation ensembles
CN107301426A (en) A kind of multi-tag clustering method of shoe sole print image
CN103714178B (en) Automatic image marking method based on word correlation
CN102496146B (en) Image segmentation method based on visual symbiosis
Gao et al. A hierarchical image annotation method based on SVM and semi-supervised EM
CN108804524B (en) Emotion distinguishing and importance dividing method based on hierarchical classification system
CN105678265A (en) Manifold learning-based data dimensionality-reduction method and device
Dong et al. Segmentation using subMarkov random walk
Yang et al. Vegetation segmentation based on variational level set using multi-channel local wavelet texture and color
Park et al. PESSN: precision enhancement method for semantic segmentation network
Guo et al. An automatic image annotation method based on the mutual K-nearest neighbor graph
Marin-Castro et al. Automatic image annotation using a semi-supervised ensemble of classifiers

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130807

Termination date: 20161128

CF01 Termination of patent right due to non-payment of annual fee