CN102142089A - Semantic binary tree-based image annotation method - Google Patents
- Publication number
- CN102142089A (application CN201110002770A; granted as CN102142089B)
- Authority
- CN
- China
- Prior art keywords
- image
- word
- mark
- binary tree
- semantic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a semantic binary tree-based image annotation method. The method comprises the following steps: 1, for an image set of a specific scene, segmenting the annotated images used for learning with an image segmentation algorithm to obtain visual descriptions of the image regions; 2, constructing the visual nearest-neighbor graph of all images used for learning; 3, building the semantic binary tree of the scene from the nearest-neighbor graph of step 2; and 4, for an image to be annotated under the scene, finding the corresponding position from the root node of the semantic binary tree to a leaf node, and propagating all annotation words from that node up to the root node to the image. The invention aims to build a semantic binary tree for the annotated training image set of a specific scene, thereby improving the precision of automatic semantic annotation of images that have been classified by scene according to their visual features.
Description
Technical field
The present invention relates to a method for automatic semantic annotation of images.
Background technology
The annotation words of an image, as a very valuable image description resource, reflect the high-level semantic information of the image well. Making full use of the annotation word information of training images is an important means of improving image annotation precision. The background of the present invention is to extract the semantic scenes of the training images on the basis of jointly exploiting the correlation between image semantics and visual features, to build visual models for the training images of different scenes, and finally to classify an image to be annotated semantically according to its visual features.
Summary of the invention
The object of the present invention is to provide a semantic binary tree-based image annotation method that can improve the annotation precision of images to be annotated after scene classification.
The object of the present invention is achieved as follows:
Step 1: for an image set of a specific scene, segment the annotated images used for learning with an image segmentation algorithm to obtain visual descriptions of the image regions;
Step 2: construct the visual nearest-neighbor graph of all images used for learning;
Step 3: build the semantic binary tree of the scene from the nearest-neighbor graph of step 2;
Step 4: for an image to be annotated under the scene, find the corresponding position from the root node of the semantic binary tree to a leaf node, and propagate all annotation words from that node up to the root node to the image.
The visual nearest-neighbor graph of all images used for learning is constructed as follows: the visual distance between images is the Earth Mover's Distance, a similarity measure based on integrated matching of multiple regions; each vertex of the graph corresponds to an image, and each edge corresponds to the visual distance between the images it connects.
The semantic binary tree is built as follows: the root node of the binary tree collects all annotated images of the scene, and the annotation word representing the scene is the semantic representation of the root node; the nearest-neighbor graph of step 2 is bipartitioned with the normalized-cut algorithm, dividing the images into two sets that represent the left and right subtrees of the root node; the significant annotation word of each of the two sets, excluding the word at the root node, is determined, and the membership of every image is re-decided according to this annotation word; the significant annotation word is found by counting the occurrences of each annotation word in the set and taking the word with the highest count; if more than one word has the highest count, the word with the lower overall word frequency is taken as the significant annotation word;
The same operations are repeated on the left and right subtrees of the root node until a set contains only one image or contains no significantly occurring annotation word; the leaf nodes at the bottom correspond to the images whose annotation words occur with lower frequency.
The present invention uses annotation words and visual information to build a semantic binary tree for the annotated images of a specific scene, and proposes a concrete method for building such a semantic tree. The root of the tree corresponds to the most common annotation word of the scene; as the semantic tree grows, the semantics of each node are split by branching, the semantics of the child nodes are refined step by step, and the annotation words they represent become progressively more concrete and specific. With the semantic binary tree thus built, an image to be annotated under the scene obtains its corresponding annotation information along the path from the root of the scene's semantic tree to a leaf node.
The present invention aims to build a semantic binary tree for the annotated training image set of a specific scene, thereby improving the precision of automatic semantic annotation of images that have been classified by scene according to their visual features.
The present invention applies a binary tree whose nodes carry keywords to image annotation and has high practical utility. It offers valuable help to many content-based image retrieval (CBIR) applications, for example Google's image search engine.
Description of drawings
The accompanying drawing is a flowchart of the present invention.
Embodiment
The present invention is described in more detail below with reference to the accompanying drawing:
Step 1: for an image set of a specific scene, segment the annotated images used for learning with an image segmentation algorithm to obtain visual descriptions of the image regions.
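The patent does not prescribe a particular segmentation algorithm or region descriptor. Below is a minimal sketch of this step in Python, assuming SLIC superpixel segmentation from scikit-image and a per-region color histogram as the visual description; both choices, and all names and parameters, are illustrative assumptions rather than the method fixed by the text:

```python
import numpy as np
from skimage import io, segmentation

def describe_regions(image_path, n_segments=8, bins=8):
    """Segment one annotated training image and return its visual
    description: a region signature, i.e. a list of
    (area_weight, color_histogram) pairs, one entry per region."""
    img = io.imread(image_path)                          # H x W x 3 array
    labels = segmentation.slic(img, n_segments=n_segments, start_label=0)
    signature = []
    for region in np.unique(labels):
        mask = labels == region
        pixels = img[mask].astype(float) / 255.0         # N x 3 RGB values
        # per-channel color histogram, concatenated into one region descriptor
        hist = np.concatenate(
            [np.histogram(pixels[:, c], bins=bins, range=(0, 1))[0]
             for c in range(3)]).astype(float)
        hist /= hist.sum() + 1e-12
        weight = mask.sum() / mask.size                  # area fraction of the region
        signature.append((weight, hist))
    return signature
```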
Step 2: construct the visual nearest-neighbor graph of all images used for learning. The visual distance between images is the Earth Mover's Distance (EMD), a similarity measure based on integrated matching of multiple regions. Each vertex of the graph corresponds to an image, and each edge corresponds to the visual distance between the images it connects.
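A sketch of the graph construction using the region signatures from the previous sketch. The EMD between two multi-region signatures is computed here as a small transportation linear program with scipy (a dedicated EMD library could equally be used), and each image is connected to its k nearest neighbors; the ground distance, the value of k, and the function names are assumptions for illustration:

```python
import numpy as np
import networkx as nx
from scipy.optimize import linprog

def emd(sig_a, sig_b):
    """Earth Mover's Distance between two region signatures
    [(weight, histogram), ...] whose weights each sum to 1."""
    wa = np.array([w for w, _ in sig_a])
    wb = np.array([w for w, _ in sig_b])
    ha = np.stack([h for _, h in sig_a])
    hb = np.stack([h for _, h in sig_b])
    # ground distance between regions: Euclidean distance of their histograms
    ground = np.linalg.norm(ha[:, None, :] - hb[None, :, :], axis=2)
    m, n = ground.shape
    # transportation LP: flow f_ij >= 0, row sums = wa, column sums = wb
    constraints = []
    for i in range(m):
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1; constraints.append(row)
    for j in range(n):
        col = np.zeros(m * n); col[j::n] = 1; constraints.append(col)
    result = linprog(ground.ravel(), A_eq=np.array(constraints),
                     b_eq=np.concatenate([wa, wb]), bounds=(0, None),
                     method="highs")
    return result.fun

def build_neighbor_graph(signatures, k=5):
    """Vertices correspond to the training images; each vertex is joined to
    its k visually nearest neighbors, with edges weighted by the EMD."""
    count = len(signatures)
    dist = np.zeros((count, count))
    for i in range(count):
        for j in range(i + 1, count):
            dist[i, j] = dist[j, i] = emd(signatures[i], signatures[j])
    graph = nx.Graph()
    for i in range(count):
        for j in np.argsort(dist[i])[1:k + 1]:           # index 0 is the image itself
            graph.add_edge(i, int(j), weight=float(dist[i, j]))
    return graph, dist
```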
Step 3: build the semantic binary tree of the scene from the nearest-neighbor graph of step 2. The method is as follows.
The root node of the binary tree collects all annotated images of the scene, and the annotation word representing the scene is the semantic representation of the root node. The nearest-neighbor graph of step 2 is bipartitioned with the N-Cut (Normalized Cut) algorithm, dividing the images into two sets that represent the left and right subtrees of the root node. The significant annotation word of each of the two sets, excluding the word at the root node, is determined, and the membership of every image is re-decided according to this annotation word. The significant annotation word is found by counting the occurrences of each annotation word in the set and taking the word with the highest count. If more than one word has the highest count, the word with the lower overall word frequency is taken as the significant annotation word.
The same operations are repeated on the left and right subtrees of the root node until a set contains only one image or contains no significantly occurring annotation word. The leaf nodes at the bottom correspond to the images whose annotation words occur with lower frequency.
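A sketch of this recursive construction. The normalized-cut bipartition is approximated with spectral clustering on a precomputed affinity matrix (scikit-learn), with the affinity derived from the EMD matrix by a Gaussian-style kernel; that kernel, the tie-handling details, and the rule for images carrying both or neither significant word are illustrative assumptions:

```python
import numpy as np
from collections import Counter
from sklearn.cluster import SpectralClustering

class Node:
    def __init__(self, images, word=None):
        self.images = images              # indices of training images gathered here
        self.word = word                  # significant annotation word of this node
        self.left = self.right = None

def significant_word(images, annotations, used, corpus_freq):
    """Most frequent annotation word in the set, excluding words already used
    at ancestor nodes; ties are broken by lower corpus-wide frequency."""
    counts = Counter(w for i in images for w in annotations[i] if w not in used)
    if not counts:
        return None
    top = max(counts.values())
    tied = [w for w, c in counts.items() if c == top]
    return min(tied, key=lambda w: corpus_freq[w])

def build_tree(images, dist, annotations, used, corpus_freq):
    node = Node(list(images))
    if len(images) <= 1:                  # only one image left: stop
        return node
    # bipartition the sub-graph (normalized-cut style, via spectral clustering)
    sub = dist[np.ix_(images, images)]
    affinity = np.exp(-sub / (sub.mean() + 1e-12))
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                assign_labels="discretize").fit_predict(affinity)
    parts = [[img for img, lab in zip(images, labels) if lab == s] for s in (0, 1)]
    words = [significant_word(p, annotations, used, corpus_freq) for p in parts]
    if words[0] is None and words[1] is None:
        return node                       # no significantly occurring word: stop
    # re-decide the membership of every image according to the significant words
    left, right = [], []
    for img in images:
        tags = set(annotations[img])
        (left if (words[0] in tags or words[1] not in tags) else right).append(img)
    if not left or not right:
        return node
    node.left = build_tree(left, dist, annotations, used | {words[0]}, corpus_freq)
    node.left.word = words[0]
    node.right = build_tree(right, dist, annotations, used | {words[1]}, corpus_freq)
    node.right.word = words[1]
    return node
```

A typical call, under the same assumptions, would be `build_tree(list(range(len(signatures))), dist, annotations, {scene_word}, Counter(w for a in annotations for w in a))`, where `annotations[i]` is the list of annotation words of training image i and `dist` is the EMD matrix from the previous sketch.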
Step 4: for an image to be annotated under the scene, find the corresponding position from the root node of the semantic binary tree to a leaf node, and propagate all annotation words from that node up to the root node to the image.
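A sketch of this annotation step: the image to be annotated descends from the root toward the visually closer child at every level (here, the child whose training images have the smaller mean EMD to the query, reusing the emd function sketched above) and collects the annotation words met along the path; the descent rule is an illustrative assumption, since the patent only states that the corresponding position from root to leaf is found:

```python
import numpy as np

def annotate(query_signature, root, scene_word, signatures):
    """Walk the semantic binary tree from the root to a leaf for an
    unannotated image of the scene and return the annotation words
    collected along the path (root word included)."""
    words = [scene_word]                  # the root represents the scene's word
    node = root
    while node.left is not None and node.right is not None:
        def mean_emd(child):
            # mean visual distance to the training images under the child node
            return np.mean([emd(query_signature, signatures[i])
                            for i in child.images])
        node = node.left if mean_emd(node.left) <= mean_emd(node.right) else node.right
        if node.word is not None:
            words.append(node.word)       # pass down every word met on the path
    return words
```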
Claims (3)
1. A semantic binary tree-based image annotation method, characterized in that it comprises:
Step 1: for an image set of a specific scene, segment the annotated images used for learning with an image segmentation algorithm to obtain visual descriptions of the image regions;
Step 2: construct the visual nearest-neighbor graph of all images used for learning;
Step 3: build the semantic binary tree of the scene from the nearest-neighbor graph of step 2;
Step 4: for an image to be annotated under the scene, find the corresponding position from the root node of the semantic binary tree to a leaf node, and propagate all annotation words from that node up to the root node to the image.
2. The semantic binary tree-based image annotation method according to claim 1, characterized in that the visual nearest-neighbor graph of all images used for learning is constructed as follows: the visual distance between images is the Earth Mover's Distance, a similarity measure based on integrated matching of multiple regions; each vertex of the graph corresponds to an image, and each edge corresponds to the visual distance between the images it connects.
3. The semantic binary tree-based image annotation method according to claim 1 or 2, characterized in that the semantic binary tree is built as follows: the root node of the binary tree collects all annotated images of the scene, and the annotation word representing the scene is the semantic representation of the root node; the nearest-neighbor graph of step 2 is bipartitioned with the normalized-cut algorithm, dividing the images into two sets that represent the left and right subtrees of the root node; the significant annotation word of each of the two sets, excluding the word at the root node, is determined, and the membership of every image is re-decided according to this annotation word; the significant annotation word is found by counting the occurrences of each annotation word in the set and taking the word with the highest count; if more than one word has the highest count, the word with the lower overall word frequency is taken as the significant annotation word;
The same operations are repeated on the left and right subtrees of the root node until a set contains only one image or contains no significantly occurring annotation word; the leaf nodes at the bottom correspond to the images whose annotation words occur with lower frequency.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110002770A CN102142089B (en) | 2011-01-07 | 2011-01-07 | Semantic binary tree-based image annotation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110002770A CN102142089B (en) | 2011-01-07 | 2011-01-07 | Semantic binary tree-based image annotation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102142089A (en) | 2011-08-03 |
CN102142089B (en) | 2012-09-26 |
Family
ID=44409586
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110002770A Expired - Fee Related CN102142089B (en) | 2011-01-07 | 2011-01-07 | Semantic binary tree-based image annotation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102142089B (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102436583A (en) * | 2011-09-26 | 2012-05-02 | 哈尔滨工程大学 | Image segmentation method based on annotated image learning |
CN103365850A (en) * | 2012-03-27 | 2013-10-23 | 富士通株式会社 | Method and device for annotating images |
CN103530415A (en) * | 2013-10-29 | 2014-01-22 | 谭永 | Natural language search method and system compatible with keyword search |
CN103632388A (en) * | 2013-12-19 | 2014-03-12 | 百度在线网络技术(北京)有限公司 | Semantic annotation method, device and client for image |
CN106814162A (en) * | 2016-12-15 | 2017-06-09 | 珠海华海科技有限公司 | A kind of Outdoor Air Quality solution and system |
CN108171283A (en) * | 2017-12-31 | 2018-06-15 | 厦门大学 | A kind of picture material automatic describing method based on structuring semantic embedding |
CN108182443A (en) * | 2016-12-08 | 2018-06-19 | 广东精点数据科技股份有限公司 | A kind of image automatic annotation method and device based on decision tree |
WO2019021088A1 (en) * | 2017-07-24 | 2019-01-31 | International Business Machines Corporation | Navigating video scenes using cognitive insights |
CN110199525A (en) * | 2017-01-18 | 2019-09-03 | Pcms控股公司 | For selecting scene with the system and method for the browsing history in augmented reality interface |
CN110288019A (en) * | 2019-06-21 | 2019-09-27 | 北京百度网讯科技有限公司 | Image labeling method, device and storage medium |
CN110413820A (en) * | 2019-07-12 | 2019-11-05 | 深兰科技(上海)有限公司 | A kind of acquisition methods and device of picture description information |
US10916013B2 (en) | 2018-03-14 | 2021-02-09 | Volvo Car Corporation | Method of segmentation and annotation of images |
US11100366B2 (en) | 2018-04-26 | 2021-08-24 | Volvo Car Corporation | Methods and systems for semi-automated image segmentation and annotation |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1936892A (en) * | 2006-10-17 | 2007-03-28 | 浙江大学 | Image content semanteme marking method |
- 2011-01-07: application CN201110002770A filed in China; granted as CN102142089B; status: Expired - Fee Related (not active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1936892A (en) * | 2006-10-17 | 2007-03-28 | 浙江大学 | Image content semanteme marking method |
Non-Patent Citations (2)
Title |
---|
IEEE, 2009-12-31, Lixing Jiang et al., "Automatic Image Annotation Based on Decision Tree Machine Learning" * |
智能系统学报 (CAAI Transactions on Intelligent Systems), 2010-02-28, Liu Yongmei et al., "K-means image segmentation based on spatial position constraints" (基于空间位置约束的K均值图像分割), Vol. 5, No. 1 * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102436583A (en) * | 2011-09-26 | 2012-05-02 | 哈尔滨工程大学 | Image segmentation method based on annotated image learning |
CN103365850A (en) * | 2012-03-27 | 2013-10-23 | 富士通株式会社 | Method and device for annotating images |
CN103365850B (en) * | 2012-03-27 | 2017-07-14 | 富士通株式会社 | Image labeling method and image labeling device |
CN103530415A (en) * | 2013-10-29 | 2014-01-22 | 谭永 | Natural language search method and system compatible with keyword search |
CN103632388A (en) * | 2013-12-19 | 2014-03-12 | 百度在线网络技术(北京)有限公司 | Semantic annotation method, device and client for image |
CN108182443B (en) * | 2016-12-08 | 2020-08-07 | 广东精点数据科技股份有限公司 | Automatic image labeling method and device based on decision tree |
CN108182443A (en) * | 2016-12-08 | 2018-06-19 | 广东精点数据科技股份有限公司 | A kind of image automatic annotation method and device based on decision tree |
CN106814162A (en) * | 2016-12-15 | 2017-06-09 | 珠海华海科技有限公司 | A kind of Outdoor Air Quality solution and system |
US11663751B2 (en) | 2017-01-18 | 2023-05-30 | Interdigital Vc Holdings, Inc. | System and method for selecting scenes for browsing histories in augmented reality interfaces |
CN110199525B (en) * | 2017-01-18 | 2021-12-14 | Pcms控股公司 | System and method for browsing history records in augmented reality interface |
CN110199525A (en) * | 2017-01-18 | 2019-09-03 | Pcms控股公司 | For selecting scene with the system and method for the browsing history in augmented reality interface |
US10970334B2 (en) | 2017-07-24 | 2021-04-06 | International Business Machines Corporation | Navigating video scenes using cognitive insights |
WO2019021088A1 (en) * | 2017-07-24 | 2019-01-31 | International Business Machines Corporation | Navigating video scenes using cognitive insights |
CN108171283B (en) * | 2017-12-31 | 2020-06-16 | 厦门大学 | Image content automatic description method based on structured semantic embedding |
CN108171283A (en) * | 2017-12-31 | 2018-06-15 | 厦门大学 | A kind of picture material automatic describing method based on structuring semantic embedding |
US10916013B2 (en) | 2018-03-14 | 2021-02-09 | Volvo Car Corporation | Method of segmentation and annotation of images |
US11100366B2 (en) | 2018-04-26 | 2021-08-24 | Volvo Car Corporation | Methods and systems for semi-automated image segmentation and annotation |
CN110288019A (en) * | 2019-06-21 | 2019-09-27 | 北京百度网讯科技有限公司 | Image labeling method, device and storage medium |
CN110413820A (en) * | 2019-07-12 | 2019-11-05 | 深兰科技(上海)有限公司 | A kind of acquisition methods and device of picture description information |
CN110413820B (en) * | 2019-07-12 | 2022-03-29 | 深兰科技(上海)有限公司 | Method and device for acquiring picture description information |
Also Published As
Publication number | Publication date |
---|---|
CN102142089B (en) | 2012-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102142089B (en) | Semantic binary tree-based image annotation method | |
Zamir et al. | Image geo-localization based on multiple nearest neighbor feature matching using generalized graphs | |
CN106951830B (en) | Image scene multi-object marking method based on prior condition constraint | |
CN107369183A (en) | Towards the MAR Tracing Registration method and system based on figure optimization SLAM | |
CN104050682A (en) | Image segmentation method fusing color and depth information | |
CN108090911A (en) | A kind of offshore naval vessel dividing method of remote sensing image | |
CN101842788A (en) | Method, apparatus and computer program product for performing a visual search using grid-based feature organization | |
CN103425757A (en) | Cross-medial personage news searching method and system capable of fusing multi-mode information | |
CN105389550A (en) | Remote sensing target detection method based on sparse guidance and significant drive | |
CN107248176A (en) | Indoor map construction method and electronic equipment | |
Li et al. | A method based on an adaptive radius cylinder model for detecting pole-like objects in mobile laser scanning data | |
CN111507296A (en) | Intelligent illegal building extraction method based on unmanned aerial vehicle remote sensing and deep learning | |
CN105608454A (en) | Text structure part detection neural network based text detection method and system | |
CN105138538A (en) | Cross-domain knowledge discovery-oriented topic mining method | |
CN110046218A (en) | A kind of method for digging, device, system and the processor of user's trip mode | |
CN103309982A (en) | Remote sensing image retrieval method based on vision saliency point characteristics | |
CN107977635A (en) | A kind of trellis drainage recognition methods | |
CN104392439A (en) | Image similarity confirmation method and device | |
CN106250396B (en) | Automatic image label generation system and method | |
CN104866852A (en) | Method and apparatus for extracting land cover information in remote sensing image | |
CN105574535A (en) | Graphic symbol identification method based on indirect distance angle histogram space relation expression model | |
CN118411716A (en) | License plate number recognition method based on time sequence joint probability optimization | |
EP3580690B1 (en) | Bayesian methodology for geospatial object/characteristic detection | |
CN103942779A (en) | Image segmentation method based on combination of graph theory and semi-supervised learning | |
CN110276779A (en) | A kind of dense population image generating method based on the segmentation of front and back scape |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2012-09-26; Termination date: 2018-01-07 |