CN1873656A - Detection method of natural target in robot vision navigation - Google Patents
Detection method of natural target in robot vision navigation
- Publication number
- CN1873656A, CN 200510075539, CN 200510075539A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
- Manipulator (AREA)
Abstract
This invention relates to the field of target detection, and specifically to a natural target detection method for use in robot vision navigation. The method comprises: target modeling, image acquisition, preprocessing, segmentation parameter selection, robust natural image segmentation, feature extraction, target authentication, and result output.
Description
Technical field
The present invention relates to the field of target detection, and in particular to a natural target detection method for robot vision navigation.
Background technology
The detection of natural targets is a key problem in vision-based mobile robot navigation. Complex backgrounds and irregular illumination changes make natural targets difficult to extract from the background. Yet, compared with artificial landmarks, vision navigation based on natural target detection has much broader application prospects.
Broadly, target detection can be based on touch, hearing, taste, or vision. In robot vision navigation, target detection means vision-based detection of targets in images. Image target detection further divides into moving-target detection and static-target detection; the present invention addresses the detection of static natural targets used for navigation.
Static natural target detection generally comprises two steps: candidate region search and target authentication. Candidate region search is usually accomplished by segmentation, or by top-down visual processing that searches for regions of interest. Target authentication then confirms whether a candidate region is the target. Both steps are difficult under complex backgrounds: detecting a given target in arbitrarily acquired images requires considering natural image segmentation and target authentication together.
Existing natural image segmentation methods include: conventional segmentation, region-competition methods, methods based on target models, mean-shift segmentation, graph-theoretic segmentation, learning-based segmentation, and others. Among these, methods that fuse color, contour, and texture features best match real image structure and represent the development trend of natural image segmentation. However, the region-competition, target-model, and graph-theoretic methods are global, recursive optimization processes, and texture feature extraction demands heavy computation, so they are too complex for the real-time operating environment of a robot. They also have difficulty extracting the target correctly under complex backgrounds.
How the target is described is the foundation of target authentication. Target description methods fall into two classes: object-centered and viewer-centered. Object-centered methods describe the static properties of an object; viewer-centered methods reconstruct an object model from images taken from several different viewpoints and recognize the target from them. Although many description methods exist, each has strengths and weaknesses, no single method suits all cases, and the choice depends on the application scenario. When the target appears only partially in the image, or when strong distractor targets are present, it is hard to describe and distinguish different targets.
In summary, existing detection methods struggle to meet the following requirements of natural target detection in robot vision navigation:
- stable target segmentation under complex backgrounds;
- fast computation;
- detection even when the target appears only partially in the image;
- detection even when distractor targets appear in the image.
Summary of the invention
The object of the present invention is to provide a natural target detection method for robot vision navigation.
Natural target detection in robot vision navigation means stably detecting the navigation target under different illumination and background conditions. Current image target detection algorithms adopt one of two strategies: a bottom-up, data-driven model, or a top-down, knowledge-driven model. In the former, regardless of the type of target to be recognized, the original image first undergoes general low-level processing such as segmentation, labeling, and feature extraction, and the feature vector of each segmented region is then matched against the target model. Its advantage is wide applicability, suiting both single targets and complex scene analysis; its drawback is that, lacking the guidance of knowledge, the low-level segmentation, labeling, and feature extraction are blind, so the workload is large and the matching algorithm is complicated. In the latter, hypotheses about the features that may exist in the image are first proposed according to the model of the target to be recognized; purposeful segmentation, labeling, and feature extraction are then carried out under those hypotheses, and matching against the target model follows. Its advantage is that the low-level processing performs a coarse matching under the guidance of knowledge, avoiding the extraction of many unnecessary features and improving the efficiency of the algorithm, so the fine matching becomes simple and focused. Its drawback is that when the target to be recognized changes, the knowledge hypotheses must change with it, so portability is poor.
Although each strategy has its merits, segmentation guided by knowledge better matches the mechanism of human vision. The present invention therefore adopts a top-down, segmentation-based recognition approach and proposes a natural target detection method for mobile robot vision navigation based on robust natural image segmentation; its system block diagram is shown in Figure 1.
The method comprises the following parts: image acquisition, preprocessing, segmentation parameter selection, robust natural image segmentation, feature extraction, target authentication, target modeling, and result output. Image acquisition, preprocessing, robust natural image segmentation, feature extraction, and target authentication execute in sequence. Segmentation parameters are selected before detection, and target modeling builds the target model off-line.
Target modeling: building the target model must account for factors such as distractor targets and partial appearance of the target. Distractor targets are modeled together with the target to be recognized, so that their influence on detection can be eliminated.
Image acquisition and preprocessing: acquire a frame of the image into memory and filter it.
Segmentation parameter selection: this refers to choosing the parameters used for robust natural image segmentation. For a target that stands out against a complex background, a large segmentation parameter is used to coarsely segment the image; conversely, for a target that is inconspicuous against the background, a small segmentation parameter is used. The choice of segmentation parameter controls the number of regions the image is divided into.
Robust natural image segmentation: a visual processing method that combines top-down and bottom-up processing based on mean shift. For textured natural images under different illumination conditions it yields stable segmentation results, so its robustness is good; its parameters can also be adjusted to the needs of the higher-level visual task to obtain the required segmentation.
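The patent does not publish its segmentation code, but the role of the segmentation parameter can be illustrated with a toy one-dimensional mean shift: the kernel bandwidth plays the part of the segmentation parameter, with a large bandwidth merging modes (coarse segmentation, few regions) and a small one keeping them apart (fine segmentation, more regions). The gray-level data below are invented for illustration.

```python
# Toy 1-D mean shift mode seeking: each point is shifted to the mean of its
# neighbors within the bandwidth until it settles on a mode; distinct modes
# correspond to distinct regions in a segmentation.
def mean_shift_modes(points, bandwidth, iters=50):
    modes = []
    for x in points:
        for _ in range(iters):
            near = [p for p in points if abs(p - x) <= bandwidth]
            x = sum(near) / len(near)           # shift toward the local mean
        if not any(abs(x - m) < 1e-3 for m in modes):
            modes.append(round(x, 3))
    return sorted(modes)

pixels = [10, 11, 12, 40, 41, 42, 80, 81]       # 1-D "gray levels"
coarse = mean_shift_modes(pixels, bandwidth=40)  # large parameter: 2 modes
fine = mean_shift_modes(pixels, bandwidth=5)     # small parameter: 3 modes
```

With the small bandwidth the three clusters survive as separate modes, while the large bandwidth merges the two lower clusters, mirroring the coarse/fine parameter choice described above.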
Feature extraction: because in mobile robot navigation some targets appear only partially in the image, the present invention adopts a color-based target feature extraction method.
Target authentication: the features extracted from candidate regions are authenticated against the target model established off-line.
Result output: output the detection result.
In summary, compared with other target detection methods, the natural target detection method in robot vision navigation of the present invention differs as follows:
● It adopts target detection based on robust natural image segmentation, improving the robustness of detection under different illumination conditions.
● It models distractor targets and the target to be recognized simultaneously, eliminating the influence of distractors on detection.
● Thanks to statistical color modeling and robust natural image segmentation, the target can still be detected when it appears only partially in the image.
● Compared with target detection based on other natural image segmentation methods, computation is fast.
● It has good practical value and extends to other target detection applications.
Description of drawings
Fig. 1 is a block diagram of the natural target detection method in robot vision navigation of the present invention.
Fig. 2 shows door and corridor detection results for robot vision navigation. Leftmost: the original image; middle: the image after robust natural image segmentation; rightmost: the detection result, where the regions inside the white frames are the detected targets.
Specific embodiments
The block diagram of the natural target detection method in robot vision navigation is shown in Figure 1. The detection process consists of eight steps: target modeling, image acquisition, preprocessing, segmentation parameter selection, robust natural image segmentation, feature extraction, target authentication, and result output.
First, build the target model. The model is statistical: the color models of the target to be detected and of the distractor targets are described with three color components, Cb, Cr, and R/G. The three-dimensional color components are projected onto the Cb-Cr and Cb-R/G planes to form the target color template planes. Through off-line learning from a large number of target samples, the samples are mapped onto the template planes, and the color templates of the target and of the distractors are obtained by statistics.
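A minimal sketch of the template-building step on the Cb-Cr plane, under stated assumptions: 8-bit RGB input, the standard JPEG RGB-to-YCbCr conversion (the patent names the components but not the conversion), and a hypothetical bin size of 8 for quantizing the plane. The sample colors are invented.

```python
from collections import Counter

def cb_cr(r, g, b):
    """Standard JPEG/JFIF chroma components for 8-bit RGB (assumption)."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def build_template(samples, bin_size=8):
    """Map samples onto the quantized Cb-Cr plane; normalize counts to
    probabilities, giving one statistical color template."""
    counts = Counter()
    for r, g, b in samples:
        cb, cr = cb_cr(r, g, b)
        counts[(int(cb) // bin_size, int(cr) // bin_size)] += 1
    total = sum(counts.values())
    return {cell: n / total for cell, n in counts.items()}

def template_prob(template, r, g, b, bin_size=8):
    """Membership probability of a color under a learned template."""
    cb, cr = cb_cr(r, g, b)
    return template.get((int(cb) // bin_size, int(cr) // bin_size), 0.0)

# Off-line learning from a few hypothetical door-colored samples:
door = build_template([(120, 70, 40), (125, 72, 38), (118, 68, 42)])
```

The same construction on the Cb-R/G plane would give each target its second description template, matching the two-templates-per-target scheme of step S1.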
Second, select the segmentation parameters. For a conspicuous target, the coarse segmentation parameter of the robust natural image segmentation method is chosen; for an inconspicuous target, the fine segmentation parameter is used during detection.
Then, acquire and segment the image, and authenticate the target from extracted features: acquire an image, segment it under the selected parameters, compute the mean color of each segmented region, and verify each region against the established target color models. If the probability that a candidate region belongs to the target exceeds its probability of belonging to the other targets, the region is taken to be the target to be detected.
The concrete steps of the natural target detection method in robot vision navigation are:
Step S1: off-line, build the statistical color models of the target to be detected and of the distractor targets, with two color description templates per target;
Step S2: acquire a color image frame into memory through the image capture card;
Step S3: filter the image;
Step S4: select the segmentation parameter: the coarse parameter for a conspicuous target, the fine parameter for an inconspicuous one;
Step S5: with the selected parameter, segment the image into regions using the robust natural image segmentation method;
Step S6: compute the mean color of each segmented region as its region feature;
Step S7: verify each region against the established target color models; if the probability that a candidate region belongs to the target to be detected exceeds its probability of belonging to the other targets, the region is taken to be the target;
Step S8: output the detection result.
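Steps S6 and S7 can be sketched as follows, under stated assumptions: a region is reduced to its mean color, and is accepted when its probability under the target template exceeds its probability under every distractor template. The templates and probability function here are hypothetical one-cell lookup tables, not the patent's learned templates.

```python
def region_mean(pixels):
    """Step S6: mean color of a region given its (R, G, B) pixels."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def authenticate(region_pixels, target_tpl, distractor_tpls, prob):
    """Step S7: accept the region if its membership probability for the
    target exceeds that for every distractor. `prob(template, color)`
    returns the membership probability of a mean color."""
    mean = region_mean(region_pixels)
    p_target = prob(target_tpl, mean)
    return all(p_target > prob(d, mean) for d in distractor_tpls)

# Toy probability over hypothetical coarse color cells (32-level bins):
toy_prob = lambda tpl, c: tpl.get(tuple(int(x) // 32 for x in c), 0.0)
door_tpl = {(3, 2, 1): 0.9}   # door colors cluster in one coarse cell
wall_tpl = {(6, 6, 6): 0.8}   # wall (distractor) cluster
region = [(100, 70, 40), (98, 66, 44)]
is_door = authenticate(region, door_tpl, [wall_tpl], toy_prob)
```

Modeling the wall distractor alongside the door is what lets the rule reject a wall-colored region even though no "wall detector" is ever run explicitly.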
Fig. 2 shows door and corridor detection results for robot vision navigation. Leftmost: the original image; middle: the image after robust natural image segmentation; rightmost: the detection result, where the regions inside the white frames are the detected targets.
The following takes the detection of doors and corridors in indoor mobile robot vision navigation as an example.
First, build the models of the door and the corridor. Because the wall is a serious distractor, its influence must be considered when detecting doors and corridors; therefore the color models of three targets (door, corridor, and wall) are built off-line simultaneously. Each is described with the three color components Cb, Cr, and R/G. The three-dimensional color components are projected onto the Cb-Cr and Cb-R/G planes, forming six target color template planes. Through off-line learning from a large number of target samples, the samples are mapped onto the template planes and the color templates are obtained by statistics.
Second, select the segmentation parameters. Because the color contrast between door and corridor is obvious, the coarse segmentation parameter of the robust natural image segmentation method is chosen when detecting doors; because the colors of corridor and wall are close under the influence of illumination, the fine segmentation parameter is chosen when detecting corridors. Door detection and corridor detection therefore segment the image under different parameters.
Third, acquire and segment the image, and authenticate with extracted features. Acquire an image and segment it under the selected parameters; compute the mean color of each segmented region; verify each region against the established target color models. If a candidate region's probability of belonging to the door or corridor is greater than 0.1 while its probability of belonging to the other targets is less than 0.05, the region is taken to be the target (door or corridor).
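The door/corridor decision rule, as read from the text above, can be sketched directly; the probability values in the usage lines are illustrative, not taken from the patent.

```python
def accept(p_target, p_others, t_hi=0.1, t_lo=0.05):
    """Accept a region when its door/corridor membership probability exceeds
    t_hi and every other target's probability stays below t_lo."""
    return p_target > t_hi and all(p < t_lo for p in p_others)

assert accept(0.4, [0.01])        # clear door region
assert not accept(0.08, [0.01])   # door response too weak
assert not accept(0.4, [0.2])     # wall responds too strongly: rejected
```

The second threshold is what uses the wall model: a region that looks somewhat like a door but also strongly like the wall is rejected rather than misdetected.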
The features and effects of the present invention are:
1) Good detection robustness. Because detection is based on robust natural image segmentation, targets can be detected stably under different illumination conditions, suiting mobile robot vision navigation.
2) Distractor targets and the target to be recognized are modeled simultaneously, eliminating the influence of distractors on detection.
3) Thanks to statistical color modeling and robust natural image segmentation, the target can still be detected when it appears only partially in the image.
4) Compared with target detection based on other natural image segmentation methods, computation is fast.
5) Good practical value; the method extends to other target detection applications.
Claims (5)
1. A natural target detection method in robot vision navigation, wherein the detection process consists of eight steps: target modeling, image acquisition, preprocessing, segmentation parameter selection, robust natural image segmentation, feature extraction, target authentication, and result output.
2. The natural target detection method in robot vision navigation according to claim 1, wherein the target modeling step is: the target model is statistical; the color models of the target to be detected and of the distractor targets are described with the three color components Cb, Cr, and R/G; the three-dimensional color components are projected onto the Cb-Cr and Cb-R/G planes to form the target color template planes; through off-line learning from a large number of target samples, the samples are mapped onto the template planes, and the color templates of the target and of the distractors are obtained by statistics.
3. The natural target detection method in robot vision navigation according to claim 1, wherein the segmentation parameter selection step is: for a conspicuous target, the coarse segmentation parameter of the robust natural image segmentation method is chosen; for an inconspicuous target, the fine segmentation parameter is used during detection.
4. The natural target detection method in robot vision navigation according to claim 1, wherein the image acquisition and image segmentation step is: acquire an image, segment it under the selected parameters, compute the mean color of each segmented region, and verify each region against the established target color models; if the probability that a candidate region belongs to the target exceeds its probability of belonging to the other targets, the region is taken to be the target to be detected.
5. The natural target detection method in robot vision navigation according to claim 1, comprising the following concrete steps:
Step S1: off-line, build the statistical color models of the target to be detected and of the distractor targets, with two color description templates per target;
Step S2: acquire a color image frame into memory through the image capture card;
Step S3: filter the image;
Step S4: select the segmentation parameter: the coarse parameter for a conspicuous target, the fine parameter for an inconspicuous one;
Step S5: with the selected parameter, segment the image into regions using the robust natural image segmentation method;
Step S6: compute the mean color of each segmented region as its region feature;
Step S7: verify each region against the established target color models; if the probability that a candidate region belongs to the target to be detected exceeds its probability of belonging to the other targets, the region is taken to be the target;
Step S8: output the detection result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 200510075539 CN1873656A (en) | 2005-06-03 | 2005-06-03 | Detection method of natural target in robot vision navigation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN1873656A true CN1873656A (en) | 2006-12-06 |
Family
ID=37484127
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200510075539 Pending CN1873656A (en) | 2005-06-03 | 2005-06-03 | Detection method of natural target in robot vision navigation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN1873656A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101794391A (en) * | 2010-03-18 | 2010-08-04 | 中国农业大学 | Greenhouse environment leading line extraction method |
CN101469991B (en) * | 2007-12-26 | 2011-08-10 | 南京理工大学 | All-day structured road multi-lane line detection method |
CN105856227A (en) * | 2016-04-18 | 2016-08-17 | 呼洪强 | Robot vision navigation technology based on feature recognition |
CN109785367A (en) * | 2019-01-21 | 2019-05-21 | 视辰信息科技(上海)有限公司 | Exterior point filtering method and device in threedimensional model tracking |
CN110838131A (en) * | 2019-11-04 | 2020-02-25 | 网易(杭州)网络有限公司 | Method and device for realizing automatic cutout, electronic equipment and medium |
Legal Events
- C06 / PB01: Publication
- C10 / SE01: Entry into substantive examination; entry into force of request for substantive examination
- C02 / WD01: Deemed withdrawal of patent application after publication (patent law 2001)