CN102435174B - Method and device for detecting barrier based on hybrid binocular vision - Google Patents


Info

Publication number
CN102435174B
Authority
CN
China
Prior art keywords
left view
component
saliency maps
conspicuousness
carried out
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 201110340035
Other languages
Chinese (zh)
Other versions
CN102435174A (en)
Inventor
戴琼海
罗晓燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Beihang University
Original Assignee
Tsinghua University
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University and Beihang University
Priority to CN201110340035A
Publication of CN102435174A
Application granted
Publication of CN102435174B
Legal status: Active

Landscapes

  • Measurement Of Optical Distance (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for obstacle detection based on hybrid binocular vision. The method comprises the following steps: acquiring a left view and a right view of a scene, wherein the left view is a visible-light image and the right view is a near-infrared image; computing a visible-light saliency map for the left view and a near-infrared saliency map for the right view; comparing each saliency map against its own saliency mean to determine the obstacle regions in the left view and the right view respectively; matching the obstacle region of the left view with the obstacle region of the right view to determine the final obstacle region; and analyzing the final obstacle region to obtain the distance and shape of the obstacle. According to the embodiments of the invention, the distance and shape of an obstacle can be detected accurately, and because only a small part of each view is matched and analyzed, computational complexity is effectively reduced and detection efficiency is high.

Description

Obstacle detection method and device based on hybrid binocular vision
Technical field
The present invention relates to the field of computer vision processing, and in particular to an obstacle detection method and device based on hybrid binocular vision.
Background technology
In an era of frequent disasters, aviation emergency rescue has become an important component of emergency response worldwide owing to its speed and effectiveness. However, complex low-altitude conditions pose a series of safety problems for aircraft: in remote regions the GPS signal may be lost; in unfamiliar environments geographic information is insufficient; and in severe weather scene information is lacking. Visual perception of the surrounding environment has therefore become a central research topic for present-day low-altitude aircraft and aerial robots.
In the 1970s, Marr of the Massachusetts Institute of Technology applied vision theory to binocular matching, obtaining depth information from two plane images with parallax, which established the theoretical foundation for the development of binocular stereo vision. Since its introduction, binocular stereo vision has developed rapidly in three-dimensional modeling, target recognition, topographic surveying and other areas, and has been widely applied in fields such as military equipment, industrial measurement and aerospace equipment. Binocular vision measurement gives a robot human-like vision: it can analyze information about the surrounding environment in real time, detect and dodge obstacles, and act autonomously. In particular, once an aerial robot has the visual capability of the human eye, it can perceive and analyze an unknown environment in real time, complete autonomous flight and execute its tasks.
At present, binocular stereo vision has become a research hotspot in computer vision. Based on the binocular parallax mechanism of the human eye, it finds the same scene point in a binocular image pair by matching, computes the correspondence of that scene point across the different images, and obtains its three-dimensional coordinates.
However, once there are obstacles in the scene, existing methods have difficulty achieving accurate localization and navigation, which may lead to accidents. For the complex low-altitude flight environment, how to accurately detect the position and shape of obstacles in order to localize and navigate an airborne aircraft is therefore an urgent problem.
Summary of the invention
The present invention aims to solve at least one of the technical problems described above.
To this end, one object of the present invention is to propose an obstacle detection method based on hybrid binocular vision. The method judges the distance and shape of an obstacle accurately, has low computational complexity and offers high detection efficiency.
Another object of the present invention is to propose an obstacle detection device based on hybrid binocular vision.
To achieve these goals, the obstacle detection method based on hybrid binocular vision proposed by embodiments of the first aspect of the present invention comprises the following steps: acquiring a left view and a right view of a scene, wherein the left view is a visible-light image and the right view is a near-infrared image; performing saliency computation on the left view and the right view respectively to obtain a visible-light saliency map corresponding to the left view and a near-infrared saliency map corresponding to the right view; comparing the visible-light saliency map and the near-infrared saliency map each against its saliency mean, and determining the obstacle regions in the left view and the right view according to the comparison results; matching the obstacle region of the left view with the obstacle region of the right view to determine a final obstacle region in the left view and the right view; and performing disparity computation on the final obstacle region to determine the distance of the obstacle from the acquisition point, and performing edge matching on the obstacle region to determine the shape of the obstacle.
The obstacle detection device based on hybrid binocular vision proposed by embodiments of the second aspect of the present invention comprises: an image acquisition device for acquiring a left view and a right view of a scene, wherein the left view is a visible-light image and the right view is a near-infrared image; a saliency image acquisition module for performing saliency computation on the left view and the right view to obtain a visible-light saliency map corresponding to the left view and a near-infrared saliency map corresponding to the right view; an obstacle region determination module for comparing the visible-light saliency map and the near-infrared saliency map each against its saliency mean, and determining the obstacle regions in the left view and the right view respectively according to the comparison results; a final obstacle region determination module for matching the obstacle region of the left view with the obstacle region of the right view to determine a final obstacle region in the left view and the right view; and an obstacle localization module for performing disparity computation on the final obstacle region to determine the distance of the obstacle from the acquisition point, and performing edge matching on the obstacle region to determine the shape of the obstacle.
According to the embodiments of the invention, the distance and shape of an obstacle can be detected accurately, and because only a small part of each of the left and right views is matched and analyzed, computational complexity is effectively reduced and detection efficiency is high. Accurate localization and navigation can thus be provided for equipment such as aircraft.
Additional aspects and advantages of the present invention are given in part in the following description; in part they will become obvious from the description, or may be learned through practice of the invention.
Description of drawings
The above and/or additional aspects and advantages of the present invention will become obvious and easily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of the obstacle detection method based on hybrid binocular vision according to an embodiment of the invention;
Figs. 2A-2B are, respectively, a left view and a right view acquired according to an embodiment of the invention;
Figs. 3A-3B are, respectively, the visible-light saliency map and the near-infrared saliency map corresponding to the left view and the right view shown in Figs. 2A-2B;
Figs. 4A-4B are schematic diagrams of the obstacle regions obtained by applying the method of an embodiment of the invention to the left view and the right view of Figs. 2A-2B;
Figs. 5A-5B are schematic diagrams of the final obstacle regions obtained after matching the obstacle regions of Figs. 4A-4B using the method of an embodiment of the invention; and
Fig. 6 is a structural diagram of the obstacle detection device based on hybrid binocular vision according to an embodiment of the invention.
Embodiment
Embodiments of the invention are described in detail below, examples of which are shown in the accompanying drawings, where identical or similar reference numbers denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary and serve only to explain the present invention; they are not to be construed as limiting the invention.
In the description of the invention, it is to be understood that terms of orientation or positional relationship such as "center", "longitudinal", "lateral", "up", "down", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplification of the description, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be construed as limiting the invention. In addition, the terms "first" and "second" are used only for descriptive purposes and are not to be understood as indicating or implying relative importance.
The basic principle of the invention is low-altitude obstacle detection with a hybrid binocular vision system combining visible light and near-infrared: using the relationship between the visible-light and near-infrared spectral images, salient regions are extracted from each view to obtain candidate obstacle regions; exploiting the similarity between the near-infrared image and the intensity image of the visible spectrum, feature points are found and the obstacle regions are matched; the approximate position of the obstacle is then calculated from the triangle relation. At the same time, a simple and effective avoidance mode is selected according to the shape characteristics of the obstacle.
To give a deeper understanding of the above principle, it is described in detail below in conjunction with the accompanying drawings.
Fig. 1 shows the flow chart of the obstacle detection method based on hybrid binocular vision according to an embodiment of the invention. The method comprises the following steps:
Step S101: acquire a left view and a right view of a scene, wherein the left view is a visible-light image and the right view is a near-infrared image. For example, the left and right views of the scene are acquired by a hybrid binocular acquisition system in which the left and right viewpoints use a visible-light and a near-infrared spectral imaging device respectively; the imaging devices may use parallel optical axes. Since present silicon CCDs respond to the near-infrared band, near-infrared acquisition can be realized by mounting, in front of an ordinary black-and-white camera, an optical filter that blocks visible light and passes near-infrared.
In one embodiment of the invention, a GRAS-50S5C-C is used as the color camera to acquire the left view, and a GRAS-50S5M-C with a filter (blocking visible light, passing near-infrared) mounted in front of the lens is used as the near-infrared camera to acquire the right view. Figs. 2A-2B show the left view (Fig. 2A) and the right view (Fig. 2B) obtained by the acquisition equipment of this embodiment.
Step S102: perform saliency computation on the left view and the right view respectively to obtain a visible-light saliency map corresponding to the left view and a near-infrared saliency map corresponding to the right view.
Generally, for the visible-light saliency map, since the visible-light image has rich color information, the color regions that stand out against the background of the currently acquired image can be estimated as an approximation of human-eye saliency, denoted Sv; the salient regions distinguished in the visible-light image are then taken as possible obstacle regions, denoted Rv.
Specifically, the visible-light saliency map is calculated as follows. The left view is first smoothed by difference-of-Gaussians filtering. The smoothed left view is then converted to a left view based on the Lab color model, i.e. the image is transformed into the L*a*b color space, which better conforms to the rules of human vision. Finally, the visible-light saliency map is obtained from the color means of the L component, a component and b component of the Lab-model left view: under the L*a*b color space, each component is taken as a color-saliency component, and the final color saliency map is computed.
In this embodiment, the color means of the L component, a component and b component are obtained by the following formulas, respectively:
m_L = (1/(N*M)) * Σ_{i=1..N} Σ_{j=1..M} Iv_L(i, j);
m_a = (1/(N*M)) * Σ_{i=1..N} Σ_{j=1..M} Iv_a(i, j);
m_b = (1/(N*M)) * Σ_{i=1..N} Σ_{j=1..M} Iv_b(i, j),
wherein m_L, m_a and m_b are the color means of the L component, a component and b component, respectively; Iv_L, Iv_a and Iv_b are the L component, a component and b component of the left view Iv; N and M are the numbers of rows and columns of the left view; and i and j give the position of the corresponding pixel.
The visible-light saliency map is obtained by the following formula:
Sv(i, j) = [Iv_L(i, j) - m_L]^2 + [Iv_a(i, j) - m_a]^2 + [Iv_b(i, j) - m_b]^2,
wherein Sv is the visible-light saliency map.
Fig. 3A shows a visible-light saliency map calculated in this way.
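The mean-difference saliency computation above can be sketched as follows. This is an illustrative sketch only, not part of the claimed invention; the Lab channels are assumed to have been obtained upstream (e.g. by difference-of-Gaussians smoothing followed by a Lab color conversion):

```python
import numpy as np

def visible_saliency(L, a, b):
    """Sv(i,j) = (L - m_L)^2 + (a - m_a)^2 + (b - m_b)^2.

    L, a, b: 2-D float arrays holding the Lab channels of the
    (smoothed) left view. Returns the visible-light saliency map Sv,
    the per-pixel squared distance from the mean Lab colour.
    """
    m_L, m_a, m_b = L.mean(), a.mean(), b.mean()
    return (L - m_L) ** 2 + (a - m_a) ** 2 + (b - m_b) ** 2
```

Pixels whose colour is far from the image's mean colour thus receive large saliency values, matching the formulas above.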
In another example of the invention, for the near-infrared saliency map, since the near-infrared image also has good high-frequency detail, salient regions can be detected from the similarity of each pixel to its surrounding regions, denoted Snir; the important regions found in Snir are taken as obstacle regions, denoted Rnir.
Specifically, the near-infrared saliency map is calculated as follows:
The right view is divided into mutually disjoint blocks p of 7*7 pixels, and the pixel value of the central pixel of each block is taken as the pixel value of the block in which it lies; that is, for each pixel, its 7*7 neighbourhood is taken as the representative p_i of that point. The dissimilarity of different blocks is then compared according to the distance between their pixel values and the distance between their pixel positions: the distance between the pixel values of different blocks is the Euclidean distance d_v(p_i, p_j) between the pixel vectors formed by stacking the columns of each block, and the distance between the pixel positions is the Euclidean distance d_p(p_i, p_j) between the coordinates of the different central pixels, where p_i and p_j are different blocks. Finally, the saliency of the current pixel is calculated from the K pixels in the right view most similar to it, with K = 30; this is repeated in a loop until the near-infrared saliency map is obtained.
In the above embodiment, the dissimilarity is obtained by the following formula:
d(p_i, p_j) = d_v(p_i, p_j) / (1 + 3 * d_p(p_i, p_j)),
wherein d(p_i, p_j) is the dissimilarity between blocks p_i and p_j; the larger its value, the greater the distance between the two blocks, the greater their difference, and the smaller their similarity.
The saliency is computed by the following formula:
Snir_i = 1 - exp{ -(1/K) * Σ_{k=1..K} d(p_i, p_k) },
wherein Snir_i is the saliency of block p_i, and p_k (k = 1..K) are the K blocks most similar to p_i.
Fig. 3B shows an example of a near-infrared saliency map obtained in this way.
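A brute-force sketch of the patch-dissimilarity computation described above follows. The edge-padding of border patches and the exclusion of a patch from its own neighbour set are implementation assumptions not fixed by the description:

```python
import numpy as np

def nir_saliency(img, patch=7, K=30, c=3.0):
    """Patch-dissimilarity saliency for a single-channel NIR image.

    Each pixel is represented by its patch x patch neighbourhood;
    dissimilarity d = d_v / (1 + c * d_p) combines the pixel-value and
    positional Euclidean distances, and saliency is
    1 - exp(-mean of the K smallest dissimilarities).
    Brute-force O(n^2) over all pixel pairs -- illustration only.
    """
    h, w = img.shape
    r = patch // 2
    padded = np.pad(img, r, mode="edge")
    # one row vector per pixel: its flattened patch
    vecs = np.array([padded[i:i + patch, j:j + patch].ravel()
                     for i in range(h) for j in range(w)])
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n = h * w
    sal = np.empty(n)
    for idx in range(n):
        d_v = np.linalg.norm(vecs - vecs[idx], axis=1)
        d_p = np.linalg.norm(coords - coords[idx], axis=1)
        d = d_v / (1.0 + c * d_p)
        d[idx] = np.inf                       # exclude the pixel itself
        k = min(K, n - 1)
        nearest = np.partition(d, k - 1)[:k]  # K most similar patches
        sal[idx] = 1.0 - np.exp(-nearest.mean())
    return sal.reshape(h, w)
```

A uniform image yields zero saliency everywhere (all dissimilarities vanish), while a patch unlike its most similar neighbours approaches saliency 1, as the Snir formula above intends.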
Step S103: compare the visible-light saliency map and the near-infrared saliency map each against its saliency mean, and determine the obstacle regions in the left view and the right view according to the comparison results. In other words, from the visible-light and near-infrared saliency maps already obtained, each map is compared against its own mean to segment the image and obtain the obstacle regions of the left and right views. In one example of the invention, for safety of avoidance, the axis-aligned bounding rectangle of all the candidate regions, given by the extreme row and column pixel coordinates, is taken, as shown in Figs. 4A-4B.
Specifically, the visible-light saliency map and the near-infrared saliency map are first each compared against their saliency mean; then, according to the comparison results, image segmentation is performed on the visible-light saliency map and the near-infrared saliency map respectively to obtain the obstacle regions in the left view and the right view.
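A minimal sketch of the mean-comparison segmentation and bounding-rectangle extraction just described; the rectangle convention (r0, r1, c0, c1) is an assumption for illustration:

```python
import numpy as np

def obstacle_region(sal):
    """Segment a saliency map at its own mean and return the axis-aligned
    bounding rectangle (r0, r1, c0, c1) of all above-mean pixels,
    or None if no pixel exceeds the mean.
    """
    mask = sal > sal.mean()
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()
```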
Step S104: match the obstacle region of the left view with the obstacle region of the right view to determine the final obstacle region in the left view and the right view.
Specifically, using the obstacle regions obtained in the above embodiment, each obstacle region of the left or right view is taken as a reference and a matching area is sought in the other view. If the matching area found in the other view is itself also an obstacle region, that region is taken as the final obstacle region, as shown in Figs. 5A-5B, where Fig. 5A marks the final obstacle region determined by this method in the left view and Fig. 5B marks the final obstacle region determined by this method in the right view.
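The cross-view confirmation can be sketched as follows. The sum-of-absolute-differences criterion and the horizontal-only sweep are assumptions for illustration; the description only requires that the area matched in the other view itself be a detected obstacle region:

```python
import numpy as np

def confirm_region(left_gray, right_gray, rect_left, rect_right, search=20):
    """Confirm an obstacle region across views (illustrative sketch).

    rect_*: (r0, r1, c0, c1) candidate rectangles. The left-view patch is
    slid horizontally in the right view (sum of absolute differences);
    if the best match lands inside the right view's own candidate
    rectangle, the region is kept as a final obstacle region.
    """
    r0, r1, c0, c1 = rect_left
    tpl = left_gray[r0:r1 + 1, c0:c1 + 1]
    h, w = tpl.shape
    best_c, best_cost = None, np.inf
    for dc in range(-search, search + 1):
        c = c0 + dc
        if c < 0 or c + w > right_gray.shape[1]:
            continue
        cost = np.abs(right_gray[r0:r0 + h, c:c + w] - tpl).sum()
        if cost < best_cost:
            best_cost, best_c = cost, c
    rr0, rr1, rc0, rc1 = rect_right
    return best_c is not None and rc0 <= best_c <= rc1
```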
Step S105: perform disparity computation on the final obstacle region to determine the distance of the obstacle from the acquisition point, and perform edge matching on the obstacle region to determine the shape of the obstacle.
Specifically, to determine the shape of the obstacle, the left view is first converted to a left view based on the RGB color model; the RGB left view is then transformed into the HSV space, the V component is extracted, and the edge features of the V component are matched to obtain the shape of the obstacle.
More specifically, the final obstacle region of the binocular image pair is chosen and disparity computation is performed to obtain the current distance of the obstacle. The left view is first converted to an RGB color image, then transformed into the HSV space, and the V component is extracted; this component is regarded as the image associating the visible-light image with the infrared image. The binocular pair can then be understood as the V-component obstacle region, denoted Rv, and the near-infrared obstacle region, denoted Rnir. Since the near-infrared image largely reflects the high-frequency information of the image, edge features are used for matching to obtain the obstacle shape.
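For rectified parallel-axis cameras, the triangle relation used for the distance estimate reduces to Z = f * B / d. A sketch under that assumption (parameter names are illustrative, not taken from the patent):

```python
def obstacle_distance(disparity_px, focal_px, baseline_m):
    """Distance from the pinhole triangle relation Z = f * B / d.

    disparity_px: horizontal pixel offset of the matched obstacle region
    between the rectified left and right views; focal_px: focal length
    in pixels; baseline_m: camera separation in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, a 10-pixel disparity with a 500-pixel focal length and a 0.2 m baseline gives a distance of 10 m.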
In one embodiment of the invention, after the distance and shape of the final obstacle are determined, they are further analyzed to decide whether the obstacle is width-dominant or height-dominant, so that a suitable avoidance principle can be adopted accordingly to prevent an aircraft or the like from colliding with the obstacle.
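The width-dominant / height-dominant decision might be sketched as below; the mapping from dominance to a particular dodge manoeuvre is an assumption for illustration only, since the patent states merely that a suitable avoidance principle is chosen from the shape:

```python
def avoidance_mode(rect):
    """Classify an obstacle rectangle (r0, r1, c0, c1) as width-dominant
    or height-dominant, suggesting a vertical or lateral dodge
    respectively (the suggested manoeuvres are illustrative)."""
    r0, r1, c0, c1 = rect
    height, width = r1 - r0 + 1, c1 - c0 + 1
    if width >= height:
        return "width-dominant: climb or descend"
    return "height-dominant: dodge sideways"
```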
Correspondingly, as shown in Fig. 6, another aspect of the invention proposes an obstacle detection device based on hybrid binocular vision. Referring to Fig. 6, the obstacle detection device 600 based on hybrid binocular vision of the embodiment of the invention comprises an image acquisition device 610, a saliency image acquisition module 620, an obstacle region determination module 630, a final obstacle region determination module 640 and an obstacle localization module 650.
The image acquisition device 610 acquires a left view and a right view of a scene, wherein the left view is a visible-light image and the right view is a near-infrared image. The saliency image acquisition module 620 performs saliency computation on the left view and the right view to obtain a visible-light saliency map corresponding to the left view and a near-infrared saliency map corresponding to the right view. The obstacle region determination module 630 compares the visible-light saliency map and the near-infrared saliency map each against its saliency mean and determines the obstacle regions in the left view and the right view respectively according to the comparison results. The final obstacle region determination module 640 matches the obstacle region of the left view with the obstacle region of the right view to determine the final obstacle region in the left view and the right view. The obstacle localization module 650 performs disparity computation on the final obstacle region to determine the distance of the obstacle from the acquisition point, and performs edge matching on the obstacle region to determine the shape of the obstacle.
In one embodiment of the invention, the saliency image acquisition module 620 performs saliency computation on the left view to obtain the visible-light saliency map by: smoothing the left view with difference-of-Gaussians filtering; converting the smoothed left view to a left view based on the Lab color model; and obtaining the visible-light saliency map from the color means of the L component, a component and b component of the Lab-model left view.
In the above embodiment, the color means of the L component, a component and b component are obtained by the following formulas, respectively:
m_L = (1/(N*M)) * Σ_{i=1..N} Σ_{j=1..M} Iv_L(i, j);
m_a = (1/(N*M)) * Σ_{i=1..N} Σ_{j=1..M} Iv_a(i, j);
m_b = (1/(N*M)) * Σ_{i=1..N} Σ_{j=1..M} Iv_b(i, j),
wherein m_L, m_a and m_b are the color means of the L component, a component and b component, respectively; Iv_L, Iv_a and Iv_b are the L component, a component and b component of the left view Iv; N and M are the numbers of rows and columns of the left view; and i and j give the position of the corresponding pixel.
The visible-light saliency map is obtained by the following formula:
Sv(i, j) = [Iv_L(i, j) - m_L]^2 + [Iv_a(i, j) - m_a]^2 + [Iv_b(i, j) - m_b]^2,
wherein Sv is the visible-light saliency map.
In another embodiment of the invention, the saliency image acquisition module 620 performs saliency computation on the right view to obtain the near-infrared saliency map by: first dividing the right view into mutually disjoint blocks p of 7*7 pixels, and taking the pixel value of the central pixel of each block as the pixel value of the block in which it lies; then comparing the dissimilarity of different blocks according to the distance between their pixel values and the distance between their pixel positions, wherein the distance between the pixel values of different blocks is the Euclidean distance d_v(p_i, p_j) between the pixel vectors formed by stacking the columns of each block, the distance between the pixel positions is the Euclidean distance d_p(p_i, p_j) between the coordinates of the different central pixels, and p_i and p_j are different blocks; and finally calculating the saliency of the current pixel from the K pixels in the right view most similar to it, this being repeated in a loop until the near-infrared saliency map is obtained, with K = 30.
In the above embodiment, the dissimilarity is obtained by the following formula:
d(p_i, p_j) = d_v(p_i, p_j) / (1 + 3 * d_p(p_i, p_j)),
wherein d(p_i, p_j) is the dissimilarity between blocks p_i and p_j; the larger its value, the greater the distance between the two blocks, the greater their difference, and the smaller their similarity.
The saliency is computed by the following formula:
Snir_i = 1 - exp{ -(1/K) * Σ_{k=1..K} d(p_i, p_k) },
wherein Snir_i is the saliency of block p_i, and p_k (k = 1..K) are the K blocks most similar to p_i.
In one embodiment of the invention, the obstacle region determination module 630 compares the visible-light saliency map and the near-infrared saliency map each against its saliency mean; then, according to the comparison results, it performs image segmentation on the visible-light saliency map and the near-infrared saliency map respectively to obtain the obstacle regions in the left view and the right view.
In a concrete example of the invention, the obstacle localization module 650 converts the left view to a left view based on the RGB color model, transforms the RGB left view into the HSV space, extracts the V component, and matches the edge features of the V component to obtain the shape of the obstacle.
According to the embodiments of the invention, the distance and shape of an obstacle can be detected accurately, and because only a small part of each of the left and right views is matched and analyzed, computational complexity is effectively reduced and detection efficiency is high. Accurate localization and navigation can thus be provided for equipment such as aircraft.
In particular, the above embodiments of the invention have the following advantages:
The device is simple in structure, highly versatile and adaptable to the environment, since near-infrared imaging performs better than visible light in fog and low-light conditions. The method steps are simple and easy to implement, and can effectively handle a variety of obstacle detection problems. In addition, because the method performs matching analysis only on the salient regions of the image, real-time processing capability is improved; and since the method processes only visual images and does not depend on other knowledge, its independence and versatility are improved and it is widely applicable.
In the description of the invention, it should be noted that, unless otherwise explicitly specified and limited, the terms "mounted", "linked" and "connected" are to be understood broadly; for example, a connection may be fixed, detachable or integral; it may be mechanical or electrical; it may be direct, or indirect through an intermediary, or internal between two elements. For those of ordinary skill in the art, the specific meaning of the above terms in the present invention can be understood according to the specific situation.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in conjunction with that embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic statements of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the invention have been shown and described, those of ordinary skill in the art will appreciate that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and purpose of the invention; the scope of the invention is defined by the claims and their equivalents.

Claims (11)

1. An obstacle detection method based on hybrid binocular vision, characterized by comprising the following steps:
Acquiring a left view and a right view of a scene, wherein the left view is a visible-light image and the right view is a near-infrared image;
Performing saliency computation on the left view and the right view respectively to obtain a visible-light saliency map corresponding to the left view and a near-infrared saliency map corresponding to the right view, wherein
Performing saliency computation on the left view to obtain the visible-light saliency map further comprises:
Smoothing the left view by difference-of-Gaussians filtering;
Converting the smoothed left view to a left view based on the Lab color model;
Obtaining the visible-light saliency map from the color means of the L component, a component and b component of the left view based on the Lab color model,
And performing saliency computation on the right view to obtain the near-infrared saliency map further comprises:
Dividing the right view into mutually disjoint blocks p of 7*7 pixels, and taking the pixel value of the central pixel of each block as the pixel value of the block in which it lies;
Comparing the dissimilarity of different blocks according to the distance between the pixel values of the blocks and the distance between the pixel positions, wherein the distance between the pixel values of different blocks is the Euclidean distance d_v(p_i, p_j) between the pixel vectors formed by stacking the columns of each block, the distance between the pixel positions is the Euclidean distance d_p(p_i, p_j) between the coordinates of the different central pixels, and p_i and p_j are different blocks;
Calculating the saliency of a current pixel from the K pixels in the right view most similar to the current pixel, this being repeated in a loop until the near-infrared saliency map is obtained, wherein K = 30;
Comparing the visible-light saliency map and the near-infrared saliency map each against its saliency mean, and determining the obstacle regions in the left view and the right view according to the comparison results;
Matching the obstacle region of the left view with the obstacle region of the right view to determine the final obstacle region in the left view and the right view; and
Described final barrier region is carried out disparity computation with the distance of definite obstacle distance collection point, and described barrier region is carried out edge matching to determine the shape of described barrier.
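The final step of claim 1 recovers the obstacle distance from disparity. As an editorial illustration only (not part of the claims), the standard stereo triangulation relation Z = f·B/d can be sketched as follows; the focal length, baseline and disparity values below are assumed for the example:

```python
def disparity_to_distance(disparity_px, focal_px, baseline_m):
    """Pinhole stereo triangulation: distance Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 0.12 m baseline, 35 px disparity
distance_m = disparity_to_distance(35.0, 700.0, 0.12)  # 2.4 metres
```

Larger disparity means a nearer obstacle, which is why the method only needs disparity inside the matched obstacle region rather than a dense disparity map of the whole view.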
2. The obstacle detection method based on hybrid binocular vision according to claim 1, characterized by further comprising:
analyzing the distance and the shape of the obstacle to determine whether the obstacle is width-dominant or height-dominant.
3. The obstacle detection method based on hybrid binocular vision according to claim 1, characterized in that the color means of the L component, a component and b component are respectively obtained by the following formulas:
m_L = (1 / (N × M)) × Σ_{i=1..N} Σ_{j=1..M} Iv_L(i, j)
m_a = (1 / (N × M)) × Σ_{i=1..N} Σ_{j=1..M} Iv_a(i, j)
m_b = (1 / (N × M)) × Σ_{i=1..N} Σ_{j=1..M} Iv_b(i, j)
wherein m_L, m_a and m_b are respectively the color means of the L component, a component and b component; Iv_L, Iv_a and Iv_b are respectively the L component, a component and b component; Iv is the left view; N is the number of rows of the left view, M is the number of columns of the left view, and i and j are the position of the corresponding pixel;
the visible-light saliency map is obtained by the following formula:
Sv(i, j) = sqrt( (Iv_L(i, j) − m_L)² + (Iv_a(i, j) − m_a)² + (Iv_b(i, j) − m_b)² )
wherein Sv is the visible-light saliency map.
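Claim 3 computes per-channel Lab means and then a per-pixel saliency value. A minimal numpy sketch of that computation (the Euclidean combination of the three channel differences follows the formula structure described above; the input array and its values are illustrative, not from the patent):

```python
import numpy as np

def visible_saliency(lab):
    """lab: N x M x 3 array of (L, a, b) channels, already DoG-smoothed.
    Saliency of each pixel = Euclidean distance from the channel means."""
    means = lab.reshape(-1, 3).mean(axis=0)      # m_L, m_a, m_b
    diff = lab - means                           # Iv_c(i, j) - m_c per channel
    return np.sqrt((diff ** 2).sum(axis=2))      # Sv(i, j)

# A uniform image with one outlier pixel: the outlier is the most salient
lab = np.zeros((4, 4, 3))
lab[0, 0] = [10.0, 0.0, 0.0]
sal = visible_saliency(lab)
```

Because saliency is measured against the global mean color, large uniform background regions score low while compact deviating regions (candidate obstacles) score high.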
4. The obstacle detection method based on hybrid binocular vision according to claim 1, characterized in that the diversity is obtained by the following formula:
d(p_i, p_j) = d_v(p_i, p_j) / (1 + d_p(p_i, p_j))
wherein d(p_i, p_j) is the diversity between blocks p_i and p_j;
the saliency is calculated by the formula:
Snir_i = 1 − exp( −(1/K) × Σ_{k=1..K} d(p_i, p_k) )
wherein Snir_i is the saliency of block p_i.
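Claim 4 builds block diversity from an appearance distance d_v and a position distance d_p, then averages diversity over the K most similar blocks. The patent's exact formulas are in figure images not reproduced in this text, so the sketch below assumes the common forms d = d_v / (1 + d_p) and saliency = 1 − exp(−mean); treat it as a hedged reading rather than the patented formula:

```python
import math

def block_diversity(pi_vals, pj_vals, pi_pos, pj_pos):
    # d_v: Euclidean distance between column-stacked block pixel vectors
    d_v = math.sqrt(sum((a - b) ** 2 for a, b in zip(pi_vals, pj_vals)))
    # d_p: Euclidean distance between the blocks' central-pixel coordinates
    d_p = math.dist(pi_pos, pj_pos)
    # Position distance damps appearance distance: far-apart look-alikes
    # count less than nearby ones
    return d_v / (1.0 + d_p)

def block_saliency(diversities, K=30):
    # Average diversity to the K most similar (lowest-diversity) blocks;
    # a block that resembles many others is not salient
    nearest = sorted(diversities)[:K]
    return 1.0 - math.exp(-sum(nearest) / len(nearest))
```

With coincident positions the diversity reduces to the plain pixel-vector distance, and a block identical to its K nearest neighbours gets saliency 0.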
5. The obstacle detection method based on hybrid binocular vision according to claim 1, characterized in that comparing the saliency means of the visible-light saliency map and the near-infrared saliency map, and determining the obstacle regions in the left view and the right view respectively according to the comparison result, further comprises:
comparing the saliency means of the visible-light saliency map and the near-infrared saliency map; and
performing image segmentation on the visible-light saliency map and the near-infrared saliency map respectively according to the comparison result to obtain the obstacle regions in the left view and the right view.
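Claim 5 segments each saliency map with its mean as the reference value. The claim does not spell out the segmentation rule, so the sketch below assumes a simple threshold at a multiple of the map mean (the `factor` parameter is an illustrative assumption):

```python
import numpy as np

def segment_by_mean(saliency, factor=2.0):
    """Mark pixels whose saliency exceeds factor * mean as obstacle region.
    factor is an assumed parameter, not specified in the claim."""
    return saliency > factor * saliency.mean()

sal = np.array([[0.0, 0.0],
                [0.0, 8.0]])
mask = segment_by_mean(sal)  # only the one high-saliency pixel survives
```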
6. The obstacle detection method based on hybrid binocular vision according to claim 1, characterized in that performing edge matching on the obstacle region to determine the shape of the obstacle further comprises:
transforming the left view into a left view based on the RGB color model; and
transforming the left view based on the RGB color model into the HSV space, extracting the V component, and matching the edge features of the V component to obtain the shape of the obstacle.
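Claim 6 extracts the V component of the HSV representation before edge matching. A minimal standard-library sketch of that extraction step (the input pixels are illustrative; edge matching itself is out of scope here):

```python
import colorsys

def v_component(rgb_pixels):
    """rgb_pixels: iterable of (r, g, b) floats in [0, 1].
    Returns the HSV V channel; for each pixel V = max(r, g, b)."""
    return [colorsys.rgb_to_hsv(r, g, b)[2] for r, g, b in rgb_pixels]

values = v_component([(0.2, 0.8, 0.4), (1.0, 0.0, 0.0)])  # [0.8, 1.0]
```

Using only the V (brightness) channel makes the subsequent edge matching insensitive to hue differences between the visible-light and near-infrared views.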
7. An obstacle detection device based on hybrid binocular vision, characterized by comprising:
an image collection device, configured to collect a left view and a right view of a scene, wherein the left view is a visible-light image and the right view is a near-infrared image;
a saliency image acquisition module, configured to perform saliency calculation on the left view and the right view to obtain a visible-light saliency map corresponding to the left view and a near-infrared saliency map corresponding to the right view, wherein the saliency image acquisition module performs saliency calculation on the left view to obtain the visible-light saliency map by:
smoothing the left view by difference-of-Gaussian filtering;
transforming the smoothed left view into a left view based on the Lab color model; and
obtaining the visible-light saliency map from the color means of the L component, a component and b component of the left view based on the Lab color model;
and performs saliency calculation on the right view to obtain the near-infrared saliency map by:
dividing the right view into a plurality of mutually disjoint blocks p of 7*7 pixels, and taking the pixel value of the central pixel of each block as the pixel value of that block;
comparing the diversity of different blocks according to the distance between the pixel values of the blocks and the distance between their pixel positions, wherein the distance between the pixel values of different blocks is the Euclidean distance d_v(p_i, p_j) between the pixel vectors formed by arranging each block column by column, the distance between the pixel positions is the Euclidean distance d_p(p_i, p_j) between the coordinates of the different central pixels, and p_i and p_j are different blocks; and
calculating the saliency of a current pixel by selecting the K pixels closest to the current pixel in the right view, and repeating until the near-infrared saliency map is obtained, wherein K=30;
an obstacle region determination module, configured to compare the saliency means of the visible-light saliency map and the near-infrared saliency map, and to determine the obstacle regions in the left view and the right view respectively according to the comparison result;
a final obstacle region determination module, configured to match the obstacle region of the left view with the obstacle region of the right view to determine a final obstacle region in the left view and the right view; and
an obstacle locating module, configured to perform disparity calculation on the final obstacle region to determine the distance between the obstacle and the collection point, and to perform edge matching on the obstacle region to determine the shape of the obstacle.
8. The obstacle detection device based on hybrid binocular vision according to claim 7, characterized in that the color means of the L component, a component and b component are respectively obtained by the following formulas:
m_L = (1 / (N × M)) × Σ_{i=1..N} Σ_{j=1..M} Iv_L(i, j)
m_a = (1 / (N × M)) × Σ_{i=1..N} Σ_{j=1..M} Iv_a(i, j)
m_b = (1 / (N × M)) × Σ_{i=1..N} Σ_{j=1..M} Iv_b(i, j)
wherein m_L, m_a and m_b are respectively the color means of the L component, a component and b component; Iv_L, Iv_a and Iv_b are respectively the L component, a component and b component; Iv is the left view; N is the number of rows of the left view, M is the number of columns of the left view, and i and j are the position of the corresponding pixel;
the visible-light saliency map is obtained by the following formula:
Sv(i, j) = sqrt( (Iv_L(i, j) − m_L)² + (Iv_a(i, j) − m_a)² + (Iv_b(i, j) − m_b)² )
wherein Sv is the visible-light saliency map.
9. The obstacle detection device based on hybrid binocular vision according to claim 7, characterized in that the diversity is obtained by the following formula:
d(p_i, p_j) = d_v(p_i, p_j) / (1 + d_p(p_i, p_j))
wherein d(p_i, p_j) is the diversity between blocks p_i and p_j;
the saliency is calculated by the formula:
Snir_i = 1 − exp( −(1/K) × Σ_{k=1..K} d(p_i, p_k) )
wherein Snir_i is the saliency of block p_i.
10. The obstacle detection device based on hybrid binocular vision according to claim 7, characterized in that the obstacle region determination module is configured to compare the saliency means of the visible-light saliency map and the near-infrared saliency map, and to perform image segmentation on the visible-light saliency map and the near-infrared saliency map respectively according to the comparison result to obtain the obstacle regions in the left view and the right view.
11. The obstacle detection device based on hybrid binocular vision according to claim 7, characterized in that the obstacle locating module is configured to transform the left view into a left view based on the RGB color model, transform the left view based on the RGB color model into the HSV space, extract the V component, and match the edge features of the V component to obtain the shape of the obstacle.
CN 201110340035 2011-11-01 2011-11-01 Method and device for detecting barrier based on hybrid binocular vision Active CN102435174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110340035 CN102435174B (en) 2011-11-01 2011-11-01 Method and device for detecting barrier based on hybrid binocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110340035 CN102435174B (en) 2011-11-01 2011-11-01 Method and device for detecting barrier based on hybrid binocular vision

Publications (2)

Publication Number Publication Date
CN102435174A CN102435174A (en) 2012-05-02
CN102435174B true CN102435174B (en) 2013-04-10

Family

ID=45983379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110340035 Active CN102435174B (en) 2011-11-01 2011-11-01 Method and device for detecting barrier based on hybrid binocular vision

Country Status (1)

Country Link
CN (1) CN102435174B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102707724B (en) * 2012-06-05 2015-01-14 清华大学 Visual localization and obstacle avoidance method and system for unmanned plane
CN103514595B (en) * 2012-06-28 2016-03-30 中国科学院计算技术研究所 Detection method for image salient region
CN103679686B (en) * 2012-09-11 2016-06-29 株式会社理光 Match measure method and apparatus, parallax calculation method, image matching method
CN103729840A (en) * 2013-12-09 2014-04-16 广西科技大学 Automatic coach barrier detection method based on binocular vision
CN103714533A (en) * 2013-12-09 2014-04-09 广西科技大学 Method for automatically detecting obstacles based on binocular vision
CN103712602A (en) * 2013-12-09 2014-04-09 广西科技大学 Binocular vision based method for automatic detection of road obstacle
CN103714532A (en) * 2013-12-09 2014-04-09 广西科技大学 Method for automatically detecting obstacles based on binocular vision
KR101601475B1 (en) * 2014-08-25 2016-03-21 현대자동차주식회사 Pedestrian detection device and method for driving vehicle at night
CN104793630A (en) * 2015-05-13 2015-07-22 沈阳飞羽航空科技有限公司 Light airplane comprehensive obstacle avoiding system
CN105091931B (en) * 2015-08-05 2017-07-14 广州杰赛科技股份有限公司 A kind of directional blasting method, sensor and directional blasting device
CN109478320B (en) * 2016-07-12 2022-03-18 深圳市大疆创新科技有限公司 Processing images to obtain environmental information
WO2018095278A1 (en) 2016-11-24 2018-05-31 腾讯科技(深圳)有限公司 Aircraft information acquisition method, apparatus and device
CN106529495B (en) * 2016-11-24 2020-02-07 腾讯科技(深圳)有限公司 Obstacle detection method and device for aircraft
CN106682584B (en) * 2016-12-01 2019-12-20 广州亿航智能技术有限公司 Unmanned aerial vehicle obstacle detection method and device
CN107462217B (en) * 2017-07-07 2020-04-14 北京航空航天大学 Unmanned aerial vehicle binocular vision barrier sensing method for power inspection task
CN107593200B (en) * 2017-10-31 2022-05-27 河北工业大学 Tree plant protection system and method based on visible light-infrared technology
CN107917701A (en) * 2017-12-28 2018-04-17 人加智能机器人技术(北京)有限公司 Measuring method and RGBD camera systems based on active binocular stereo vision
CN108734143A (en) * 2018-05-28 2018-11-02 江苏迪伦智能科技有限公司 A kind of transmission line of electricity online test method based on binocular vision of crusing robot
CN110667474B (en) * 2018-07-02 2021-02-26 北京四维图新科技股份有限公司 General obstacle detection method and device and automatic driving system
CN109543543A (en) * 2018-10-25 2019-03-29 深圳市象形字科技股份有限公司 A kind of auxiliary urheen practitioner's bowing detection method based on computer vision technique
CN109472826A (en) * 2018-10-26 2019-03-15 国网四川省电力公司电力科学研究院 Localization method and device based on binocular vision
CN109711279B (en) * 2018-12-08 2023-06-20 南京赫曼机器人自动化有限公司 Obstacle detection method for agricultural environment
CN110414392B (en) * 2019-07-15 2021-07-20 北京天时行智能科技有限公司 Method and device for determining distance between obstacles
CN110749323B (en) * 2019-10-22 2022-03-18 广州极飞科技股份有限公司 Method and device for determining operation route
CN112504472A (en) * 2020-11-26 2021-03-16 浙江大华技术股份有限公司 Thermal imager, thermal imaging method and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050105044A1 (en) * 2003-11-14 2005-05-19 Laurence Warden Lensometers and wavefront sensors and methods of measuring aberration

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yang Jun et al., "Generation of stereoscopic image pairs," Journal of Computer Applications, 2007, vol. 27, no. 9, pp. 2106–2109. *
Mao Hanping et al., "Tomato target matching based on multi-source machine vision information fusion," Transactions of the Chinese Society of Agricultural Engineering, 2009, vol. 25, no. 10, pp. 142–147. *

Also Published As

Publication number Publication date
CN102435174A (en) 2012-05-02

Similar Documents

Publication Publication Date Title
CN102435174B (en) Method and device for detecting barrier based on hybrid binocular vision
CN103413313B (en) The binocular vision navigation system of electrically-based robot and method
CN104374376B (en) A kind of vehicle-mounted three-dimension measuring system device and application thereof
CN107161141B (en) Unmanned automobile system and automobile
CN106681353B (en) The unmanned plane barrier-avoiding method and system merged based on binocular vision with light stream
CN113255481B (en) Crowd state detection method based on unmanned patrol car
CN104848851B (en) Intelligent Mobile Robot and its method based on Fusion composition
CA2950791C (en) Binocular visual navigation system and method based on power robot
CN100494900C (en) Environment sensing one-eye visual navigating method adapted to self-aid moving vehicle
CN102682292B (en) Method based on monocular vision for detecting and roughly positioning edge of road
CN105225482A (en) Based on vehicle detecting system and the method for binocular stereo vision
CN107167139A (en) A kind of Intelligent Mobile Robot vision positioning air navigation aid and system
CN105512628A (en) Vehicle environment sensing system and method based on unmanned plane
CN105654732A (en) Road monitoring system and method based on depth image
CN103925927B (en) A kind of traffic mark localization method based on Vehicular video
CN104933708A (en) Barrier detection method in vegetation environment based on multispectral and 3D feature fusion
CN103955920A (en) Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
CN103400392A (en) Binocular vision navigation system and method based on inspection robot in transformer substation
CN106019264A (en) Binocular vision based UAV (Unmanned Aerial Vehicle) danger vehicle distance identifying system and method
CN106250816A (en) A kind of Lane detection method and system based on dual camera
CN112308913B (en) Vehicle positioning method and device based on vision and vehicle-mounted terminal
CN111506069B (en) All-weather all-ground crane obstacle identification system and method
CN114755662A (en) Calibration method and device for laser radar and GPS with road-vehicle fusion perception
CN117111085A (en) Automatic driving automobile road cloud fusion sensing method
KR101510745B1 (en) Autonomous vehicle system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant