CN107169418A - Obstacle detection method and device - Google Patents

Obstacle detection method and device

Info

Publication number
CN107169418A
CN107169418A (application CN201710254548.2A)
Authority
CN
China
Prior art keywords
image
disparity image
pixel
obstacle
three-dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710254548.2A
Other languages
Chinese (zh)
Inventor
仲维 (Zhong Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Co Ltd
Original Assignee
Hisense Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Co Ltd
Priority to CN201710254548.2A
Publication of CN107169418A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Abstract

The disclosure provides an obstacle detection method and device. The method includes: obtaining a binocular image of a scene under test and generating a disparity image of the scene from the binocular image; building, according to the disparity value of each pixel in the disparity image, a color image of the same size as the disparity image, where the RGB values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity image; dividing the color image into several candidate regions and, combining the disparity values of the pixels in the disparity image corresponding to each candidate region, determining the three-dimensional spatial information of each candidate region; and determining whether each candidate region is an obstacle region according to its three-dimensional spatial information and preset obstacle three-dimensional-space thresholds. Because the scheme requires no training on collected obstacle feature parameters, detection does not depend on which obstacle classes were used in training, which improves the accuracy of obstacle detection.

Description

Obstacle detection method and device
Technical field
The disclosure relates to the technical field of safe driving, and in particular to an obstacle detection method and device.
Background technology
With the intelligent development of modern society, the safety requirements that governments, public organizations and consumers place on vehicles keep rising, and automatic/assisted driving has in recent years become a technology hotly pursued by car makers and internet companies alike. Against this background, automatic/assisted driving schemes based on multiple sensors such as GPS maps, ultrasound, radar, single cameras and dual cameras have emerged.
However, in existing schemes, collision warning and safety control are mostly implemented by fusing multiple sensors; such systems are relatively complex and costly and therefore hard to deploy in practice. Among the solutions actually adopted, particularly for the collision-warning function of automatic/assisted driving, obstacle detection and distance estimation for pedestrians, vehicles and the like are mostly performed on two-dimensional images from a single camera: a model is trained with feature parameters of different obstacles, and the trained model is then used to detect obstacles in the two-dimensional image.
It follows that all possible obstacles must be collected before detection, and untrained obstacles cannot be detected accurately. Because the collection may be incomplete, some obstacles remain unrecognizable, the detection rate is low, and unavoidable safety hazards arise.
Summary of the invention
To solve the problem in the related art that untrained obstacles cannot be detected accurately, resulting in a low detection rate, the present disclosure provides an obstacle detection method.
The present disclosure provides an obstacle detection method, including:
obtaining a binocular image of a scene under test, and generating a disparity image of the scene from the binocular image;
building, according to the disparity value of each pixel in the disparity image, a color image of the same size as the disparity image, where the RGB values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity image;
dividing the color image into several candidate regions and, combining the disparity values of the pixels in the disparity image corresponding to each candidate region, determining the three-dimensional spatial information of each candidate region;
determining whether each candidate region is an obstacle region according to its three-dimensional spatial information and preset obstacle three-dimensional-space thresholds.
The disclosure further provides an obstacle detection device, including:
an image acquisition module, configured to obtain a binocular image of a scene under test and generate a disparity image of the scene from the binocular image;
an image construction module, configured to build, according to the disparity value of each pixel in the disparity image, a color image of the same size as the disparity image, where the RGB values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity image;
a three-dimensional computation module, configured to divide the color image into several candidate regions and, combining the disparity values of the pixels corresponding to each candidate region, determine the three-dimensional spatial information of each candidate region;
an obstacle determination module, configured to determine whether each candidate region is an obstacle region according to its three-dimensional spatial information and preset obstacle three-dimensional-space thresholds.
The technical scheme provided by the embodiments of the disclosure can bring the following benefits:
With the obstacle detection method and device provided by the disclosure, a binocular image of the scene under test is obtained, a disparity image is synthesized from it, a corresponding color image is generated from the disparity image, and the three-dimensional spatial information of the different regions of the color image is used to judge whether each region of the scene is an obstacle region. The disclosure therefore does not need to collect obstacle feature parameters for training; this avoids the prior-art problem that incomplete feature collection leaves untrained obstacles undetectable, and improves the accuracy of obstacle detection.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings are incorporated into and constitute part of this specification; they show embodiments consistent with the invention and, together with the specification, serve to explain its principles.
Fig. 1 is a schematic diagram of the implementation environment involved in the disclosure;
Fig. 2 is a block diagram of a device according to an exemplary embodiment;
Fig. 3 is a flowchart of an obstacle detection method according to an exemplary embodiment;
Figs. 4a, 4b, 4c, 4d and 4e show the effect of successively processing a two-dimensional image according to another exemplary embodiment;
Fig. 5 is a flowchart detailing step S350 according to an exemplary embodiment;
Fig. 6 is a flowchart detailing step S310 according to an exemplary embodiment;
Figs. 7a, 7b and 7c are schematic diagrams of the stereo matching process according to another exemplary embodiment;
Fig. 8 is a flowchart of an obstacle detection method according to another exemplary embodiment;
Fig. 9 is a block diagram of an obstacle detection device according to an exemplary embodiment;
Fig. 10 is a block diagram detailing the image acquisition module 910 according to an exemplary embodiment of the disclosure;
Fig. 11 is a block diagram detailing the three-dimensional computation module 950 according to an exemplary embodiment of the disclosure.
Embodiments
Exemplary embodiments will now be described in detail, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; they are merely examples of apparatus and methods consistent with some aspects of the invention as detailed in the appended claims.
Fig. 1 is a schematic diagram of the implementation environment involved in the disclosure. The environment includes a binocular camera 110 and an in-vehicle terminal 120; the association between them covers the hardware network association mode and/or protocol, and the data exchanged between them. The binocular camera 110 comprises two parallel cameras mounted at the same height; the left and right cameras each capture a two-dimensional image, together forming the binocular image. The two-dimensional images captured by the left and right cameras are transmitted to the in-vehicle terminal 120; after obtaining them, the terminal can process the images with the obstacle detection method provided by the embodiments of the disclosure and determine the obstacle regions.
Fig. 2 is a block diagram of a device 200 according to an exemplary embodiment. For example, the device 200 may be the in-vehicle terminal 120 of the implementation environment shown in Fig. 1.
Referring to Fig. 2, the device 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, a sensor component 214 and a communication component 216.
The processing component 202 generally controls the overall operation of the device 200, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 202 may include one or more processors 218 to execute instructions so as to complete all or part of the steps of the methods below. In addition, the processing component 202 may include one or more modules to facilitate interaction with other components; for example, a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operation of the device 200; examples of such data include the instructions of any application program or method operated on the device 200. The memory 204 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disc. The memory 204 also stores one or more modules, configured to be executed by the one or more processors 218 to complete all or part of the steps of any of the methods shown in Figs. 3, 5, 6 and 8 below.
The power component 206 supplies power to the various components of the device 200, and may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 200.
The multimedia component 208 includes a screen providing an output interface between the device 200 and the user. In some embodiments the screen may include a liquid crystal display (LCD) and a touch panel. If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the panel; the touch sensors may sense not only the boundary of a touch or swipe action but also its duration and pressure. The screen may also include an organic light-emitting display (OLED).
The audio component 210 is configured to output and/or input audio signals. For example, the audio component 210 includes a microphone (MIC); when the device 200 is in an operating mode such as call mode, recording mode or speech-recognition mode, the microphone is configured to receive external audio signals. The received audio signal may be further stored in the memory 204 or sent via the communication component 216. In some embodiments the audio component 210 also includes a loudspeaker for outputting audio signals.
The sensor component 214 includes one or more sensors for providing status assessments of various aspects of the device 200. For example, the sensor component 214 can detect the open/closed state of the device 200 and the relative positioning of components, and can detect a position change of the device 200 or one of its components as well as a temperature change of the device 200. In some embodiments the sensor component 214 may also include a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the device 200 and other equipment. The device 200 can access a wireless network based on a communication standard, such as WiFi (Wireless Fidelity). In one exemplary embodiment the communication component 216 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment the communication component 216 also includes a near-field communication (NFC) module to promote short-range communication; the NFC module may be implemented based on radio-frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), Bluetooth and other technologies.
In an exemplary embodiment, the device 200 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors, digital signal processing devices, programmable logic devices, field-programmable gate arrays, controllers, microcontrollers, microprocessors or other electronic components, for performing the methods below.
Fig. 3 is a flowchart of an obstacle detection method according to an exemplary embodiment. As to the scope and executing entity of this obstacle detection method, the method is used, for example, by the in-vehicle terminal 120 of the implementation environment shown in Fig. 1. The method can also be used by other vehicles (such as aircraft or ships), smart devices (such as a smart walking stick) and intelligent wearables (such as a smart helmet). As shown in Fig. 3, the obstacle detection method may include the following steps:
Step S310: obtaining a binocular image of the scene under test, and generating a disparity image of the scene from the binocular image;
As a concrete application example, a binocular camera facing the direction of travel may be mounted at the front of the vehicle to capture the binocular image. Figs. 4a and 4b show the two-dimensional images of the scene under test captured by the left and right cameras of the binocular camera, respectively. An image processing engine may be provided in the in-vehicle terminal to generate a disparity image, i.e. a stereoscopic image, from the binocular image captured by the binocular camera; the engine may be realized by a CPU, DSP, GPU, FPGA or dedicated ASIC. Its input is the pair of two-dimensional images captured by the binocular camera, and its output is a stereoscopic image of the same size as the two-dimensional images, as shown in Fig. 4c. The gray value of the stereoscopic image corresponds to the disparity value of the two-dimensional-image pixels after stereo matching. The detailed disparity-generation process of the image processing engine is described below.
Further, after the binocular image of the scene under test is obtained in step S310 and the disparity image of the scene is generated from it, the method may also include the following steps:
Step S321: performing noise reduction and edge extraction on the disparity image;
Specifically, the embedded microprocessor of the in-vehicle terminal can denoise the disparity image with filters such as a Gaussian filter, and extract the edge information of the disparity image with algorithms such as Canny. In the image after edge extraction, the gray value of an edge pixel is the disparity value of that pixel, while the disparity value of every other (non-edge) pixel is 0.
Step S322: performing an image-repair operation on the non-edge pixels of the edge-extracted disparity image, filling in the disparity values of the non-edge pixels so that the disparity values of the pixels of the disparity image vary gradually.
Note that, for the disparity image processed by step S321, an In-Painting (image repair) operation is applied to its non-edge pixels (those whose disparity value is 0): the disparity values of the edge pixels in all directions are used to fill the non-edge pixels (whose disparity value is 0), so that every pixel of the disparity image carries a non-zero disparity value. Fig. 4d shows the disparity image after the In-Painting operation; the disparity value of each pixel is in a gradual-change state, in other words the gray value varies from pixel to pixel rather than being all black or all white.
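The fill step above can be sketched as follows. This is a minimal one-dimensional illustration of the In-Painting idea, interpolating zero-valued non-edge pixels from the nearest edge pixels, not the two-dimensional multi-directional algorithm the text describes; the function name is invented for illustration.

```python
def inpaint_row(disparities):
    """Fill zero-valued (non-edge) disparities by linear interpolation
    between the nearest non-zero (edge) pixels, so values vary gradually.
    A 1-D sketch of the In-Painting step; a real implementation would
    propagate from edge pixels in all directions of the 2-D image."""
    filled = list(disparities)
    known = [i for i, d in enumerate(filled) if d > 0]
    if not known:
        return filled          # nothing to interpolate from
    for i, d in enumerate(filled):
        if d > 0:
            continue
        # nearest known (edge) pixels on each side
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:
            filled[i] = filled[right]
        elif right is None:
            filled[i] = filled[left]
        else:
            t = (i - left) / (right - left)
            filled[i] = filled[left] + t * (filled[right] - filled[left])
    return filled
```

After the fill, no pixel is left at disparity 0 and values change gradually between edges, matching the gradual-change state shown in Fig. 4d.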
Step S330: building, according to the disparity value of each pixel in the disparity image, a color image of the same size as the disparity image, where the RGB values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity image;
Here, the embedded microprocessor of the in-vehicle terminal may further generate, from the disparity image produced by the image processing engine, a color image of the same size as the disparity image, as shown in Fig. 4e. Specifically, the embedded microprocessor creates a new color image of the same size as the disparity image, with a one-to-one correspondence between the pixels of the disparity image and those of the color image, and performs color filling on the corresponding pixels of the color image according to the disparity value of each pixel in the disparity image.
It should be understood that, based on the in-painted disparity image, combined with the distance B between the two cameras of the binocular camera and the lens focal length f, the formula Z = B*f/d (d being the disparity value) gives the depth of each pixel in real three-dimensional space, i.e. its Z value. The corresponding pixels of the color image can then be color-filled according to this depth information. For example, the RGB (three-primary-color) values of each pixel of the color image can be adjusted according to the fluctuation range of the depth over all pixels, so that each pixel's RGB values fluctuate between 0 and 255.
If needed, the color image can also be denoised and smoothed with filters such as a Gaussian filter.
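A minimal sketch of the depth-to-color mapping described above, under the assumption that a single linear rescaling of Z = B*f/d into the 0-255 range is used. The patent only requires that the color values track disparity/depth and stay within 0-255, so the exact mapping below is a free choice, not the claimed method.

```python
def depth_to_gray(disparities, B, f):
    """Map per-pixel disparity d to depth Z = B*f/d, then rescale the
    depths linearly to 0-255 so the result can be used as one color
    channel. The linear rescaling is an illustrative assumption; the
    text only states that the values span 0-255 and track depth."""
    depths = [B * f / d for d in disparities]
    lo, hi = min(depths), max(depths)
    span = (hi - lo) or 1.0  # avoid division by zero for a flat scene
    return [round(255 * (z - lo) / span) for z in depths]
```

In practice the same rescaled value could feed all three RGB channels, or a color map could spread it across them; either satisfies the 0-255 fluctuation the text describes.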
Step S350: dividing the color image into several candidate regions and, combining the disparity values of the pixels in the disparity image corresponding to each candidate region, determining the three-dimensional spatial information of each candidate region;
Note that the color image can be segmented at pixel level, and the segmented image divided into several candidate regions. For each candidate region marked off in this way, its three-dimensional spatial information can be determined from the disparity values of the pixels at the corresponding position of the disparity image. The three-dimensional spatial information includes, among other things, the size and position corresponding to each candidate region.
Fig. 5 is a flowchart detailing step S350 according to an exemplary embodiment. As shown in Fig. 5, step S350 specifically includes:
Step S351: dividing the color image into several candidate regions according to the different colors of its different regions;
Specifically, pixels of the same color are grouped into one region; after this division, vehicles, roads, trees and so on each fall into different regions.
Step S352: calculating the three-dimensional space coordinates of each pixel according to the disparity value of each pixel in the disparity image;
Specifically, the three-dimensional space coordinates of each pixel can be calculated from its disparity value in the disparity image using the following formulas:
Z=B*f/d
X=(W/2-u) * B/d-B/2
Y=H '-(v-H/2) * B/d
Here (X, Y, Z) is the required coordinate value in the world coordinate system, B is the distance between the two cameras of the binocular camera, f is the lens focal length, d is the disparity value, H' is the height of the dual camera above the ground, the disparity image size is (W, H), for example 1280*960, and (u, v) is the pixel's coordinate in the image coordinate system, for example pixel (100, 100).
Since B, f, d, H', (W, H) and (u, v) are all known quantities, the above formulas yield the three-dimensional space coordinate values of the pixels of each divided candidate region.
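The three formulas can be applied directly; the sketch below simply transcribes them, with all calibration quantities passed in as parameters. The numeric values in the test are invented for illustration, not taken from the patent.

```python
def pixel_to_world(u, v, d, B, f, H_prime, W, H):
    """Back-project pixel (u, v) with disparity d into world coordinates
    using the formulas from the description:
        Z = B*f/d
        X = (W/2 - u)*B/d - B/2
        Y = H' - (v - H/2)*B/d
    B: camera baseline, f: focal length, H_prime: camera height above
    ground, (W, H): disparity image size. All inputs are assumed to come
    from calibration; units must be consistent."""
    Z = B * f / d
    X = (W / 2 - u) * B / d - B / 2
    Y = H_prime - (v - H / 2) * B / d
    return X, Y, Z
```

Applying this to every pixel of a candidate region yields the point set from which the region's size and position are derived in step S353.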
Step S353: determining the size and position of each candidate region from the three-dimensional space coordinates of the pixels it contains.
Note that once the three-dimensional space coordinate values (X, Y, Z) of each pixel of each candidate region have been calculated, the length, width and height of each region can be obtained directly as differences of the computed coordinate values, and the spatial position of each region can be determined from the three-dimensional coordinates of all of its pixels.
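One way to read "differences of the computed coordinate values" is per-axis extents (max minus min along each axis), with the per-axis mean as the region's position. The sketch below makes that reading explicit; it is an assumption, since the patent does not fix the exact statistic.

```python
def region_extent(points):
    """Given the (X, Y, Z) world coordinates of a candidate region's
    pixels, return per-axis max-min differences as the region's
    length/width/height, and per-axis means as its position.
    The mean-as-position choice is illustrative."""
    xs, ys, zs = zip(*points)
    size = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    centre = (sum(xs) / len(xs), sum(ys) / len(ys), sum(zs) / len(zs))
    return size, centre
```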
Step S370: determining whether each candidate region is an obstacle region according to its three-dimensional spatial information and the preset three-dimensional-space thresholds of an obstacle.
Note that once the three-dimensional spatial information of each candidate region has been obtained, for example once its size and position have been calculated, an obstacle decision can be made for each divided candidate region against the preset three-dimensional-space thresholds of an obstacle, such as length/width/height thresholds and a position threshold, i.e. it is determined whether each candidate region is an obstacle region. For example, when the length, width and height of a region all exceed the preset thresholds and the region's position lies within the preset obstacle-position range, the region is considered an obstacle region. A candidate region belonging to the road, by contrast, has only length and width but no height, so it is not an obstacle region. The other candidate regions can be judged in the same way.
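A hedged sketch of the decision rule above: all of length, width and height must exceed their thresholds, and the position must fall within the preset range. The threshold values and the single-scalar "position" (read here as forward distance) are illustrative assumptions; the patent does not fix them.

```python
def is_obstacle(region_size, region_pos, size_thresh, pos_range):
    """Decide whether a candidate region is an obstacle region: its
    length/width/height must all exceed the preset thresholds AND its
    position must fall inside the preset obstacle-position range.
    The AND-combination mirrors the road example in the text: a road
    region has length and width but no height, so it is rejected.
    All numeric thresholds here are placeholders."""
    L, Wd, Ht = region_size
    Lt, Wt, Htt = size_thresh
    near, far = pos_range          # e.g. a distance window in metres
    in_range = near <= region_pos <= far
    return L > Lt and Wd > Wt and Ht > Htt and in_range
```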
For obstacle detection, the prior art mainly trains a model with the feature parameters collected from the various obstacles that may exist, then uses the trained model to detect whether obstacles exist in the two-dimensional image of the scene under test; untrained obstacles therefore cannot be detected accurately. The obstacle detection method provided by the above embodiment of the disclosure obtains a binocular image of the scene under test, synthesizes a disparity image from it, generates a corresponding color image from the disparity image, and judges whether each region of the scene is an obstacle region from the three-dimensional spatial information of the different regions of the color image. The disclosure thus needs no training on collected obstacle feature parameters; this avoids the prior-art problem that incomplete feature collection leaves untrained obstacles undetectable, and the scheme provided by the disclosure improves the accuracy of obstacle detection.
Fig. 6 is a flowchart detailing step S310 according to an exemplary embodiment. As shown in Fig. 6, step S310 specifically includes:
Step S311: obtaining the first two-dimensional image and the second two-dimensional image of the scene under test captured by the binocular image acquisition device;
Taking the implementation environment of Fig. 1 as an example, the binocular image acquisition device is the binocular camera 110; the left camera and the right camera each capture a two-dimensional image of the scene under test, distinguished as the first and the second two-dimensional image respectively. The in-vehicle terminal obtains the two-dimensional image captured by the left camera and the two-dimensional image captured by the right camera.
Step S312: taking the first two-dimensional image as the reference image and the second two-dimensional image as the comparison image, performing stereo matching on the first and second two-dimensional images to determine the disparity value corresponding to each pixel of the reference image;
Here, the image captured by the left camera may serve as the reference image and the image captured by the right camera as the comparison image, or, of course, the image captured by the right camera may serve as the reference image and the image captured by the left camera as the comparison image. Stereo matching toward a three-dimensional image is then performed on the comparison image and the reference image.
Specifically, for the comparison image, the central pixels to be stereo-matched are traversed and a window of fixed size (W x H) is set up around each central pixel, as shown in Fig. 7a; the window serves as the minimum computation unit when matching that central pixel against the reference image. For a selected central pixel of the comparison image, the corresponding window is mapped onto the reference image at the same Y coordinate, as shown in Fig. 7b. The window-center pixels of the reference image on the same Y axis are then traversed in order from left to right, and the difference cost is computed with the SAD algorithm (Sum of Absolute Differences of corresponding pixels) or the SSD algorithm (Sum of Squared Differences of corresponding pixels), the corresponding results being saved as shown in Fig. 7c. The window center with the minimum SAD or SSD cost against the comparison image is selected as the match point of that central pixel; the displacement (the difference in x coordinates) between the selected central pixel of the comparison image and its match point in the reference image is the minimum disparity d' shown in Fig. 7c, and its corresponding depth is the distance of that central pixel of the reference image in the three-dimensional image. By traversing all window-center pixels of the comparison image, the disparity value corresponding to each pixel of the reference image is obtained.
Step S313: generating the disparity image of the scene under test by taking the disparity value corresponding to each pixel of the reference image as the gray value of the corresponding pixel of the disparity image.
Specifically, by traversing all window-center pixels of the comparison image, a stereoscopic image of the same size as the reference image, i.e. the disparity image, is computed; the disparity value corresponding to each pixel of the reference image is then stored as the gray value of the corresponding pixel of the stereoscopic image.
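The SAD window search can be sketched in one dimension, since candidate matches lie on the same scanline. This toy version matches a reference row against a comparison row with a 1-D window; real code uses two-dimensional W x H windows over the full images, and the parameter names and defaults here are invented.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity_row(ref_row, cmp_row, win=3, max_d=8):
    """For each window centre in the reference row, search the comparison
    row along the same scanline and keep the shift with the smallest SAD
    cost; that shift is the pixel's disparity (stored as a gray value).
    A 1-D sketch of the fixed-size window matching described above."""
    half = win // 2
    out = []
    for u in range(half, len(ref_row) - half):
        ref_win = ref_row[u - half:u + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_d + 1):
            if u - half - d < 0:       # candidate window leaves the image
                break
            cand = cmp_row[u - half - d:u + half + 1 - d]
            cost = sad(ref_win, cand)
            if cost < best_cost:
                best_cost, best_d = cost, d
        out.append(best_d)
    return out
```

A feature shifted by two pixels between the rows should come out with disparity 2 at and around its location, while featureless stretches default to 0; an SSD variant would only change `abs(x - y)` to `(x - y) ** 2`.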
Further, on the basis of the above exemplary embodiment, after step S370 determines whether each candidate region is an obstacle region, as shown in Figure 8, the obstacle detection method provided by the disclosure may further include the following steps:
Step 810: If an obstacle region is determined to exist, calculate the depth value of each pixel of the obstacle region according to the disparity value of each pixel corresponding to the obstacle region in the disparity image;
It should be noted that if, in step S370, a candidate region is determined to be an obstacle region, the formula Z = B*f/d can be used, where d is the disparity value, B is the distance between the two cameras of the binocular camera, and f is the focal length of the camera lens. Since the disparity value of each pixel corresponding to the obstacle region is provided by the disparity image, i.e. d is a known quantity, the depth value of each pixel of the obstacle region, namely the Z value of each pixel, can be calculated.
Step 811: Average the depth values of all pixels of the obstacle region to obtain the actual distance to the obstacle in the scene to be measured;
Specifically, for an obstacle region, after the depth values of all pixels corresponding to the obstacle region are calculated, the actual distance to the obstacle, for example the distance between a traveling vehicle and the obstacle, can be obtained by computing the average of those depth values.
Step 812: Estimate the predicted time of a collision with the obstacle according to the actual distance to the obstacle in the scene to be measured, and send an alarm signal when the predicted time is less than a risk threshold.
It is to be understood that after the actual distance to the obstacle is calculated, the time at which a collision with the obstacle may occur can be estimated from the vehicle speed. When the estimated time to collision (e.g. a collision after 1.5 seconds) is less than the risk threshold (e.g. 2 seconds), it is judged that a collision with the obstacle ahead is likely, and an alarm signal can be sent to an alarm terminal to realize the early-warning function.
Alternatively, the relative velocity with respect to the obstacle can be calculated from the real-time change of the actual distance, the time at which a collision may occur can then be estimated, and an alarm signal is sent when that time is less than the risk threshold.
Further, on the basis of any of the above exemplary embodiments, after step S370 determines whether each candidate region is an obstacle region, the obstacle detection method provided by the disclosure may further include:
If an obstacle region is determined to exist, determining the class of the obstacle according to the gray-value feature information or color-value feature information corresponding to the obstacle region in the binocular image.
It should be noted that if a candidate region is determined to be an obstacle region, the class of the obstacle can be determined with reference to the two-dimensional image collected by the binocular image acquisition device, according to the gray-value or color-value feature information at the corresponding position of the obstacle region in the two-dimensional image. For example, a region whose color is green may represent trees. By analogy, different obstacles have their respective gray or color features, so by obtaining the gray-value feature information and color-value information of the obstacle region in the two-dimensional image, the class of the obstacle, whether a person, a vehicle, or another obstacle, can be determined, further improving the accuracy of obstacle detection. If this method is applied to the traffic and transportation field, the stability and reliability of safe driving can also be improved.
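As a toy illustration of this idea only: the mean-color feature, the thresholds, and the class set below are all assumptions, since the disclosure does not specify a classification rule.

```python
def classify_obstacle(mean_rgb):
    """Map the average color of the obstacle region in the two-dimensional
    image to a coarse obstacle class, mirroring the text's example that a
    green region may represent trees."""
    r, g, b = mean_rgb
    if g > r and g > b:
        return "tree"   # green-dominant region, per the text's example
    return "other"      # e.g. person or vehicle: would need richer features
```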
The following are device embodiments of the disclosure, which can be used to perform the embodiments of the obstacle detection method performed by the above vehicle-mounted terminal 120 of the disclosure. For details not disclosed in the device embodiments of the disclosure, please refer to the embodiments of the obstacle detection method of the disclosure.
Fig. 9 is a block diagram of an obstacle detection device according to an exemplary embodiment. The obstacle detection device can be used in the vehicle-mounted terminal 120 of the implementation environment shown in Fig. 1 to perform all or part of the steps of the obstacle detection method shown in any of Fig. 3, Fig. 5, Fig. 6, and Fig. 8. As shown in Fig. 9, the obstacle detection device includes but is not limited to: an image acquisition module 910, an image construction module 930, a three-dimensional computation module 950, and an obstacle determination module 970;
the image acquisition module 910, configured to obtain a binocular image of the scene to be measured and to generate a disparity image of the scene to be measured using the binocular image;
the image construction module 930, configured to construct, according to the disparity value of each pixel in the disparity image, a color image of the same size as the disparity image, wherein the three primary color values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity image;
the three-dimensional computation module 950, configured to divide the color image into several candidate regions and, in combination with the disparity values of the pixels corresponding to each candidate region in the disparity image, determine the three-dimensional spatial information of each candidate region;
the obstacle determination module 970, configured to determine, according to the three-dimensional spatial information of each candidate region and a preset three-dimensional spatial information threshold of obstacles, whether each candidate region is an obstacle region.
The functions of the modules in the above device and the implementation processes of their effects are described in detail in the implementation processes of the corresponding steps in the above obstacle detection method, and will not be repeated here.
The image acquisition module 910 can be, for example, a physical structure such as the communication component 216 in Fig. 2.
The image construction module 930, the three-dimensional computation module 950, and the obstacle determination module 970 can also be functional modules configured to perform the corresponding steps in the above obstacle detection method. It can be understood that these modules can be realized by hardware, software, or a combination of the two. When realized in hardware, these modules may be embodied as one or more hardware modules, such as one or more application-specific integrated circuits. When realized in software, these modules may be embodied as one or more computer programs executed on one or more processors, such as a program stored in the memory 204 and executed by the processor 218 of Fig. 2.
Fig. 10 is a block diagram describing the details of the image acquisition module 910 according to an exemplary embodiment of the disclosure. As shown in Fig. 10, the image acquisition module 910 can include but is not limited to:
an image collection unit 911, configured to obtain a first two-dimensional image and a second two-dimensional image of the scene to be measured collected by the binocular image acquisition device;
a stereo matching unit 912, configured to take the first two-dimensional image as the benchmark image and the second two-dimensional image as the comparison image, perform stereo matching processing on the first two-dimensional image and the second two-dimensional image, and determine the disparity value corresponding to each pixel of the benchmark image;
an image generation unit 913, configured to generate the disparity image of the scene to be measured by taking the disparity value corresponding to each pixel of the benchmark image as the gray value of the corresponding pixel of the disparity image.
Optionally, the device can also include but is not limited to:
an edge extraction module, configured to perform noise reduction processing and edge extraction operation processing on the disparity image;
an image repair module, configured to perform an image repair operation on the non-edge pixels of the disparity image after the edge extraction operation processing, filling in the disparity values of the non-edge pixels so that the disparity values of the pixels of the disparity image are in a gradual-change state.
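The repair operation can be sketched as a row-wise interpolation between edge pixels. The use of linear interpolation here is an assumption, since the disclosure only requires that the filled disparity values vary gradually between edges.

```python
import numpy as np

def fill_non_edge(disparity, edge_mask):
    """Keep the disparity values at edge pixels and fill every non-edge
    pixel by linear interpolation along its row, so that the disparity
    values of the repaired image are in a gradual-change state."""
    out = disparity.astype(np.float64).copy()
    cols = np.arange(out.shape[1])
    for y in range(out.shape[0]):
        known = np.flatnonzero(edge_mask[y])  # column indices of edge pixels
        if known.size == 0:
            continue  # no edge pixel in this row: nothing to interpolate from
        out[y] = np.interp(cols, known, out[y, known])
    return out
```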
Fig. 11 is a block diagram describing the details of the three-dimensional computation module 950 according to an exemplary embodiment of the disclosure. As shown in Fig. 11, the three-dimensional computation module 950 can include but is not limited to:
an image segmentation unit 951, configured to divide the color image into several regions according to the different colors of the different regions of the color image;
a coordinate calculation unit 952, configured to calculate the three-dimensional space coordinates of each pixel according to the disparity value of each pixel in the disparity image;
a region calculation unit 953, configured to determine the size and position of each candidate region according to the three-dimensional space coordinates of the pixels contained in each candidate region.
On the basis of any of the above exemplary embodiments, optionally, the device can also include but is not limited to:
a depth calculation module, configured to, when an obstacle region is determined to exist, calculate the depth value of each pixel of the obstacle region according to the disparity value of each pixel corresponding to the obstacle region in the disparity image;
a distance calculation module, configured to average the depth values of all pixels of the obstacle region to obtain the actual distance to the obstacle in the scene to be measured;
an obstacle early-warning module, configured to estimate the predicted time of a collision with the obstacle according to the actual distance to the obstacle in the scene to be measured, and to send an alarm signal when the predicted time is less than a risk threshold.
On the basis of any of the above exemplary embodiments, optionally, the device can also include but is not limited to:
an obstacle recognition module, configured to, when an obstacle region is determined to exist, determine the class of the obstacle according to the gray-value feature information or color-value feature information corresponding to the obstacle region in the binocular image.
Optionally, the disclosure also provides an obstacle detection device that can be used in the vehicle-mounted terminal 120 of the implementation environment shown in Fig. 1 to perform all or part of the steps of the obstacle detection method shown in any of Fig. 3, Fig. 5, Fig. 6, and Fig. 8. The device includes:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform:
obtaining a binocular image of a scene to be measured, and generating a disparity image of the scene to be measured using the binocular image;
constructing, according to the disparity value of each pixel in the disparity image, a color image of the same size as the disparity image, wherein the three primary color values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity image;
dividing the color image into several candidate regions, and determining the three-dimensional spatial information of each candidate region in combination with the disparity values of the pixels corresponding to each candidate region in the disparity image;
determining, according to the three-dimensional spatial information of each candidate region and a preset three-dimensional spatial information threshold of obstacles, whether each candidate region is an obstacle region.
The specific manner in which the processor of the device in this embodiment performs operations has been described in detail in the embodiments concerning the obstacle detection method, and will not be elaborated here.
In an exemplary embodiment, a storage medium is also provided. The storage medium is a computer-readable storage medium, for example a transitory or non-transitory computer-readable storage medium including instructions. The storage medium includes, for example, the memory 204 storing instructions, and the above instructions can be executed by the processor 218 of the device 200 to complete the above obstacle detection method.
It should be understood that the invention is not limited to the precise structures described above and shown in the drawings, and various modifications and changes can be made without departing from its scope. The scope of the invention is limited only by the appended claims.

Claims (12)

1. An obstacle detection method, characterized by comprising:
obtaining a binocular image of a scene to be measured, and generating a disparity image of the scene to be measured using the binocular image;
constructing, according to the disparity value of each pixel in the disparity image, a color image of the same size as the disparity image, wherein the three primary color values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity image;
dividing the color image into several candidate regions, and determining the three-dimensional spatial information of each candidate region in combination with the disparity values of the pixels corresponding to each candidate region in the disparity image;
determining, according to the three-dimensional spatial information of each candidate region and a preset three-dimensional spatial information threshold of obstacles, whether each candidate region is an obstacle region.
2. The method according to claim 1, characterized in that obtaining a binocular image of a scene to be measured and generating a disparity image of the scene to be measured using the binocular image comprises:
obtaining a first two-dimensional image and a second two-dimensional image of the scene to be measured collected by a binocular image acquisition device;
taking the first two-dimensional image as a benchmark image and the second two-dimensional image as a comparison image, performing stereo matching processing on the first two-dimensional image and the second two-dimensional image, and determining the disparity value corresponding to each pixel of the benchmark image;
generating the disparity image of the scene to be measured by taking the disparity value corresponding to each pixel of the benchmark image as the gray value of the corresponding pixel of the disparity image.
3. The method according to claim 1, characterized in that after obtaining the binocular image of the scene to be measured and generating the disparity image of the scene to be measured using the binocular image, the method further comprises:
performing noise reduction processing and edge extraction operation processing on the disparity image;
performing an image repair operation on the non-edge pixels of the disparity image after the edge extraction operation processing, filling in the disparity values of the non-edge pixels so that the disparity values of the pixels of the disparity image are in a gradual-change state.
4. The method according to claim 1, characterized in that dividing the color image into several candidate regions and determining the three-dimensional spatial information of each candidate region in combination with the disparity values of the pixels corresponding to each candidate region in the disparity image comprises:
dividing the color image into several candidate regions according to the different colors of the different regions of the color image;
calculating the three-dimensional space coordinates of each pixel according to the disparity value of each pixel in the disparity image;
determining the size and position of each candidate region according to the three-dimensional space coordinates of the pixels contained in each candidate region.
5. The method according to claim 1, characterized in that after determining whether each candidate region is an obstacle region, the method further comprises:
if an obstacle region is determined to exist, calculating the depth value of each pixel of the obstacle region according to the disparity value of each pixel corresponding to the obstacle region in the disparity image;
averaging the depth values of all pixels of the obstacle region to obtain the actual distance to the obstacle in the scene to be measured;
estimating the predicted time of a collision with the obstacle according to the actual distance to the obstacle in the scene to be measured, and sending an alarm signal when the predicted time is less than a risk threshold.
6. The method according to claim 1, characterized in that after determining whether each candidate region is an obstacle region, the method further comprises:
if an obstacle region is determined to exist, determining the class of the obstacle according to the gray-value feature information or color-value feature information corresponding to the obstacle region in the binocular image.
7. An obstacle detection device, characterized by comprising:
an image acquisition module, configured to obtain a binocular image of a scene to be measured and to generate a disparity image of the scene to be measured using the binocular image;
an image construction module, configured to construct, according to the disparity value of each pixel in the disparity image, a color image of the same size as the disparity image, wherein the three primary color values of each pixel of the color image are related to the disparity value of the corresponding pixel in the disparity image;
a three-dimensional computation module, configured to divide the color image into several candidate regions and, in combination with the disparity values of the pixels corresponding to each candidate region in the disparity image, determine the three-dimensional spatial information of each candidate region;
an obstacle determination module, configured to determine, according to the three-dimensional spatial information of each candidate region and a preset three-dimensional spatial information threshold of obstacles, whether each candidate region is an obstacle region.
8. The device according to claim 7, characterized in that the image acquisition module comprises:
an image collection unit, configured to obtain a first two-dimensional image and a second two-dimensional image of the scene to be measured collected by a binocular image acquisition device;
a stereo matching unit, configured to take the first two-dimensional image as a benchmark image and the second two-dimensional image as a comparison image, perform stereo matching processing on the first two-dimensional image and the second two-dimensional image, and determine the disparity value corresponding to each pixel of the benchmark image;
an image generation unit, configured to generate the disparity image of the scene to be measured by taking the disparity value corresponding to each pixel of the benchmark image as the gray value of the corresponding pixel of the disparity image.
9. The device according to claim 7, characterized in that the device further comprises:
an edge extraction module, configured to perform noise reduction processing and edge extraction operation processing on the disparity image;
an image repair module, configured to perform an image repair operation on the non-edge pixels of the disparity image after the edge extraction operation processing, filling in the disparity values of the non-edge pixels so that the disparity values of the pixels of the disparity image are in a gradual-change state.
10. The device according to claim 7, characterized in that the three-dimensional computation module comprises:
an image segmentation unit, configured to divide the color image into several regions according to the different colors of the different regions of the color image;
a coordinate calculation unit, configured to calculate the three-dimensional space coordinates of each pixel according to the disparity value of each pixel in the disparity image;
a region calculation unit, configured to determine the size and position of each candidate region according to the three-dimensional space coordinates of the pixels contained in each candidate region.
11. The device according to claim 7, characterized in that the device further comprises:
a depth calculation module, configured to, when an obstacle region is determined to exist, calculate the depth value of each pixel of the obstacle region according to the disparity value of each pixel corresponding to the obstacle region in the disparity image;
a distance calculation module, configured to average the depth values of all pixels of the obstacle region to obtain the actual distance to the obstacle in the scene to be measured;
an obstacle early-warning module, configured to estimate the predicted time of a collision with the obstacle according to the actual distance to the obstacle in the scene to be measured, and to send an alarm signal when the predicted time is less than a risk threshold.
12. The device according to claim 7, characterized in that the device further comprises:
an obstacle recognition module, configured to, when an obstacle region is determined to exist, determine the class of the obstacle according to the gray-value feature information or color-value feature information corresponding to the obstacle region in the binocular image.
CN201710254548.2A 2017-04-18 2017-04-18 A kind of obstacle detection method and device Pending CN107169418A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710254548.2A CN107169418A (en) 2017-04-18 2017-04-18 A kind of obstacle detection method and device

Publications (1)

Publication Number Publication Date
CN107169418A true CN107169418A (en) 2017-09-15

Family

ID=59812183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710254548.2A Pending CN107169418A (en) 2017-04-18 2017-04-18 A kind of obstacle detection method and device

Country Status (1)

Country Link
CN (1) CN107169418A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319910A (en) * 2018-01-30 2018-07-24 海信集团有限公司 A kind of vehicle identification method, device and terminal
CN108446622A (en) * 2018-03-14 2018-08-24 海信集团有限公司 Detecting and tracking method and device, the terminal of target object
CN108520536A (en) * 2018-03-27 2018-09-11 海信集团有限公司 A kind of generation method of disparity map, device and terminal
CN109074668A (en) * 2018-08-02 2018-12-21 深圳前海达闼云端智能科技有限公司 Method for path navigation, relevant apparatus and computer readable storage medium
CN109460709A (en) * 2018-10-12 2019-03-12 南京大学 The method of RTG dysopia analyte detection based on the fusion of RGB and D information
CN109917419A (en) * 2019-04-12 2019-06-21 中山大学 A kind of depth fill-in congestion system and method based on laser radar and image
CN110378168A (en) * 2018-04-12 2019-10-25 海信集团有限公司 The method, apparatus and terminal of polymorphic type barrier fusion
CN110598505A (en) * 2018-06-12 2019-12-20 海信集团有限公司 Method and device for judging suspension state of obstacle and terminal
CN110992424A (en) * 2019-11-27 2020-04-10 苏州智加科技有限公司 Positioning method and system based on binocular vision
CN111382591A (en) * 2018-12-27 2020-07-07 海信集团有限公司 Binocular camera ranging correction method and vehicle-mounted equipment
CN111664798A (en) * 2020-04-29 2020-09-15 深圳奥比中光科技有限公司 Depth imaging method and device and computer readable storage medium
CN111899170A (en) * 2020-07-08 2020-11-06 北京三快在线科技有限公司 Obstacle detection method and device, unmanned aerial vehicle and storage medium
CN112164058A (en) * 2020-10-13 2021-01-01 东莞市瑞图新智科技有限公司 Silk-screen area coarse positioning method and device for optical filter and storage medium
CN114119700A (en) * 2021-11-26 2022-03-01 山东科技大学 Obstacle ranging method based on U-V disparity map
CN115257767A (en) * 2022-09-26 2022-11-01 中科慧眼(天津)研究开发有限公司 Pavement obstacle height measurement method and system based on plane target
CN116912403A (en) * 2023-07-03 2023-10-20 上海鱼微阿科技有限公司 XR equipment and obstacle information sensing method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110073917A (en) * 2009-12-24 2011-06-30 중앙대학교 산학협력단 Apparatus and method for detecting obstacle for on-line electric vehicle based on gpu
CN103268604A (en) * 2013-05-10 2013-08-28 清华大学 Binocular video depth map calculating method
CN103679707A (en) * 2013-11-26 2014-03-26 西安交通大学 Binocular camera disparity map based road obstacle detection system and method
CN104915943A (en) * 2014-03-12 2015-09-16 株式会社理光 Method and apparatus for determining main disparity value in disparity map

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
储珺 et al.: "Tree-structured dynamic programming stereo matching algorithm based on linear filtering", Acta Automatica Sinica *
崔燕茹: "Obstacle recognition and reconstruction based on binocular vision", China Masters' Theses Full-text Database, Information Science and Technology *
龚文: "Stereo matching algorithm based on dynamic programming", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319910B (en) * 2018-01-30 2021-11-16 海信集团有限公司 Vehicle identification method and device and terminal
CN108319910A (en) * 2018-01-30 2018-07-24 海信集团有限公司 A kind of vehicle identification method, device and terminal
CN108446622A (en) * 2018-03-14 2018-08-24 海信集团有限公司 Detecting and tracking method and device, the terminal of target object
CN108520536A (en) * 2018-03-27 2018-09-11 海信集团有限公司 A kind of generation method of disparity map, device and terminal
CN108520536B (en) * 2018-03-27 2022-01-11 海信集团有限公司 Disparity map generation method and device and terminal
CN110378168B (en) * 2018-04-12 2023-05-30 海信集团有限公司 Method, device and terminal for fusing multiple types of barriers
CN110378168A (en) * 2018-04-12 2019-10-25 海信集团有限公司 The method, apparatus and terminal of polymorphic type barrier fusion
CN110598505A (en) * 2018-06-12 2019-12-20 海信集团有限公司 Method and device for judging suspension state of obstacle and terminal
CN109074668A (en) * 2018-08-02 2018-12-21 深圳前海达闼云端智能科技有限公司 Method for path navigation, relevant apparatus and computer readable storage medium
CN109460709B (en) * 2018-10-12 2020-08-04 南京大学 RTG visual barrier detection method based on RGB and D information fusion
CN109460709A (en) * 2018-10-12 2019-03-12 南京大学 The method of RTG dysopia analyte detection based on the fusion of RGB and D information
CN111382591A (en) * 2018-12-27 2020-07-07 海信集团有限公司 Binocular camera ranging correction method and vehicle-mounted equipment
CN111382591B (en) * 2018-12-27 2023-09-29 海信集团有限公司 Binocular camera ranging correction method and vehicle-mounted equipment
CN109917419A (en) * 2019-04-12 2019-06-21 中山大学 A kind of depth fill-in congestion system and method based on laser radar and image
CN109917419B (en) * 2019-04-12 2021-04-13 中山大学 Depth filling dense system and method based on laser radar and image
CN110992424B (en) * 2019-11-27 2022-09-02 苏州智加科技有限公司 Positioning method and system based on binocular vision
CN110992424A (en) * 2019-11-27 2020-04-10 苏州智加科技有限公司 Positioning method and system based on binocular vision
CN111664798A (en) * 2020-04-29 2020-09-15 深圳奥比中光科技有限公司 Depth imaging method and device and computer readable storage medium
CN111899170A (en) * 2020-07-08 2020-11-06 北京三快在线科技有限公司 Obstacle detection method and device, unmanned aerial vehicle and storage medium
CN112164058A (en) * 2020-10-13 2021-01-01 东莞市瑞图新智科技有限公司 Silk-screen area coarse positioning method and device for optical filter and storage medium
CN114119700A (en) * 2021-11-26 2022-03-01 山东科技大学 Obstacle ranging method based on U-V disparity map
CN114119700B (en) * 2021-11-26 2024-03-29 山东科技大学 Obstacle ranging method based on U-V disparity map
CN115257767A (en) * 2022-09-26 2022-11-01 中科慧眼(天津)研究开发有限公司 Pavement obstacle height measurement method and system based on plane target
CN115257767B (en) * 2022-09-26 2023-02-17 中科慧眼(天津)研究开发有限公司 Road surface obstacle height measurement method and system based on plane target
CN116912403A (en) * 2023-07-03 2023-10-20 上海鱼微阿科技有限公司 XR equipment and obstacle information sensing method thereof

Similar Documents

Publication Publication Date Title
CN107169418A (en) A kind of obstacle detection method and device
CN105711597B (en) Front locally travels context aware systems and method
CN107341454A (en) The detection method and device of barrier, electronic equipment in a kind of scene
US20220043449A1 (en) Multi-channel sensor simulation for autonomous control systems
CN107392103A (en) The detection method and device of road surface lane line, electronic equipment
US11841434B2 (en) Annotation cross-labeling for autonomous control systems
CN106845547B (en) A kind of intelligent automobile positioning and road markings identifying system and method based on camera
CN111448478B (en) System and method for correcting high-definition maps based on obstacle detection
CN105313782B (en) Vehicle travel assist system and its method
CN103852067B (en) The method for adjusting the operating parameter of flight time (TOF) measuring system
CN107358168A (en) A kind of detection method and device in vehicle wheeled region, vehicle electronic device
CN108446622A (en) Detecting and tracking method and device, the terminal of target object
CN104902258A (en) Multi-scene pedestrian volume counting method and system based on stereoscopic vision and binocular camera
US20130265419A1 (en) System and method for available parking space estimation for multispace on-street parking
CN108027877A (en) System and method for the detection of non-barrier
CN110799982A (en) Method and system for object-centric stereo vision in an autonomous vehicle
CN105006175B (en) The method and system of the movement of initiative recognition traffic participant and corresponding motor vehicle
CN104378582A (en) Intelligent video analysis system and method based on PTZ video camera cruising
CN104335244A (en) Object recognition device
CN107977654B (en) Road area detection method, device and terminal
CN106096512A (en) Utilize the detection device and method that vehicles or pedestrians are identified by depth camera
WO2020241294A1 (en) Signal processing device, signal processing method, and ranging module
CN107356916B (en) Vehicle distance detecting method and device, electronic equipment, computer readable storage medium
Itu et al. An efficient obstacle awareness application for android mobile devices
CN107844749A (en) Pavement detection method and device, electronic equipment, storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170915