CN103955920A - Binocular vision obstacle detection method based on three-dimensional point cloud segmentation - Google Patents
Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
- Publication number
- CN103955920A CN103955920A CN201410146864.4A CN201410146864A CN103955920A CN 103955920 A CN103955920 A CN 103955920A CN 201410146864 A CN201410146864 A CN 201410146864A CN 103955920 A CN103955920 A CN 103955920A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- dimensional point
- height
- road surface
- video camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
The invention provides a binocular vision obstacle detection method based on three-dimensional point cloud segmentation. The method comprises: synchronously collecting images from two cameras of the same specification, calibrating and rectifying the binocular camera, and calculating the three-dimensional point cloud segmentation thresholds; obtaining a three-dimensional point cloud through a stereo matching algorithm and three-dimensional reconstruction, and segmenting the reference image into image patches; automatically detecting the road surface height in the three-dimensional point cloud and segmenting the cloud with the calculated thresholds into a road surface point cloud, obstacle point clouds at different positions and unknown-region point clouds; and combining the segmented point clouds with the segmented image patches to judge the correctness of the obstacles and the road surface and to determine the position ranges of the obstacles, the road surface and the unknown regions. The method can still detect the camera pitch and the road surface height in complex environments, automatically estimates the three-dimensional segmentation thresholds, segments out the obstacle, road surface and unknown-region point clouds, fuses color information from color image segmentation to judge the correctness of the obstacles and the road surface, determines the position ranges of the obstacles, the road surface and the unknown regions, and achieves highly robust obstacle detection, giving it high reliability and practicability.
Description
Technical field
The present invention relates to the field of automatic obstacle detection based on binocular stereo vision for autonomous mobile robots, automatic driving and the like, and in particular to a binocular vision obstacle detection method based on three-dimensional point cloud segmentation.
Background technology
Binocular stereo vision is an important branch of computer vision. It directly imitates the way human eyes perceive a scene, is simple and reliable, and has application value in many fields such as robot navigation, aerial survey, three-dimensional measurement, intelligent transportation and virtual reality. In binocular stereo vision, two cameras at different positions (or one camera moved or rotated) capture the same scene, and the three-dimensional coordinates of a spatial point are obtained by computing its parallax between the two images. In research fields such as autonomous mobile robots and automatic driving, a basic prerequisite is that the robot or autonomous vehicle is equipped with a vision device. This vision system should perform the basic functions of the human eye and provide reliable visual information for path planning. Detecting obstacles and obtaining their range information is a key component of autonomous mobile robots and automatic driving. The keys of obstacle detection based on binocular stereo vision are: (1) stereo matching of the binocular image pair to compute the three-dimensional point cloud; (2) extraction of obstacle targets to identify obstacle positions and sizes. The application environments of autonomous mobile robots and automatic driving are complex; affected by optical distortion and noise, specular reflection on smooth surfaces, foreshortening, perspective distortion, low texture, repeated texture, transparent objects, and overlapping and discontinuous regions, a dense disparity map cannot be guaranteed from stereo matching. In addition, under complex road conditions, road detection from a single image relying only on empirical road color values or lane line detection is not sufficiently reliable and cannot be applied directly in practice.
With the continuous deepening of machine vision research, there are now many efficient stereo matching algorithms that obtain relatively dense disparity maps, and they have been applied in practical scenes such as three-dimensional reconstruction, intelligent transportation and intelligent measurement. Image segmentation is the technique and process of dividing an image into several specific regions with distinctive properties and extracting the targets of interest; it is the key step from image processing to image analysis. Researchers have continuously improved the original color image segmentation methods, introduced new theories and methods from other disciplines into image segmentation, and proposed many new segmentation methods adapted to specific occasions.
Summary of the invention
The object of the invention is to provide a binocular vision obstacle detection method based on three-dimensional point cloud segmentation, which preliminarily segments the three-dimensional point cloud into a road surface point cloud and obstacle point clouds above the ground, and then combines color image segmentation in a fusion judgement to finally determine the obstacles.
The technical scheme for achieving the object of the invention is as follows:
An automatic obstacle detection method based on binocular stereo vision with three-dimensional point cloud segmentation, comprising the following steps:
(1) Obtain two color images from two cameras at different positions; calibrate the binocular camera with a stereo calibration method to compute the intrinsic and extrinsic parameters and the relative pose of the two cameras; according to these parameters, eliminate distortion and perform row (or column) alignment for each camera so that the imaging origins of the two color images coincide, obtaining the rectified binocular color views. Obtain, by sensor or in advance, the pitch angle and height of the cameras relative to the road surface;
(2) From the rectified binocular color views obtained in step (1), select one view as the reference image. Use a stereo matching algorithm on the reference image to compute a preliminary disparity map. Perform color image segmentation on the reference image to obtain patches of similar color. From the camera-to-road-surface position data obtained in step (1), compute the thresholds for point cloud segmentation, including the minimum distance, maximum distance, maximum height and maximum width at which obstacles can be detected under a given accuracy.
(3) After the disparity map of step (2) has been computed, use the relative pose data of the two cameras (the reprojection matrix) from step (1) to perform three-dimensional reconstruction of the disparity map and compute the three-dimensional point cloud.
(4) According to the camera-to-ground position data obtained in step (1), correct the three-dimensional point cloud computed in step (3) so that the road surface points have a consistent height relative to the camera.
(5) Detect the road surface height (the height of the road surface relative to the camera) from the three-dimensional point cloud corrected in step (4), and use the segmentation thresholds computed in step (2) to segment out the obstacle, road surface and unknown point clouds.
(6) Fuse the point clouds segmented in step (5) with the image patches obtained by segmenting the reference image in step (2), determine the obstacles, the road surface and the unknown regions, and mark and display them.
Throughout the above process, the two cameras are of the same specification, with identical or similar pixel count, focal length, aperture and other characteristics; considering the factors that affect stereo matching, cameras of high definition are preferred. The relative position and focal lengths of the two cameras are fixed and are not changed after calibration. The pitch angle and height of the two cameras relative to the road surface may be fixed or may change. The resolution of the two cameras in step (1) is determined by the available computing power; resolutions such as 640x480 or 1024x768 are typically used. Camera calibration may use manual calibration, planar checkerboard calibration or circle (planar circle or concentric circle) calibration, and computes the tangential distortion, radial distortion, focal length, imaging origin, rectification mapping matrices and the reprojection matrix. The rectification mapping is used to rectify the color views of the two cameras so that the imaging origins of the two images coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row- or column-aligned. The position of the cameras relative to the road surface includes the pitch angle and the height, of which the height is optional.
The reference image selected in step (2) is the master image to which subsequent computations refer. Stereo matching is currently a difficult and active topic in computer vision; there are many algorithms, whose general steps are matching cost computation, cost aggregation, disparity computation and disparity refinement (sub-pixel), and whose matching strategies are local or global. According to the available computing power and the required accuracy, a suitable stereo matching algorithm is selected to compute a relatively dense disparity map quickly and efficiently. Color image segmentation segments the color image by its color information. From the relative distance between the two cameras and the focal length of the reference camera, the minimum distance from the camera, maximum distance from the camera, maximum height and maximum width of detectable obstacles under an acceptable accuracy can be determined.
The three-dimensional point cloud of step (3) is obtained by three-dimensional reconstruction of the disparity map using the reprojection matrix computed in step (1). The disparity map takes one image of the pair as reference; its size is that of the reference image, its element values are disparities, and it contains the range information of the scene.
In step (4), the three-dimensional point cloud is rotated according to the pitch angle of the camera relative to the ground, so that the road surface points have a consistent height relative to the camera.
In step (5), the three-dimensional point cloud is segmented by the maximum height detected in step (2) or the height obtained in step (1) to obtain the road surface point cloud; by the obstacle minimum distance from the camera and maximum distance from the camera detected in step (2), the three-dimensional point cloud is segmented into obstacle point clouds in different distance ranges from the camera and the unknown-region point cloud; by the maximum width detected in step (2), the obstacle point clouds in different distance ranges are further segmented into obstacle point clouds at different positions. This step segments out the obstacle point clouds, the road surface point cloud and the unknown-region point cloud; the unknown-region point cloud consists of points from matching errors and points of poor precision.
In step (6), the point clouds segmented in step (5) are fused with the image patches obtained by segmenting the reference image in step (2); by comparing the image patches from the color image segmentation of step (2) with the obstacle and road surface point clouds, the correctness and the region ranges of the obstacles and the road surface are judged.
Binocular camera calibration: calibration comprises camera calibration and stereo calibration. Camera calibration mainly computes the tangential distortion, radial distortion, focal length and optical center. Stereo calibration is the process of computing the geometric relationship between the two cameras in space, and consists of finding the rotation matrix and translation vector between the two cameras. Binocular rectification: the cameras need undistortion and stereo rectification; undistortion eliminates tangential and radial distortion. Stereo rectification is necessary because the two cameras can hardly have exactly coplanar and row-aligned imaging planes, so the images must be rectified using the rotation matrix and translation vector obtained by stereo calibration;
Binocular matching and disparity computation: stereo matching mainly finds the correspondences between each pair of images and obtains the disparity map according to the triangulation principle; once the disparity information is obtained, the depth information and the three-dimensional information of the original image can be obtained easily from the projection model. Stereo matching is generally considered the most difficult and most critical problem in stereo vision, and it is currently a difficult and active topic in computer vision; there are many algorithms, and the general computation steps are matching cost computation, cost aggregation, disparity computation and disparity refinement;
Automatic estimation of the segmentation thresholds: within the allowed minimum depth accuracy, the obstacle minimum distance from the camera, maximum distance from the camera, maximum height and maximum width are computed from the camera resolution and the camera baseline and focal length obtained by stereo calibration; then, within the allowed accuracy range, the front-rear and left-right intervals between obstacles and the obstacle height error range are computed.
Three-dimensional point cloud road surface height detection: road surface height detection is performed on the three-dimensional point cloud obtained by three-dimensional reconstruction, and heights within the specified error are regarded as the road surface height. The first detection of the road surface height may have a relatively large error; after several iterative detections, the detection accuracy of the road surface height tends to be stable, and the estimate is updated with the most recent several height detections.
Three-dimensional point cloud segmentation: after a relatively dense three-dimensional point cloud is obtained by three-dimensional reconstruction, the point cloud is rotated according to the pitch angle of the camera relative to the ground so that the road surface points of the cloud are at a consistent height relative to the camera. The three-dimensional point cloud is then segmented into the road surface point cloud, obstacle point clouds and the unknown-region point cloud by the obstacle minimum distance from the camera, maximum distance from the camera, maximum height, maximum width, road surface point cloud height and obstacle height error; the obstacle point clouds are further segmented into obstacle point clouds at each position by the front-rear and left-right intervals between obstacles; the unknown-region point cloud consists of points from matching errors and points of poor precision.
Obstacle detection by fusing the three-dimensional point cloud with color information: the three-dimensional point cloud is segmented into obstacle point clouds at different positions, the road surface point cloud and the unknown-region point cloud; the color reference image is segmented by color image segmentation into image patches; the segmented point clouds are fused with and compared against the segmented image patches to judge the correctness of the obstacles and the road surface and to determine the position ranges of the obstacles, the road surface and the unknown regions, achieving highly reliable obstacle detection.
The advantages of the invention are: compared with the prior art, the invention reconstructs a relatively dense three-dimensional point cloud in a comparatively complex environment; it can still detect the camera pitch and the road surface height when the camera is at a certain pitch angle to the road surface, automatically estimates the three-dimensional segmentation thresholds, and segments out the obstacle point clouds, the road surface point cloud and the unknown-region point cloud; combined with color image segmentation, it fuses color information to judge the correctness of the obstacles and the road surface and determines the position ranges of the obstacles, the road surface and the unknown regions; compared with methods that detect road obstacles from a single image by empirical rules, it has higher reliability and practicability.
Brief description of the drawings
Fig. 1 is a schematic diagram of the binocular vision obstacle detection method based on three-dimensional point cloud segmentation.
Fig. 2 is a schematic diagram of binocular stereo imaging.
Embodiment
The automatic obstacle detection method based on three-dimensional point cloud segmentation and color information fusion, as shown in Fig. 1, comprises the following steps:
Step 1: Obtain two color images from two cameras at different positions; calibrate the binocular camera with a stereo calibration method to compute the intrinsic and extrinsic parameters and the relative pose of the two cameras; according to these parameters, eliminate distortion and perform row (or column) alignment for each camera so that the imaging origins of the two color images coincide, obtaining the rectified binocular color views. Obtain, by sensor or in advance, the pitch angle and height of the cameras relative to the road surface. The relative position and focal lengths of the two cameras are fixed and are not changed after calibration. The pitch angle and height of the two cameras relative to the road surface may be fixed or may change.
The resolution of the two cameras is determined by the available computing power; resolutions such as 640x480 or 1024x768 are typically used. Camera calibration may use manual calibration, planar checkerboard calibration or circle (planar circle or concentric circle) calibration, and computes the tangential distortion, radial distortion, focal length, imaging origin, rectification mapping matrices and the reprojection matrix. The rectification mapping is used to rectify the color views of the two cameras so that the imaging origins of the two images coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row- or column-aligned. The position of the cameras relative to the road surface includes the pitch angle and the height, of which the height is optional.
In the invention, the two cameras are of the same specification, with identical or similar pixel count, focal length, aperture and other characteristics; considering the factors that affect stereo matching, cameras of high definition are preferred.
As an example, the planar checkerboard calibration method is used to calibrate a binocular camera placed horizontally left and right. After the binocular camera is adjusted and the two RGB image streams are collected, it is calibrated with an n-row by m-column black-and-white checkerboard (n and m greater than 4). Because of the characteristics of the optical lens, the camera imaging exhibits radial distortion:
x_corrected = x(1 + k1·r^2 + k2·r^4 + k3·r^6)    (1)
y_corrected = y(1 + k1·r^2 + k2·r^4 + k3·r^6)
where (x, y) is the original image pixel, (x_corrected, y_corrected) is the pixel after radial distortion correction, and k1, k2, k3 are the radial distortion parameters;
Because of manufacturing tolerances, the image sensor and the lens are not perfectly parallel when assembled, so the imaging also exhibits tangential distortion:
x_corrected = x + [2·p1·y + p2·(r^2 + 2x^2)]    (2)
y_corrected = y + [p1·(r^2 + 2y^2) + 2·p2·x]
where (x, y) is the original image pixel, (x_corrected, y_corrected) is the pixel after tangential distortion correction, and p1, p2 are the tangential distortion parameters. The calibration of a single camera mainly computes the intrinsic and extrinsic parameters of that camera. Binocular camera calibration must not only obtain the intrinsic parameters of each camera, but also measure the relative pose between the two cameras (a three-dimensional translation vector and a rotation matrix) through stereo calibration.
Binocular rectification uses the monocular intrinsic data obtained from camera calibration (focal length, imaging origin, distortion coefficients) and the binocular relative pose (rotation matrix and translation vector) to eliminate distortion and perform row alignment on the left and right views, so that the imaging origins of the left and right views coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar and the epipolar lines are row-aligned, which facilitates stereo matching.
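As a concrete illustration of the calibration and rectification described above, the following is a minimal Python/OpenCV sketch (the choice of library, the 9x6 checkerboard, the file naming and the flags are illustrative assumptions, not details given in the patent):

```python
import glob

import cv2
import numpy as np

# Illustrative assumptions: a 9x6 inner-corner checkerboard and image pairs
# named left_*.png / right_*.png captured synchronously by the two cameras.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board square = 1 unit

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    gl = cv2.cvtColor(cv2.imread(lf), cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(cv2.imread(rf), cv2.COLOR_BGR2GRAY)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

size = gl.shape[::-1]  # (width, height)

# Monocular calibration: intrinsics (focal length, imaging origin) and
# distortion coefficients (k1, k2, p1, p2, k3) for each camera.
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)

# Stereo calibration: rotation R and translation T between the two cameras.
_, K1, D1, K2, D2, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, D1, K2, D2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# Stereo rectification: row-aligned epipolar lines; Q is the reprojection matrix used later.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
map1 = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map2 = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
```

cv2.remap with map1 and map2 then produces the rectified left and right color views on which the stereo matching of step 2 is performed.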
Step 2: From the rectified binocular color views obtained in step 1, select one view as the reference image. Use a stereo matching algorithm on the reference image to compute a preliminary disparity map. Perform color image segmentation on the reference image to obtain patches of similar color. From the camera-to-road-surface position data obtained in step 1, compute the thresholds for point cloud segmentation, including the minimum distance, maximum distance, maximum height and maximum width detectable under an acceptable accuracy.
Here the SGBM stereo matching algorithm proposed by Heiko Hirschmüller is used with optimizations, taking the left view as the base image for matching, to compute the preliminary disparity map. The stereo matching algorithm comprises the following steps. First, matching cost computation:
C(p, d) = C(x, y, d) = C_BT(p_Li, p_Li + d, I′_L, I′_R)
        = max{0, I_L(p_Li) − I′_max, I′_min − I_L(p_Li)}    (3)
In equation (3), d is the disparity.
The matching cost aggregation minimizes the following energy (formula (4), the semi-global matching energy of the cited SGBM algorithm):

E(D) = Σ_p { C(p, D_p) + Σ_{q∈N_p} P1·T[|D_p − D_q| = 1] + Σ_{q∈N_p} P2·T[|D_p − D_q| > 1] }    (4)

In formula (4), D is the disparity map; the first term on the right is the sum of the matching costs of all pixels, and the two penalty terms charge a small constant P1 for disparity changes of one level and a larger constant P2 for larger disparity changes in the neighborhood N_p of pixel p (T[·] is 1 when its condition holds and 0 otherwise).
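A minimal sketch of this disparity computation using OpenCV's StereoSGBM implementation of the cited algorithm; the parameter values are illustrative assumptions:

```python
import cv2

def compute_disparity(rect_left, rect_right, num_disp=64, block=9):
    """Preliminary disparity map with the left (reference) view as base image."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,        # search range, must be divisible by 16
        blockSize=block,
        P1=8 * 3 * block * block,       # penalty for disparity changes of one level
        P2=32 * 3 * block * block,      # larger penalty for bigger disparity jumps
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return sgbm.compute(rect_left, rect_right).astype("float32") / 16.0
```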
There are many image segmentation methods; here a mean-shift clustering similarity algorithm based on color is used. Mean-shift segmentation finds the peaks of the color distribution in space: given a set of data points whose dimensions are (x, y, blue, green, red), mean shift scans the space with a window and finds the clusters of highest point density. A spatial radius is set, and a color radius is set from the color variance. As the mean-shift window moves, all data that converge to the same peak after the window iterations are connected and belong to that peak; this relationship, radiating out from the dense peaks, forms the segmentation of the image. The segmentation is carried out over a scale pyramid: the color clusters at the higher pyramid level have their own boundaries, and these boundaries are defined precisely at the lower pyramid level.
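A minimal sketch of this color-based mean-shift segmentation, assuming OpenCV's pyramid mean-shift filtering; the spatial and color radii are illustrative values:

```python
import cv2

def segment_reference(ref_bgr, spatial_radius=10, color_radius=20):
    """Mean-shift color segmentation of the reference view (a sketch).

    pyrMeanShiftFiltering shifts every pixel toward the nearest peak of the
    joint (x, y, B, G, R) density over an image pyramid, so pixels of one
    cluster converge to nearly identical colors; connected regions of the
    filtered image are the color-homogeneous patches used later for fusion.
    """
    return cv2.pyrMeanShiftFiltering(ref_bgr, spatial_radius, color_radius)
```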
The minimum distance, maximum distance, maximum height and maximum width, and the obstacle height error range, are determined from the camera resolution by the following depth-resolution relation:

Vz = z^2 · Vd / (f · T)

where Vz is the minimum achievable depth (range) accuracy, z is the distance of the obstacle from the camera, f is the camera focal length, T is the distance between the two cameras, and Vd is the disparity value (one disparity step).
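A minimal sketch of the threshold estimation implied by this relation; the function names and the numeric figures (focal length in pixels, baseline, required accuracy) are illustrative assumptions:

```python
def depth_resolution(z, f_px, baseline, disp_step=1.0):
    # Vz = z^2 * Vd / (f * T): depth change caused by one disparity step at range z.
    return z * z * disp_step / (f_px * baseline)

def max_range_for_accuracy(vz_allowed, f_px, baseline, disp_step=1.0):
    # Largest distance at which one disparity step still stays within the
    # allowed depth accuracy (the relation above solved for z).
    return (vz_allowed * f_px * baseline / disp_step) ** 0.5

# Illustrative figures: focal length 500 px, baseline T = 0.12 m.
f_px, T = 500.0, 0.12
print(depth_resolution(5.0, f_px, T))         # ~0.42 m per disparity step at 5 m
print(max_range_for_accuracy(0.05, f_px, T))  # ~1.73 m maximum range for 5 cm accuracy
```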
Step 3: After the disparity map of step 2 has been computed, use the relative pose data of the two cameras (the reprojection matrix) from step 1 to perform three-dimensional reconstruction of the disparity map and compute the three-dimensional point cloud.
According to the binocular stereo imaging principle shown in Fig. 2: the baseline is the distance T between the projection centers of the two cameras, and the camera focal length is f.
After stereo rectification, the images of the two cameras lie in the same plane; the imaging center of the reference image is (c_x, c_y), the x coordinate of that point in the right image is c′_x, and the distance between the two cameras along the x direction is T_x. The reprojection matrix Q is then:

Q = [ 1   0    0         −c_x
      0   1    0         −c_y
      0   0    0          f
      0   0  −1/T_x   (c_x − c′_x)/T_x ]

Given a two-dimensional homogeneous point and its associated disparity d, the point can be projected into three dimensions:

Q · [x, y, d, 1]^T = [X, Y, Z, W]^T

Therefore, (X/W, Y/W, Z/W) is the three-dimensional coordinate corresponding to the point with disparity d.
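A minimal sketch of this reconstruction step, assuming the OpenCV reprojection matrix Q returned by stereo rectification; cv2.reprojectImageTo3D applies the homogeneous projection above to every pixel of the disparity map:

```python
import cv2

def disparity_to_point_cloud(disparity, Q, min_disp=0.5):
    # Applies Q · [x, y, d, 1]^T per pixel and divides by W, giving (X, Y, Z)
    # in camera coordinates for every pixel of the reference image.
    points = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > min_disp   # near-zero disparities project to (near) infinity
    return points, valid
```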
Step 4: According to the camera-to-ground position data obtained in step 1, correct the three-dimensional point cloud computed in step 3 so that the road surface points of the cloud have a consistent height relative to the camera. Once the pitch angle of the camera relative to the ground is obtained, the three-dimensional point cloud can be rotated so that the road surface height values of the cloud are consistent. Here the binocular camera is assumed to be parallel to the road surface.
Step 5: Detect the road surface height (the height of the road surface relative to the camera) from the three-dimensional point cloud corrected in step 4, and use the segmentation thresholds computed in step 2 to segment out the obstacle, road surface and unknown point clouds.
Assume that the road surface has no depressions and that obstacles are objects higher than the road plane. Find the minimum height value of the three-dimensional point cloud corrected in step 4, take the mean of all values within the obstacle height error range of that minimum as the currently detected road surface height, and compute the final current road surface height as a weighted combination with the previously detected road surface height. After repeated detections, the accuracy of the road surface height value improves.
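A minimal sketch of the road-height detection and the subsequent splitting into road, obstacle and unknown point clouds, assuming a camera frame whose y axis points downward toward the road; the error range, distance limits and smoothing weight are illustrative assumptions:

```python
import numpy as np

def detect_road_height(points, valid, prev_height=None, height_err=0.05, weight=0.7):
    # Points whose height lies within the obstacle height error range of the
    # lowest point are treated as road surface; their mean height is the
    # current estimate, blended with the previous estimate for stability.
    y = points[..., 1][valid]
    lowest = y.max()                      # y grows downward, so the largest y is the lowest point
    road = y[y > lowest - height_err]
    h = road.mean()
    return h if prev_height is None else weight * h + (1 - weight) * prev_height

def split_point_cloud(points, valid, road_h, height_err=0.05,
                      z_min=0.5, z_max=10.0, h_max=2.0):
    # Splits the per-pixel point cloud into road, obstacle and unknown masks
    # using the thresholds estimated in step 2.
    y, z = points[..., 1], points[..., 2]
    in_range = valid & (z > z_min) & (z < z_max)
    road = in_range & (y > road_h - height_err)                       # at road height
    obstacle = in_range & (y <= road_h - height_err) & (road_h - y < h_max)
    unknown = ~(road | obstacle)                                      # mismatches, out of range, low precision
    return road, obstacle, unknown
```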
Step 6: Fuse the point clouds segmented in step 5 with the image patches obtained by segmenting the reference image in step 2, determine the obstacles, the road surface and the unknown regions, and mark and display them.
The three-dimensional point cloud is segmented into obstacle point clouds at different positions, the road surface point cloud and the unknown-region point cloud; the color reference image is segmented by color image segmentation into image patches; the segmented point clouds are fused with and compared against the segmented image patches, and when the proportion of obstacle or road surface points falling on a patch reaches a certain value, the patch is judged to be an obstacle or road surface, thereby determining the position ranges of the obstacles, the road surface and the unknown regions.
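A minimal sketch of this fusion step: for each color patch, the fraction of its pixels labelled obstacle or road decides the patch; the 0.5 occupancy threshold and the label encoding are illustrative assumptions:

```python
import numpy as np

def fuse_with_segments(segment_labels, obstacle_mask, road_mask, ratio=0.5):
    # segment_labels: per-pixel integer id of a color patch from the mean-shift
    # segmentation; obstacle_mask / road_mask: per-pixel point cloud labels on
    # the reference image. Result encoding: 0 = unknown, 1 = road, 2 = obstacle.
    result = np.zeros(segment_labels.shape, dtype=np.uint8)
    for sid in np.unique(segment_labels):
        patch = segment_labels == sid
        n = patch.sum()
        if obstacle_mask[patch].sum() / n >= ratio:
            result[patch] = 2
        elif road_mask[patch].sum() / n >= ratio:
            result[patch] = 1
    return result
```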
The automatic obstacle detection method based on three-dimensional point cloud segmentation and color information fusion designed by the above method, as shown in Fig. 1, comprises binocular camera calibration, binocular rectification, binocular matching and disparity computation, automatic estimation of the segmentation thresholds, three-dimensional point cloud road surface height detection, three-dimensional point cloud segmentation, and obstacle detection by fusing the three-dimensional point cloud with color information, wherein:
Binocular camera calibration: calibration comprises camera calibration and stereo calibration. Camera calibration mainly computes the tangential distortion, radial distortion, focal length and optical center. Stereo calibration is the process of computing the geometric relationship between the two cameras in space, and consists of finding the rotation matrix and translation vector between the two cameras.
Binocular rectification: the cameras need undistortion and stereo rectification; undistortion eliminates tangential and radial distortion. Stereo rectification is necessary because the two cameras can hardly have exactly coplanar and row-aligned imaging planes, so the images must be rectified using the rotation matrix and translation vector obtained by stereo calibration;
Binocular matching and disparity computation: stereo matching mainly finds the correspondences between each pair of images and obtains the disparity map according to the triangulation principle; once the disparity information is obtained, the depth information and the three-dimensional information of the original image can be obtained easily from the projection model. Stereo matching is generally considered the most difficult and most critical problem in stereo vision, and it is currently a difficult and active topic in computer vision; there are many algorithms, but the general steps are matching cost computation, cost aggregation, disparity computation and disparity refinement;
Automatic estimation of the segmentation thresholds: within the allowed minimum depth accuracy, the minimum distance from the camera, maximum distance from the camera, maximum height and maximum width are computed from the camera resolution and the camera baseline and focal length obtained by stereo calibration; then, within the allowed accuracy range, the front-rear and left-right intervals between obstacles and the obstacle height error range are computed.
Three-dimensional point cloud road surface height detection: road surface height detection is performed on the three-dimensional point cloud obtained by three-dimensional reconstruction, and heights within the specified error are regarded as the road surface height. The first detection of the road surface height may have a relatively large error; after several iterative detections, the detection accuracy of the road surface height tends to be stable, and the estimate is updated with the most recent several height detections.
Three-dimensional point cloud segmentation: after a relatively dense three-dimensional point cloud is obtained by three-dimensional reconstruction, the point cloud is rotated according to the pitch angle of the camera relative to the ground so that the road surface points of the cloud are at a consistent height relative to the camera. The three-dimensional point cloud is then segmented into the road surface point cloud, obstacle point clouds and the unknown-region point cloud by the minimum distance from the camera, maximum distance from the camera, maximum height, maximum width, road surface point cloud height and obstacle height error; the obstacle point clouds are further segmented into obstacle point clouds at each position by the front-rear and left-right intervals between obstacles; the unknown-region point cloud consists of points from matching errors and points of poor precision.
Obstacle detection by fusing the three-dimensional point cloud with color information: the three-dimensional point cloud is segmented into obstacle point clouds at different positions, the road surface point cloud and the unknown-region point cloud; the color reference image is segmented by color image segmentation into image patches; the segmented point clouds are fused with and compared against the segmented image patches to judge the correctness of the obstacles and the road surface and to determine the position ranges of the obstacles, the road surface and the unknown regions, achieving highly reliable obstacle detection.
Claims (10)
1. A binocular vision obstacle detection method based on three-dimensional point cloud segmentation, characterized in that it comprises the following steps:
(1) obtaining two color images from two cameras at different positions; calibrating the binocular camera with a stereo calibration method to compute the intrinsic and extrinsic parameters and the relative pose of the two cameras; according to these parameters, eliminating distortion and performing row or column alignment for each camera so that the imaging origins of the two color images coincide, obtaining the rectified binocular color views; and obtaining, by sensor or in advance, the pitch angle and height of the cameras relative to the road surface;
(2) from the rectified binocular color views obtained in step (1), selecting one view as the reference image and using a stereo matching algorithm on the reference image to compute a preliminary disparity map; performing color image segmentation on the reference image to obtain patches of similar color; and, from the camera-to-road-surface position data obtained in step (1), computing the thresholds for point cloud segmentation, including the minimum distance, maximum distance, maximum height and maximum width at which obstacles can be detected under a given accuracy;
(3) after the disparity map of step (2) has been computed, using the relative pose data of the two cameras (the reprojection matrix) from step (1) to perform three-dimensional reconstruction of the disparity map and compute the three-dimensional point cloud;
(4) according to the camera-to-ground position data obtained in step (1), correcting the three-dimensional point cloud computed in step (3) so that the road surface points of the cloud are at a consistent height relative to the camera;
(5) detecting the road surface height (the height of the road surface relative to the camera) from the three-dimensional point cloud corrected in step (4), and using the segmentation thresholds computed in step (2) to segment out the obstacle, road surface and unknown point clouds;
(6) fusing the point clouds segmented in step (5) with the image patches obtained by segmenting the reference image in step (2), determining the obstacles, the road surface and the unknown regions, and marking and displaying them.
2. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that: the binocular camera is a two-channel video device composed of two cameras of the same specification, different image resolutions can be obtained as required, and the images processed are the original RGB three-channel color pictures.
3. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1 or 2, characterized in that: the automatic detection of the three-dimensional segmentation thresholds computes, from the camera resolution and the stereo calibration data, the minimum distance, maximum distance, maximum height, maximum width and segmentation thresholds at which obstacles can be detected under a given accuracy.
4. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claims 1 and 3, characterized in that: the road surface height is detected from the three-dimensional point cloud using a loop-iteration method to improve the accuracy of the detected road surface height, and at the same time, under the assumption that obstacles are objects higher than the road surface, the obstacle point clouds at different positions, the road surface point cloud and the unknown-region point cloud are segmented out by the road surface height and the three-dimensional point cloud segmentation thresholds.
5. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claims 1 and 4, characterized in that: the obstacles, the road surface and the unknown regions are obtained by combining the three-dimensional point clouds segmented in step (5) with the image patches obtained by image segmentation, and are marked and displayed.
6. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that the binocular camera calibration comprises camera calibration and stereo calibration;
the camera calibration mainly computes the tangential distortion, radial distortion, focal length and optical center;
the stereo calibration is the process of computing the geometric relationship between the two cameras in space, and consists of finding the rotation matrix and translation vector between the two cameras;
the binocular rectification comprises undistortion and stereo rectification, the undistortion eliminating tangential and radial distortion;
the stereo rectification is necessary because the two cameras can hardly have exactly coplanar and row-aligned imaging planes, so the images are rectified using the rotation matrix and translation vector obtained by stereo calibration.
7. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that the binocular matching and disparity computation comprise: stereo matching mainly finds the correspondences between each pair of images and obtains the disparity map according to the triangulation principle; once the disparity information is obtained, the depth information and the three-dimensional information of the original image can be obtained easily from the projection model;
stereo matching is generally considered the most difficult and most critical problem in stereo vision, and it is currently a difficult and active topic in computer vision; there are many algorithms, but the general steps are matching cost computation, cost aggregation, disparity computation and disparity refinement.
8. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that the automatic estimation of the segmentation thresholds comprises: within the allowed minimum depth accuracy, computing the minimum distance from the camera, maximum distance from the camera, maximum height and maximum width from the camera resolution and the camera baseline and focal length obtained by stereo calibration, and then, within the allowed accuracy range, computing the front-rear and left-right intervals between obstacles and the obstacle height error range.
9. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that the three-dimensional point cloud road surface height detection comprises: performing road surface height detection on the three-dimensional point cloud obtained by three-dimensional reconstruction, heights within the specified error being regarded as the road surface height; the first detection of the road surface height may have a relatively large error, and after several iterative detections the detection accuracy of the road surface height tends to be stable and the estimate is updated with the most recent several height detections.
10. The binocular vision obstacle detection method based on three-dimensional point cloud segmentation according to claim 1, characterized in that:
the three-dimensional point cloud segmentation comprises: after a relatively dense three-dimensional point cloud is obtained by three-dimensional reconstruction, rotating the point cloud according to the pitch angle of the camera relative to the ground so that the road surface points of the cloud are at a consistent height relative to the camera;
segmenting the three-dimensional point cloud into the road surface point cloud, obstacle point clouds and the unknown-region point cloud by the obstacle minimum distance from the camera, maximum distance from the camera, maximum height, maximum width, road surface point cloud height and obstacle height error; further segmenting the obstacle point clouds into obstacle point clouds at each position by the front-rear and left-right intervals between obstacles; the unknown-region point cloud consisting of points from matching errors and points of poor precision;
the obstacle detection by fusing the three-dimensional point cloud with color information comprises: segmenting the three-dimensional point cloud into obstacle point clouds at different positions, the road surface point cloud and the unknown-region point cloud; segmenting the color reference image by color image segmentation into image patches; fusing the segmented point clouds with and comparing them against the segmented image patches to judge the correctness of the obstacles and the road surface and to determine the position ranges of the obstacles, the road surface and the unknown regions, achieving highly reliable obstacle detection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410146864.4A CN103955920B (en) | 2014-04-14 | 2014-04-14 | Binocular vision obstacle detection method based on three-dimensional point cloud segmentation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410146864.4A CN103955920B (en) | 2014-04-14 | 2014-04-14 | Binocular vision obstacle detection method based on three-dimensional point cloud segmentation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103955920A true CN103955920A (en) | 2014-07-30 |
CN103955920B CN103955920B (en) | 2017-04-12 |
Family
ID=51333189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410146864.4A Expired - Fee Related CN103955920B (en) | 2014-04-14 | 2014-04-14 | Binocular vision obstacle detection method based on three-dimensional point cloud segmentation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103955920B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107909009B (en) * | 2017-10-27 | 2021-09-17 | 北京中科慧眼科技有限公司 | Obstacle detection method and device based on road surface learning |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070071311A1 (en) * | 2005-09-28 | 2007-03-29 | Deere & Company, A Delaware Corporation | Method for processing stereo vision data using image density |
CN101887589A (en) * | 2010-06-13 | 2010-11-17 | 东南大学 | Stereoscopic vision-based real low-texture image reconstruction method |
CN101976455A (en) * | 2010-10-08 | 2011-02-16 | 东南大学 | Color image three-dimensional reconstruction method based on three-dimensional matching |
Non-Patent Citations (2)
Title |
---|
COSMIN D. PANTILIE ET AL.: "Real-time Obstacle Detection in Complex Scenarios Using Dense", 2010 13th International IEEE Annual Conference on Intelligent Transportation Systems, Madeira Island, Portugal, September 19-22, 2010 * |
ZENG Rijin, YUAN Hua: "A Survey of Obstacle Detection Methods Based on Binocular Stereo Vision", Popular Science & Technology (《大众科技》) * |
CN111149067A (en) * | 2017-03-23 | 2020-05-12 | Abb瑞士股份有限公司 | Detecting robot positioning in a workspace via fiducials |
CN107169986A (en) * | 2017-05-23 | 2017-09-15 | 北京理工大学 | A kind of obstacle detection method and system |
CN107169986B (en) * | 2017-05-23 | 2019-09-17 | 北京理工大学 | A kind of obstacle detection method and system |
CN107367767A (en) * | 2017-06-27 | 2017-11-21 | 上海卓思智能科技股份有限公司 | A kind of vent cabinet window foreign matter detecting method and system and a kind of controller |
CN107909010A (en) * | 2017-10-27 | 2018-04-13 | 北京中科慧眼科技有限公司 | A kind of road barricade object detecting method and device |
CN107909010B (en) * | 2017-10-27 | 2022-03-18 | 北京中科慧眼科技有限公司 | Road obstacle detection method and device |
CN108416834A (en) * | 2018-01-08 | 2018-08-17 | 长春理工大学 | Transparent objects surface three dimension reconstructing method, device and system |
CN108245385A (en) * | 2018-01-16 | 2018-07-06 | 曹醒龙 | A kind of device for helping visually impaired people's trip |
CN110197097B (en) * | 2018-02-24 | 2024-04-19 | 北京图森智途科技有限公司 | Harbor district monitoring method and system and central control system |
CN110197097A (en) * | 2018-02-24 | 2019-09-03 | 北京图森未来科技有限公司 | A kind of port area monitoring method and system, central control system |
CN108573215A (en) * | 2018-03-16 | 2018-09-25 | 海信集团有限公司 | Reflective road method for detecting area, device and terminal |
CN108573215B (en) * | 2018-03-16 | 2021-08-03 | 海信集团有限公司 | Road reflective area detection method and device and terminal |
CN108489454A (en) * | 2018-03-22 | 2018-09-04 | 沈阳上博智像科技有限公司 | Depth distance measurement method, device, computer readable storage medium and electronic equipment |
CN108648219B (en) * | 2018-04-08 | 2022-02-22 | 浙江大承机器人科技有限公司 | Binocular-based obstacle and feasible area detection method |
CN108648219A (en) * | 2018-04-08 | 2018-10-12 | 浙江大承机器人科技有限公司 | A kind of barrier based on binocular and area of feasible solutions detection method |
CN110488805A (en) * | 2018-05-15 | 2019-11-22 | 武汉小狮科技有限公司 | A kind of unmanned vehicle obstacle avoidance system and method based on 3D stereoscopic vision |
CN108986162B (en) * | 2018-06-28 | 2022-02-22 | 杭州吉吉知识产权运营有限公司 | Dish and background segmentation method based on inertial measurement unit and visual information |
CN108986162A (en) * | 2018-06-28 | 2018-12-11 | 四川斐讯信息技术有限公司 | Vegetable and background segment method based on Inertial Measurement Unit and visual information |
CN110667474A (en) * | 2018-07-02 | 2020-01-10 | 北京四维图新科技股份有限公司 | General obstacle detection method and device and automatic driving system |
CN110667474B (en) * | 2018-07-02 | 2021-02-26 | 北京四维图新科技股份有限公司 | General obstacle detection method and device and automatic driving system |
CN108885791A (en) * | 2018-07-06 | 2018-11-23 | 深圳前海达闼云端智能科技有限公司 | ground detection method, related device and computer readable storage medium |
CN109141364B (en) * | 2018-08-01 | 2020-11-03 | 北京进化者机器人科技有限公司 | Obstacle detection method and system and robot |
CN109141364A (en) * | 2018-08-01 | 2019-01-04 | 北京进化者机器人科技有限公司 | Obstacle detection method, system and robot |
CN109060839A (en) * | 2018-08-08 | 2018-12-21 | 广州爱迪绿色印刷科技有限公司 | The three-dimensional detection method and device of paper surface quality morphology |
CN109060839B (en) * | 2018-08-08 | 2021-01-29 | 广州爱迪绿色印刷科技有限公司 | Three-dimensional detection method and device for paper surface morphology quality |
CN109143215A (en) * | 2018-08-28 | 2019-01-04 | 重庆邮电大学 | It is a kind of that source of early warning and method are cooperateed with what V2X was communicated based on binocular vision |
CN109190704A (en) * | 2018-09-06 | 2019-01-11 | 中国科学院深圳先进技术研究院 | The method and robot of detection of obstacles |
CN110893617A (en) * | 2018-09-13 | 2020-03-20 | 深圳市优必选科技有限公司 | Obstacle detection method and device and storage device |
CN109598896A (en) * | 2018-11-16 | 2019-04-09 | 佛山市第人民医院(中山大学附属佛山医院) | Sick bed collision early warning method, system and device |
CN109522847A (en) * | 2018-11-20 | 2019-03-26 | 中车株洲电力机车有限公司 | A kind of track and road barricade object detecting method based on depth map |
WO2020118623A1 (en) * | 2018-12-13 | 2020-06-18 | Continental Automotive Gmbh | Method and system for generating an environment model for positioning |
CN111368607B (en) * | 2018-12-26 | 2023-06-02 | 北京欣奕华科技有限公司 | Robot, obstacle detection method and detection device |
CN111368607A (en) * | 2018-12-26 | 2020-07-03 | 北京欣奕华科技有限公司 | Robot, obstacle detection method and detection device |
CN111374597A (en) * | 2018-12-28 | 2020-07-07 | 珠海市一微半导体有限公司 | Method and device for avoiding line of cleaning robot, storage medium and cleaning robot |
CN111374597B (en) * | 2018-12-28 | 2021-08-24 | 珠海市一微半导体有限公司 | Method and device for avoiding line of cleaning robot, storage medium and cleaning robot |
CN109700465A (en) * | 2019-01-07 | 2019-05-03 | 广东体达康医疗科技有限公司 | A kind of mobile three-dimensional wound scanning device and its workflow |
CN111735433A (en) * | 2019-03-25 | 2020-10-02 | 杭州海康威视数字技术股份有限公司 | Method and device for establishing two-dimensional map |
CN109947108B (en) * | 2019-04-01 | 2021-11-26 | 深圳市祺丰智能机器人科技有限公司 | Method for predicting road condition in front of mobile robot |
CN109947108A (en) * | 2019-04-01 | 2019-06-28 | 杭州晶一智能科技有限公司 | Road condition predicting method in front of mobile robot |
CN110109457A (en) * | 2019-04-29 | 2019-08-09 | 北方民族大学 | A kind of intelligent sound blind-guidance robot control method and control system |
CN111742242A (en) * | 2019-06-11 | 2020-10-02 | 深圳市大疆创新科技有限公司 | Point cloud processing method, system, device and storage medium |
CN110610501B (en) * | 2019-09-18 | 2022-04-29 | 中国大恒(集团)有限公司北京图像视觉技术分公司 | Point cloud segmentation method and device |
CN110610501A (en) * | 2019-09-18 | 2019-12-24 | 中国大恒(集团)有限公司北京图像视觉技术分公司 | Point cloud segmentation method and device |
CN110703753A (en) * | 2019-10-16 | 2020-01-17 | 北京京东乾石科技有限公司 | Path planning method and device, electronic equipment and storage medium |
CN110703753B (en) * | 2019-10-16 | 2022-11-08 | 北京京东乾石科技有限公司 | Path planning method and device, electronic equipment and storage medium |
US10882468B1 (en) | 2019-10-29 | 2021-01-05 | Deere & Company | Work vehicle composite panoramic vision systems |
CN111047636B (en) * | 2019-10-29 | 2024-04-09 | 轻客小觅机器人科技(成都)有限公司 | Obstacle avoidance system and obstacle avoidance method based on active infrared binocular vision |
CN111047636A (en) * | 2019-10-29 | 2020-04-21 | 轻客智能科技(江苏)有限公司 | Obstacle avoidance system and method based on active infrared binocular vision |
CN110928301B (en) * | 2019-11-19 | 2023-06-30 | 北京小米智能科技有限公司 | Method, device and medium for detecting tiny obstacle |
CN110928301A (en) * | 2019-11-19 | 2020-03-27 | 北京小米智能科技有限公司 | Method, device and medium for detecting tiny obstacles |
CN111160233B (en) * | 2019-12-27 | 2023-04-18 | 中国科学院苏州纳米技术与纳米仿生研究所 | Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance |
CN111160233A (en) * | 2019-12-27 | 2020-05-15 | 中国科学院苏州纳米技术与纳米仿生研究所 | Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance |
CN111260715A (en) * | 2020-01-20 | 2020-06-09 | 深圳市普渡科技有限公司 | Depth map processing method, small obstacle detection method and system |
CN111260773B (en) * | 2020-01-20 | 2023-10-13 | 深圳市普渡科技有限公司 | Three-dimensional reconstruction method, detection method and detection system for small obstacle |
WO2021147548A1 (en) * | 2020-01-20 | 2021-07-29 | 深圳市普渡科技有限公司 | Three-dimensional reconstruction method, detection method and system for small obstacle, and robot and medium |
CN111260715B (en) * | 2020-01-20 | 2023-09-08 | 深圳市普渡科技有限公司 | Depth map processing method, small obstacle detection method and system |
CN111260773A (en) * | 2020-01-20 | 2020-06-09 | 深圳市普渡科技有限公司 | Three-dimensional reconstruction method, detection method and detection system for small obstacles |
CN111366917A (en) * | 2020-03-13 | 2020-07-03 | 北京百度网讯科技有限公司 | Method, device and equipment for detecting travelable area and computer readable storage medium |
CN111366917B (en) * | 2020-03-13 | 2022-07-15 | 北京百度网讯科技有限公司 | Method, device and equipment for detecting travelable area and computer readable storage medium |
CN111667540A (en) * | 2020-06-09 | 2020-09-15 | 中国电子科技集团公司第五十四研究所 | Multi-camera system calibration method based on pedestrian head recognition |
CN111667540B (en) * | 2020-06-09 | 2023-04-18 | 中国电子科技集团公司第五十四研究所 | Multi-camera system calibration method based on pedestrian head recognition |
CN111890358A (en) * | 2020-07-01 | 2020-11-06 | 浙江大华技术股份有限公司 | Binocular obstacle avoidance method and device, storage medium and electronic device |
CN111890358B (en) * | 2020-07-01 | 2022-06-14 | 浙江大华技术股份有限公司 | Binocular obstacle avoidance method and device, storage medium and electronic device |
CN111797929B (en) * | 2020-07-07 | 2023-08-22 | 金陵科技学院 | Binocular robot obstacle feature detection method based on CNN and PSO |
CN111797929A (en) * | 2020-07-07 | 2020-10-20 | 金陵科技学院 | Binocular robot obstacle feature detection method based on CNN and PSO |
CN111862234B (en) * | 2020-07-22 | 2023-10-20 | 中国科学院上海微系统与信息技术研究所 | Binocular camera self-calibration method and system |
CN111862234A (en) * | 2020-07-22 | 2020-10-30 | 中国科学院上海微系统与信息技术研究所 | Binocular camera self-calibration method and system |
CN112418103A (en) * | 2020-11-24 | 2021-02-26 | 中国人民解放军火箭军工程大学 | Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision |
CN112487916B (en) * | 2020-11-25 | 2022-08-23 | 上海电力大学 | Binocular three-dimensional identification system for electrical equipment |
CN112487916A (en) * | 2020-11-25 | 2021-03-12 | 上海电力大学 | Binocular three-dimensional identification system for electrical equipment |
CN112884845A (en) * | 2021-01-14 | 2021-06-01 | 济南浪潮高新科技投资发展有限公司 | Indoor robot obstacle positioning method based on single camera |
CN112784799A (en) * | 2021-02-01 | 2021-05-11 | 三一机器人科技有限公司 | AGV (automatic guided vehicle) backward pallet and obstacle identification method and device and AGV |
CN112818999A (en) * | 2021-02-10 | 2021-05-18 | 桂林电子科技大学 | Complex scene 3D point cloud semantic segmentation method based on convolutional neural network |
CN113409242A (en) * | 2021-02-23 | 2021-09-17 | 杭州哲域智能科技有限公司 | Intelligent monitoring method for point cloud of rail intersection bow net |
CN113343745B (en) * | 2021-02-26 | 2024-06-18 | 北京中科慧眼科技有限公司 | Remote target detection method and system based on binocular camera and intelligent terminal |
CN113343745A (en) * | 2021-02-26 | 2021-09-03 | 北京中科慧眼科技有限公司 | Binocular camera-based remote target detection method and system and intelligent terminal |
CN113470049B (en) * | 2021-07-06 | 2022-05-20 | 吉林省田车科技有限公司 | Complete target extraction method based on structured color point cloud segmentation |
CN113470049A (en) * | 2021-07-06 | 2021-10-01 | 吉林省田车科技有限公司 | Complete target extraction method based on structured color point cloud segmentation |
CN113610741A (en) * | 2021-08-18 | 2021-11-05 | 梅卡曼德(北京)机器人科技有限公司 | Point cloud processing method and device based on laser line scanning |
CN114418851A (en) * | 2022-01-18 | 2022-04-29 | 长沙慧联智能科技有限公司 | Multi-view 3D panoramic all-around viewing system and splicing method |
WO2023179405A1 (en) * | 2022-03-21 | 2023-09-28 | 深圳市正浩创新科技股份有限公司 | Obstacle identification method, device and storage medium |
CN114898072B (en) * | 2022-06-14 | 2024-05-31 | 中国农业大学 | Crop water stress detection method based on 3D temperature characteristics |
CN114898072A (en) * | 2022-06-14 | 2022-08-12 | 中国农业大学 | Crop water stress detection method based on 3D temperature characteristics |
CN116912403A (en) * | 2023-07-03 | 2023-10-20 | 上海鱼微阿科技有限公司 | XR equipment and obstacle information sensing method thereof |
CN116912403B (en) * | 2023-07-03 | 2024-05-10 | 玩出梦想(上海)科技有限公司 | XR equipment and obstacle information sensing method thereof |
CN118298287A (en) * | 2024-04-17 | 2024-07-05 | 华能澜沧江水电股份有限公司 | Method for monitoring limnoperna fortunei of underwater structure |
CN118537566A (en) * | 2024-07-26 | 2024-08-23 | 成都航维智芯科技有限公司 | 3D point cloud segmentation method |
Also Published As
Publication number | Publication date |
---|---|
CN103955920B (en) | 2017-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103955920A (en) | Binocular vision obstacle detection method based on three-dimensional point cloud segmentation | |
US11900627B2 (en) | Image annotation | |
WO2021004548A1 (en) | Vehicle speed intelligent measurement method based on binocular stereo vision system | |
Häne et al. | Obstacle detection for self-driving cars using only monocular cameras and wheel odometry | |
US20220092797A1 (en) | Intelligent Vehicle Trajectory Measurement Method Based on Binocular Stereo Vision System | |
Schreiber et al. | Laneloc: Lane marking based localization using highly accurate maps | |
ES2949030T3 (en) | Procedure and device to determine the geographical position and orientation of a vehicle | |
JP6899368B2 (en) | Methods and systems for generating and using localization reference data | |
Guo et al. | A low-cost solution for automatic lane-level map generation using conventional in-car sensors | |
Broggi et al. | The single frame stereo vision system for reliable obstacle detection used during the 2005 DARPA grand challenge on TerraMax | |
JP2023126882A (en) | System for navigating vehicle autonomously, vehicle and method which navigates vehicle | |
CN104729485B (en) | A kind of vision positioning method based on vehicle-mounted panoramic image Yu streetscape map match | |
US10909395B2 (en) | Object detection apparatus | |
US10872246B2 (en) | Vehicle lane detection system | |
CN104700414A (en) | Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera | |
CN103971404A (en) | 3D real-scene copying device having high cost performance | |
Oniga et al. | Curb detection for driving assistance systems: A cubic spline-based approach | |
Oniga et al. | Polynomial curb detection based on dense stereovision for driving assistance | |
Guo et al. | Automatic lane-level map generation for advanced driver assistance systems using low-cost sensors | |
WO2023070113A1 (en) | Validating an sfm map using lidar point clouds | |
David et al. | Orientation descriptors for localization in urban environments | |
Hara et al. | Vehicle localization based on the detection of line segments from multi-camera images | |
Yuan et al. | Estimation of vehicle pose and position with monocular camera at urban road intersections | |
Teoh et al. | Ground plane detection for autonomous vehicle in rainforest terrain | |
Boschenriedter et al. | Multi-session visual roadway mapping |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170412; Termination date: 20190414 |