CN112465831A - Curve scene perception method, system and device based on binocular stereo camera - Google Patents

Curve scene perception method, system and device based on binocular stereo camera

Info

Publication number
CN112465831A
Authority
CN
China
Prior art keywords
scene
driving
road surface
points
curve
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011276838.5A
Other languages
Chinese (zh)
Other versions
CN112465831B (en)
Inventor
孙钊
裴姗姗
王欣亮
李建
王鹏
罗杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Smarter Eye Technology Co Ltd
Original Assignee
Beijing Smarter Eye Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Smarter Eye Technology Co Ltd filed Critical Beijing Smarter Eye Technology Co Ltd
Priority to CN202011276838.5A priority Critical patent/CN112465831B/en
Publication of CN112465831A publication Critical patent/CN112465831A/en
Application granted granted Critical
Publication of CN112465831B publication Critical patent/CN112465831B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/11 - Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Operations Research (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a curve scene perception method, system and device based on a binocular stereo camera. The method comprises the following steps: constructing a road surface equation in three-dimensional world coordinates, and obtaining a point cloud data set to be detected after removing the road surface according to that equation; screening the data points in the point cloud data set to be detected to obtain a plurality of mark points, which together form the driving-area boundary points; judging the current driving scene based on the coordinate values of the driving-area boundary points; and estimating the current driving attitude according to the current driving scene. This solves the prior-art technical problem that estimating the vehicle's driving posture is difficult because the estimation must rely on external input information.

Description

Curve scene perception method, system and device based on binocular stereo camera
Technical Field
The invention relates to the technical field of automatic driving, in particular to a curve scene perception method, a curve scene perception system and a curve scene perception device based on a binocular stereo camera.
Background
In recent years, with the rapid development of AI technology, work on automated driving has grown ever deeper and broader. However, in an automatic driving (or driver-assistance) task, a vision-based sensing system often has to perceive the vehicle's driving posture in the current scene by attaching an external inertial measurement unit, accessing vehicle chassis control information, combining high-precision map positioning, and the like. In other words, the prior art can estimate the current vehicle driving posture only by relying on additional external input information, which often limits the use of the visual perception system and makes visual perception inconvenient during automatic or assisted driving.
Disclosure of Invention
Therefore, embodiments of the present invention provide a curve scene sensing method, system and device based on a binocular stereo camera, to solve, at least in part, the prior-art technical problem that estimating the vehicle driving posture is difficult because the estimation must rely on external input information.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
a curve scene perception method based on a binocular stereo camera comprises the following steps:
constructing a road surface equation under a three-dimensional world coordinate, and obtaining a point cloud data set to be detected after removing the road surface according to the road surface equation;
screening data points in a point cloud data set to be detected, and obtaining a plurality of mark points, wherein each mark point forms a driving area boundary point;
judging a current driving scene based on the coordinate values of the boundary points of the driving area;
and estimating the current driving attitude according to the current driving scene.
Further, the constructing a road surface equation under the three-dimensional world coordinate and obtaining the point cloud data set to be detected after removing the road surface according to the road surface equation specifically comprises:
acquiring parallax information of a binocular stereo camera;
converting the parallax information into corresponding three-dimensional point cloud information;
estimating the current road surface condition according to the three-dimensional point cloud information, and constructing a road surface equation under a three-dimensional world coordinate based on an estimation result;
and according to the road surface equation, performing road surface area segmentation on the three-dimensional point cloud information to obtain a point cloud data set to be detected after the road surface is removed.
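The road-surface construction and removal steps above can be sketched in numpy as follows. This is a minimal illustration, not the patent's actual estimator: a least-squares plane fit stands in for whatever road-surface estimation the embodiment uses, and the height threshold is a placeholder value.

```python
import numpy as np

def fit_road_plane(points):
    """Least-squares fit of a plane y = a*x + b*z + c to N x 3 points,
    using the patent's axes (X baseline, Y vertical, Z optical axis)."""
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([X, Z, np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return coeffs  # (a, b, c)

def remove_road(points, coeffs, height_thresh=0.2):
    """Keep only points farther than height_thresh from the fitted plane,
    yielding the point cloud data set to be detected (road removed)."""
    a, b, c = coeffs
    residual = np.abs(points[:, 1] - (a * points[:, 0] + b * points[:, 2] + c))
    return points[residual > height_thresh]
```

In practice the plane fit would be made robust to obstacles (e.g. by iterative re-weighting or random sampling), but the segmentation logic is the same.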
Further, the screening of the data points in the point cloud data set to be detected to obtain a plurality of mark points, each of which forms a boundary point of a driving area, specifically includes:
carrying out plane projection on the data in the point cloud data set to be detected, and dividing a plurality of projection grid areas on a projection plane;
filling numerical values in the point cloud data set to be detected in each projection grid area respectively to obtain the number of space points falling in the current projection grid area;
and screening the projection grids in the field of view, and obtaining a plurality of mark points forming the boundary points of the driving area according to the screening result.
Further, the screening of the projection grids in the field of view and obtaining a plurality of mark points forming the boundary points of the driving area according to the screening result specifically include:
carrying out threshold screening on the projection grids in the field of view;
judging whether the number of accumulated space points in each projection grid is greater than a threshold: if so, setting the current projection grid position to 'true', and otherwise setting it to 'false';
taking the coordinate system of the binocular stereo camera as the reference frame, with the optical axis of the left-eye camera as the Z-axis direction, the baseline of the binocular stereo camera as the X-axis direction, and the vertical direction as the Y direction; for the XOZ projection area, selecting the point O as the rotation center and traversing the whole field-of-view area in fixed angular increments within the horizontal viewing angle, starting from one field-of-view boundary until the other field-of-view boundary line is reached;
during the traversal of the field-of-view area, emitting a virtual ray starting from the point O; the first grid the ray meets that is marked 'true' keeps its mark and is taken as a mark point, while all grids at other positions along the ray direction are set to 'false', so that all mark points screened as 'true' serve as the driving-area boundary points.
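The grid projection and ray traversal described above can be sketched roughly as follows. The cell size, grid extent, point-count threshold, field of view and angular increment are all placeholder values, and the simple ray-marching loop stands in for whatever traversal the embodiment actually uses.

```python
import math
import numpy as np

def boundary_points(points, cell=0.5, grid_m=80, min_pts=3,
                    fov_deg=90.0, step_deg=1.0):
    """Project 3D points onto the XOZ plane, mark cells holding at least
    min_pts points as occupied ('true'), then sweep virtual rays from the
    origin O across the horizontal field of view; the first occupied cell
    hit by each ray becomes a driving-area boundary point (z, x)."""
    counts = np.zeros((grid_m, grid_m), dtype=int)
    half = grid_m * cell / 2.0  # X axis spans [-half, half), Z spans [0, grid_m*cell)
    for x, _, z in points:
        col, row = int((x + half) / cell), int(z / cell)
        if 0 <= row < grid_m and 0 <= col < grid_m:
            counts[row, col] += 1
    occ = counts >= min_pts  # threshold screening: 'true' / 'false' grid
    marks = []
    ang = -fov_deg / 2
    while ang <= fov_deg / 2:
        theta = math.radians(ang)
        r = cell
        # march along the ray in cell-sized steps until the first 'true' cell
        while r < grid_m * cell:
            x, z = r * math.sin(theta), r * math.cos(theta)
            col, row = int((x + half) / cell), int(z / cell)
            if 0 <= row < grid_m and 0 <= col < grid_m and occ[row, col]:
                marks.append((z, x))
                break
            r += cell
        ang += step_deg
    return marks
```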
Further, the determining the current driving scene based on the coordinate values of the driving area boundary points specifically includes:
according to the coordinate values (z, x) of each mark point among the driving-area boundary points, fitting a quadratic curve equation x = c0 + c1·z + c2·z^2, where c0, c1 and c2 are coefficients;
judging the driving scene from the fitted curve coefficients [c0, c1, c2]:
if the absolute value of c2 is greater than or equal to the first threshold, the vehicle is currently in a curve scene with a turning radius of about c0;
if the absolute value of c2 is less than the first threshold and the absolute value of c1 is greater than or equal to the second threshold, the vehicle is currently in a curve scene with a turning radius of about c0;
if the absolute value of c2 is less than the first threshold and the absolute value of c1 is less than the second threshold, the vehicle is currently in a straight-road scene.
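A minimal sketch of this scene-judgment step, using numpy's polyfit for the quadratic fit; the two threshold values are placeholders, since the patent does not disclose concrete numbers, and the use of absolute values for the c1 test follows the surrounding text.

```python
import numpy as np

def classify_scene(boundary_pts, thresh1=1e-3, thresh2=0.05):
    """Fit x = c0 + c1*z + c2*z^2 to the (z, x) boundary points and apply
    the two-threshold scene test described in the method."""
    z = np.array([p[0] for p in boundary_pts])
    x = np.array([p[1] for p in boundary_pts])
    c2, c1, c0 = np.polyfit(z, x, 2)  # polyfit returns highest degree first
    if abs(c2) >= thresh1 or abs(c1) >= thresh2:
        scene = "curve"
    else:
        scene = "straight"
    return scene, (c0, c1, c2)
```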
Further, the estimating the current driving posture according to the current driving scene specifically includes:
in a curve scene, when the absolute value of c2 is greater than or equal to the first threshold and c1 is less than 0, the vehicle is currently turning left and the yaw angle is arctan(|c1|/|c0|);
in a curve scene, when the absolute value of c2 is greater than or equal to the first threshold and c1 is greater than or equal to 0, the vehicle is currently turning right and the yaw angle is -arctan(|c1|/|c0|);
in a curve scene, when the absolute value of c2 is less than the first threshold, the absolute value of c1 is greater than or equal to the second threshold, and c1 is less than 0, the vehicle is currently turning left and the yaw angle is arctan(|c1|/|c0|);
in a curve scene, when the absolute value of c2 is less than the first threshold, the absolute value of c1 is greater than or equal to the second threshold, and c1 is greater than or equal to 0, the vehicle is currently turning right and the yaw angle is -arctan(|c1|/|c0|);
when the vehicle is in a straight-road scene, the yaw angle is 0.
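The yaw-angle rules above condense into one small function. Note that the turn-direction test on the sign of c1, and the threshold defaults, are reconstructions of garbled source text and should be treated as assumptions rather than the patent's verified logic.

```python
import math

def estimate_yaw(c0, c1, c2, thresh1=1e-3, thresh2=0.05):
    """Yaw per the described rule: 0 on a straight road, otherwise
    arctan(|c1|/|c0|), positive for a left turn (c1 < 0 assumed),
    negative for a right turn (c1 >= 0 assumed)."""
    in_curve = abs(c2) >= thresh1 or abs(c1) >= thresh2
    if not in_curve:
        return 0.0
    yaw = math.atan(abs(c1) / abs(c0))
    return yaw if c1 < 0 else -yaw
```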
The invention also provides a curve scene perception system based on a binocular stereo camera, which is used for implementing the method, and the system comprises:
the data set acquisition unit is used for constructing a road surface equation under the three-dimensional world coordinate and obtaining a point cloud data set to be detected after the road surface is removed according to the road surface equation;
the boundary point marking unit is used for screening data points in the point cloud data set to be detected and obtaining a plurality of marking points, and each marking point forms a driving area boundary point;
a driving scene determination unit for determining a current driving scene based on the coordinate values of the driving region boundary points;
and the driving attitude estimation unit is used for estimating the current driving attitude according to the current driving scene.
Further, the driving scene determination unit is specifically configured to:
according to the coordinate values (z, x) of each mark point among the driving-area boundary points, fitting a quadratic curve equation x = c0 + c1·z + c2·z^2, where c0, c1 and c2 are coefficients;
judging the driving scene from the fitted curve coefficients [c0, c1, c2]:
if the absolute value of c2 is greater than or equal to the first threshold, the vehicle is currently in a curve scene with a turning radius of about c0;
if the absolute value of c2 is less than the first threshold and the absolute value of c1 is greater than or equal to the second threshold, the vehicle is currently in a curve scene with a turning radius of about c0;
if the absolute value of c2 is less than the first threshold and the absolute value of c1 is less than the second threshold, the vehicle is currently in a straight-road scene.
The invention also provides a curve scene perception device based on the binocular stereo camera, and the device comprises: the system comprises a data acquisition unit, a processor and a memory;
the data acquisition unit is used for acquiring data; the memory is to store one or more program instructions; the processor, configured to execute one or more program instructions to perform the method of any of claims 1-6.
The present invention also provides a computer readable storage medium having embodied therein one or more program instructions for executing the method as described above.
According to the curve scene sensing method, system and device based on the binocular stereo camera, a road surface equation in three-dimensional world coordinates is constructed; a point cloud data set to be detected is obtained after the road surface is removed according to that equation; the data points in the set are screened to obtain a plurality of mark points that together form the driving-area boundary points; the current driving scene is judged based on the coordinate values of those boundary points; and the current driving posture is estimated according to the scene. In automatic or assisted driving, the current vehicle driving posture can therefore be estimated without relying on additional external input information, which solves the prior-art technical problem that estimating the vehicle driving posture is difficult because the estimation must rely on external input information.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are merely exemplary, and that those of ordinary skill in the art can derive other drawings and embodiments from them without inventive effort.
The structures, ratios, sizes and the like shown in this specification are provided only to accompany the disclosed content for understanding and reading by those skilled in the art; they are not intended to limit the conditions under which the invention can be implemented and carry no essential technical significance. Any structural modification, change of ratio, or adjustment of size that does not affect the effects and objectives achievable by the invention still falls within the scope covered by the technical content disclosed herein.
FIG. 1 is a flowchart of a curve scene perception method based on a binocular stereo camera according to an embodiment of the present invention;
fig. 2 is a block diagram of a curve scene perception system based on binocular stereo cameras according to an embodiment of the present invention.
Detailed Description
The present invention is described below by way of particular embodiments; other advantages and effects of the invention will become readily apparent to those skilled in the art from this disclosure. It should be understood that the described embodiments are merely a part of the embodiments of the invention, not all of them, and are not intended to limit the invention to the particular forms disclosed. All other embodiments obtained by a person of ordinary skill in the art from the embodiments herein without creative effort fall within the protection scope of the present invention.
According to the curve scene perception method based on the binocular stereo camera provided by the invention, the vehicle's driving posture is estimated by the system's own algorithm, so no external input information is required, which improves the speed and accuracy of posture estimation in automatic-driving or assisted-driving scenarios.
In a specific embodiment, as shown in fig. 1, the method for curve scene perception based on a binocular stereo camera provided by the invention comprises the following steps:
s1: and constructing a road surface equation under the three-dimensional world coordinate, and obtaining a point cloud data set to be detected after removing the road surface according to the road surface equation. Specifically, parallax information of a binocular stereo camera is acquired; converting the parallax information into corresponding three-dimensional point cloud information; estimating the current road surface condition according to the three-dimensional point cloud information, and constructing a road surface equation under a three-dimensional world coordinate based on an estimation result; and according to the road surface equation, performing road surface area segmentation on the three-dimensional point cloud information to obtain a point cloud data set to be detected after the road surface is removed.
That is, in the implementation of the method, the disparity information disp is first obtained from the binocular stereo vision sensor and converted into corresponding three-dimensional point cloud information; road surface estimation is then performed on the point cloud to construct the road surface equation Road in three-dimensional world coordinates; finally, the point cloud is segmented according to the road surface equation to obtain the point cloud data set pts to be detected after the road surface is removed.
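The conversion from the disparity map disp to a three-dimensional point cloud is standard stereo triangulation. A sketch, assuming a rectified image pair with focal length f (pixels), baseline B (meters), and principal point (cx, cy):

```python
import numpy as np

def disparity_to_points(disp, f, baseline, cx, cy):
    """Triangulate a dense disparity map into an N x 3 point cloud:
    Z = f*B/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f. Pixels with zero or
    negative disparity (no match) are skipped."""
    v, u = np.nonzero(disp > 0)
    d = disp[v, u].astype(float)
    Z = f * baseline / d
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.column_stack([X, Y, Z])
```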
S2: and screening data points in the point cloud data set to be detected to obtain a plurality of mark points, wherein each mark point forms a driving area boundary point.
Specifically, after a point cloud data set pts to be detected is obtained, carrying out plane projection on data in the point cloud data set to be detected, and dividing a plurality of projection grid areas on a projection plane; filling numerical values in the point cloud data set to be detected in each projection grid area respectively to obtain the number of space points falling in the current projection grid area; and screening the projection grids in the field of view, and obtaining a plurality of mark points forming the boundary points of the driving area according to the screening result.
Screening projection grids in a field of view, and obtaining a plurality of mark points forming boundary points of a driving area according to a screening result, wherein the specific steps comprise:
carrying out threshold screening on the projection grids in the field of view;
judging whether the number of accumulated space points in each projection grid is greater than a threshold: if so, setting the current projection grid position to 'true', and otherwise setting it to 'false';
taking the coordinate system of the binocular stereo camera as the reference frame, with the optical axis of the left-eye camera as the Z-axis direction, the baseline of the binocular stereo camera as the X-axis direction, and the vertical direction as the Y direction; for the XOZ projection area, selecting the point O as the rotation center and traversing the whole field-of-view area in fixed angular increments within the horizontal viewing angle, starting from one field-of-view boundary until the other field-of-view boundary line is reached;
during the traversal of the field-of-view area, emitting a virtual ray starting from the point O; the first grid the ray meets that is marked 'true' keeps its mark and is taken as a mark point, while all grids at other positions along the ray direction are set to 'false', so that all mark points screened as 'true' serve as the driving-area boundary points.
That is, the coordinate system of the binocular stereo camera is taken as the reference frame: the optical axis of the left-eye camera is the Z-axis direction, the baseline of the binocular stereo camera is the X-axis direction, and the vertical direction is the Y direction. To construct the driving-area boundary points P, the pts data are first projected onto the XOZ plane, and the projection plane is divided into m rows and m columns of projection grid cells according to a physical scale. Each projection grid represents a small area of the XOZ plane, and the value filled into a grid position is the number of spatial points of pts that fall into the corresponding small area when projected onto the XOZ plane. The projection grids within the field of view are screened against a threshold: a grid position is set to 'true' when the number of accumulated points in the grid is greater than the preset threshold, and to 'false' otherwise. For the XOZ projection area, the point O is selected as the rotation center, and the whole field-of-view area is traversed in fixed angular increments θ within the horizontal viewing angle, starting from one field-of-view boundary until the other boundary line is reached. During the traversal, a virtual ray is emitted starting from the point O; the first grid the ray meets that is marked 'true' keeps its mark, while all grids at other positions along the ray direction are set to 'false'. The mark points obtained by this screening are called the driving-area boundary points P.
S3: judging the current driving scene based on the coordinate values of the driving-area boundary points. Specifically, according to the coordinate values (z, x) of each mark point among the driving-area boundary points, a quadratic curve equation x = c0 + c1·z + c2·z^2 is fitted, where c0, c1 and c2 are coefficients; the driving scene is then judged from the fitted curve coefficients [c0, c1, c2]: if the absolute value of c2 is greater than or equal to the first threshold, the vehicle is currently in a curve scene with a turning radius of about c0; if the absolute value of c2 is less than the first threshold and the absolute value of c1 is greater than or equal to the second threshold, the vehicle is currently in a curve scene with a turning radius of about c0; if the absolute value of c2 is less than the first threshold and the absolute value of c1 is less than the second threshold, the vehicle is currently in a straight-road scene.
S4: estimating the current driving attitude according to the current driving scene. Specifically: in a curve scene, when the absolute value of c2 is greater than or equal to the first threshold and c1 is less than 0, the vehicle is currently turning left and the yaw angle is arctan(|c1|/|c0|); in a curve scene, when the absolute value of c2 is greater than or equal to the first threshold and c1 is greater than or equal to 0, the vehicle is currently turning right and the yaw angle is -arctan(|c1|/|c0|); in a curve scene, when the absolute value of c2 is less than the first threshold, the absolute value of c1 is greater than or equal to the second threshold, and c1 is less than 0, the vehicle is currently turning left and the yaw angle is arctan(|c1|/|c0|); in a curve scene, when the absolute value of c2 is less than the first threshold, the absolute value of c1 is greater than or equal to the second threshold, and c1 is greater than or equal to 0, the vehicle is currently turning right and the yaw angle is -arctan(|c1|/|c0|); when the vehicle is in a straight-road scene, the yaw angle is 0.
In a specific embodiment, the curve scene sensing method based on the binocular stereo camera provided by the invention constructs a road surface equation in three-dimensional world coordinates; obtains the point cloud data set to be detected after removing the road surface according to that equation; screens the data points in the set to obtain a plurality of mark points that together form the driving-area boundary points; judges the current driving scene based on the coordinate values of those boundary points; and estimates the current driving posture according to the scene. In automatic or assisted driving, the current vehicle driving posture can therefore be estimated without relying on additional external input information, which solves the prior-art technical problem that estimating the vehicle driving posture is difficult because the estimation must rely on external input information.
In addition to the above method, the present invention further provides a curve scene sensing system based on a binocular stereo camera, for implementing the above method, as shown in fig. 2, the system comprising:
and the data set acquisition unit 100 is used for constructing a road surface equation under the three-dimensional world coordinate and obtaining a point cloud data set to be detected after the road surface is removed according to the road surface equation. The data set acquisition unit 100 is specifically configured to acquire parallax information of a binocular stereo camera; converting the parallax information into corresponding three-dimensional point cloud information; estimating the current road surface condition according to the three-dimensional point cloud information, and constructing a road surface equation under a three-dimensional world coordinate based on an estimation result; and according to the road surface equation, performing road surface area segmentation on the three-dimensional point cloud information to obtain a point cloud data set to be detected after the road surface is removed.
That is, in the implementation of the method, the data set obtaining unit 100 first obtains the disparity information disp from the binocular stereo vision sensor and converts it into corresponding three-dimensional point cloud information; road surface estimation is then performed on the point cloud to construct the road surface equation Road in three-dimensional world coordinates; finally, the point cloud is segmented according to the road surface equation to obtain the point cloud data set pts to be detected after the road surface is removed.
And the boundary point marking unit 200 is configured to screen data points in the point cloud data set to be detected, and obtain a plurality of marking points, where each marking point forms a driving area boundary point. The boundary point marking unit 200 is specifically configured to, after obtaining a point cloud data set pts to be detected, perform plane projection on data in the point cloud data set to be detected, and divide a plurality of projection grid areas on a projection plane; filling numerical values in the point cloud data set to be detected in each projection grid area respectively to obtain the number of space points falling in the current projection grid area; and screening the projection grids in the field of view, and obtaining a plurality of mark points forming the boundary points of the driving area according to the screening result.
Screening the projection grid areas within the field of view and obtaining the marker points that form the driving area boundary points specifically comprises:
performing threshold screening on the projection grid areas within the field of view;
if the number of accumulated spatial points in a projection grid area is greater than the threshold, marking that grid position 'true'; otherwise marking it 'false';
taking the coordinate system of the binocular stereo camera as the reference frame, with the optical axis of the left camera as the Z axis, the baseline of the binocular stereo camera as the X axis, and the vertical direction as the Y axis; for the XOZ projection area, selecting the origin O as the rotation center and traversing the whole field of view in fixed angular increments within the horizontal viewing angle, starting from one field-of-view boundary and ending at the other;
while traversing the field of view, casting a virtual ray from the point O; the first grid marked 'true' that the ray meets keeps its 'true' mark and becomes a marker point, while all grids farther along the ray are reset to 'false'; the marker points that remain 'true' after screening are taken as the driving area boundary points.
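The threshold screening and virtual-ray sweep above can be sketched as follows. The occupancy threshold, the horizontal field of view, and the angular and range step sizes are illustrative assumptions; the grid is the XOZ projection histogram described earlier.

```python
import numpy as np

def boundary_points(counts, z_edges, x_edges, min_pts=5,
                    half_fov_deg=45.0, step_deg=0.5):
    """Sweep virtual rays from the origin O across the horizontal FOV; per ray,
    the first occupied ('true') cell is kept as a marker point and everything
    farther along the ray is treated as 'false'."""
    occupied = counts >= min_pts                       # threshold screening
    cz = 0.5 * (z_edges[:-1] + z_edges[1:])            # cell centres
    cx = 0.5 * (x_edges[:-1] + x_edges[1:])
    marks = []
    for ang in np.arange(-half_fov_deg, half_fov_deg + 1e-9, step_deg):
        t = np.deg2rad(ang)
        for r in np.arange(0.0, z_edges[-1], 0.1):     # march along the ray
            z, x = r * np.cos(t), r * np.sin(t)
            iz = np.searchsorted(z_edges, z) - 1
            ix = np.searchsorted(x_edges, x) - 1
            if not (0 <= iz < occupied.shape[0] and 0 <= ix < occupied.shape[1]):
                continue
            if occupied[iz, ix]:
                marks.append((cz[iz], cx[ix]))         # first 'true' cell wins
                break                                  # rest of ray is 'false'
    return np.array(sorted(set(marks)))
```

Only the nearest obstacle along each viewing direction survives, which is what makes the remaining 'true' cells a boundary of the drivable area.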
That is, the coordinate system of the binocular stereo camera is taken as the reference frame: the optical axis of the left camera is the Z axis, the baseline of the binocular stereo camera is the X axis, and the vertical direction is the Y axis. To construct the driving area boundary points P, the pts data are first projected onto the XOZ plane, and the projection plane is divided into a grid of m rows and m columns according to a physical scale; each projection grid cell represents a small area of the XOZ plane, and the value stored at each grid position is the number of spatial points of pts that fall within that area when projected onto the XOZ plane. Threshold screening is applied to the projection grid within the field of view: a grid position is marked 'true' when its accumulated point count is greater than a preset threshold, and 'false' otherwise. For the XOZ projection area, the point O is chosen as the rotation center, and the whole field of view is traversed in fixed angular increments θ within the horizontal viewing angle, from one field-of-view boundary line to the other. During the traversal a virtual ray is cast from the point O; the first grid marked 'true' that the ray meets keeps its 'true' mark, and all grids farther along the ray are reset to 'false'. The marker points obtained by this screening are called the driving area boundary points P.
The driving scene determination unit 300 is configured to determine the current driving scene based on the coordinate values of the driving area boundary points. The driving scene determination unit 300 is specifically configured to:
fit a quadratic curve equation x = c0 + c1·z + c2·z², where c0, c1 and c2 are coefficients, to the coordinate values (z, x) of each marker point among the driving area boundary points;
judge the driving scene from the fitted curve coefficients [c0, c1, c2]:
if |c2| is greater than or equal to the first threshold, the current vehicle is in a curve scene with a turning radius of about c0;
if |c2| is less than the first threshold and |c1| is greater than or equal to the second threshold, the current vehicle is in a curve scene with a turning radius of about c0;
if |c2| is less than the first threshold and |c1| is less than the second threshold, the current vehicle is in a straight road scene.
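The quadratic fit and threshold-based judgment can be sketched with `numpy.polyfit`. The first and second thresholds are not given numeric values in the text, so the values below are placeholders chosen only for illustration:

```python
import numpy as np

def classify_scene(boundary_pts, thr1=0.005, thr2=0.05):
    """Fit x = c0 + c1*z + c2*z^2 to boundary points given as rows (z, x) and
    classify the scene. thr1/thr2 are illustrative thresholds, not values from
    the patent."""
    z, x = boundary_pts[:, 0], boundary_pts[:, 1]
    c2, c1, c0 = np.polyfit(z, x, 2)   # polyfit returns highest degree first
    if abs(c2) >= thr1:
        return "curve", (c0, c1, c2)   # strongly curved boundary
    if abs(c1) >= thr2:
        return "curve", (c0, c1, c2)   # nearly linear but slanted boundary
    return "straight", (c0, c1, c2)
```

With boundary points lying on a straight lane edge the fit degenerates to a line (c2 ≈ 0) and the straight-road branch is taken.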
The driving posture estimation unit 400 is configured to estimate the current driving posture from the current driving scene. The driving posture estimation unit 400 is specifically configured as follows: in a curve scene, when |c2| is greater than or equal to the first threshold and c1 is less than 0, the current vehicle is turning left and the yaw angle is arctan(|c1|/|c0|); in a curve scene, when |c2| is greater than or equal to the first threshold and c1 is greater than or equal to 0, the current vehicle is turning right and the yaw angle is -arctan(|c1|/|c0|); in a curve scene, when |c2| is less than the first threshold, |c1| is greater than or equal to the second threshold and c1 is less than 0, the current vehicle is turning left and the yaw angle is arctan(|c1|/|c0|); in a curve scene, when |c2| is less than the first threshold, |c1| is greater than or equal to the second threshold and c1 is greater than or equal to 0, the current vehicle is turning right and the yaw angle is -arctan(|c1|/|c0|); in a straight road scene, the yaw angle is 0.
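The yaw rule can be written down directly from the fitted coefficients. A minimal sketch under the reading that the sign of c1 selects left/right (positive yaw for a left turn, as in the text); it assumes c0 ≠ 0, since the formula divides by |c0|:

```python
import math

def estimate_yaw(scene, c0, c1):
    """Yaw angle from fitted coefficients c0, c1 of x = c0 + c1*z + c2*z^2.
    Curve scene: magnitude arctan(|c1|/|c0|), positive for a left turn
    (c1 < 0), negative for a right turn (c1 >= 0). Straight road: 0.
    Assumes c0 != 0."""
    if scene == "straight":
        return 0.0
    mag = math.atan(abs(c1) / abs(c0))
    return mag if c1 < 0 else -mag
```

For example, with c0 = 2 and c1 = -2 the vehicle is taken to be turning left with a yaw of arctan(1) = π/4.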
In a specific embodiment, the binocular-stereo-camera-based curve scene perception system provided by the invention constructs a road surface equation in three-dimensional world coordinates and, according to that equation, obtains the point cloud data set to be detected with the road surface removed; it screens the data points in the point cloud data set to be detected to obtain a plurality of marker points, which form the driving area boundary points; it determines the current driving scene based on the coordinate values of the driving area boundary points, and estimates the current driving posture from the current driving scene. In automatic driving or assisted driving, the current driving posture of the vehicle can therefore be estimated without relying on additional external input information; this solves the technical problem in the prior art that estimating the vehicle driving posture is difficult because it must rely on external input information.
The invention also provides a curve scene perception device based on a binocular stereo camera, the device comprising: a data acquisition unit, a processor and a memory;
the data acquisition unit is configured to acquire data; the memory is configured to store one or more program instructions; and the processor is configured to execute the one or more program instructions to perform the method described above.
Corresponding to the above embodiments, embodiments of the present invention also provide a computer storage medium containing one or more program instructions, the one or more program instructions being used by the curve scene perception system to execute the method described above.
In an embodiment of the invention, the processor may be an integrated circuit chip having signal processing capability. The processor may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The various methods, steps and logical blocks disclosed in the embodiments of the present invention may be implemented or executed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules within the decoding processor. The software module may reside in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The processor reads the information in the storage medium and completes the steps of the method in combination with its hardware.
The storage medium may be, for example, a memory, which may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory.
The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or flash memory.
The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
The storage media described in connection with the embodiments of the invention are intended to comprise, without being limited to, these and any other suitable types of memory.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functionality described in the present invention may be implemented in a combination of hardware and software. When implemented in software, the corresponding functionality may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
The above embodiments are only for illustrating the embodiments of the present invention and are not to be construed as limiting the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the embodiments of the present invention shall be included in the scope of the present invention.

Claims (10)

1. A curve scene perception method based on a binocular stereo camera is characterized by comprising the following steps:
constructing a road surface equation under a three-dimensional world coordinate, and obtaining a point cloud data set to be detected after removing the road surface according to the road surface equation;
screening data points in a point cloud data set to be detected, and obtaining a plurality of mark points, wherein each mark point forms a driving area boundary point;
judging a current driving scene based on the coordinate values of the boundary points of the driving area;
and estimating the current driving attitude according to the current driving scene.
2. The binocular stereo camera-based curve scene perception method according to claim 1, wherein a road surface equation under three-dimensional world coordinates is constructed, and a point cloud data set to be detected after a road surface is removed is obtained according to the road surface equation, and specifically the method comprises the following steps:
acquiring parallax information of a binocular stereo camera;
converting the parallax information into corresponding three-dimensional point cloud information;
estimating the current road surface condition according to the three-dimensional point cloud information, and constructing a road surface equation under a three-dimensional world coordinate based on an estimation result;
and according to the road surface equation, performing road surface area segmentation on the three-dimensional point cloud information to obtain a point cloud data set to be detected after the road surface is removed.
3. The binocular stereo camera-based curve scene perception method according to claim 1, wherein the screening of the data points in the point cloud data set to be detected and the obtaining of a plurality of marker points, each of the marker points forming a driving area boundary point, specifically comprises:
projecting the data in the point cloud data set to be detected onto a plane, and dividing the projection plane into a plurality of projection grid areas;
filling each projection grid area with a value, namely the number of spatial points of the data set falling within that projection grid area;
and screening the projection grid areas within the field of view, and obtaining, from the screening result, the plurality of marker points that form the driving area boundary points.
4. The binocular stereo camera-based curve scene perception method according to claim 1, wherein the screening of the projection grids within the field of view and the obtaining of a plurality of marker points forming the boundary points of the driving area according to the screening result specifically include:
performing threshold screening on the projection grid areas within the field of view;
if the number of accumulated spatial points in a projection grid area is greater than the threshold, marking that grid position 'true'; otherwise marking it 'false';
taking the coordinate system of the binocular stereo camera as the reference frame, with the optical axis of the left camera as the Z axis, the baseline of the binocular stereo camera as the X axis, and the vertical direction as the Y axis; for the XOZ projection area, selecting the origin O as the rotation center and traversing the whole field of view in fixed angular increments within the horizontal viewing angle, starting from one field-of-view boundary and ending at the other;
while traversing the field of view, casting a virtual ray from the point O; the first grid marked 'true' that the ray meets keeps its 'true' mark and becomes a marker point, while all grids farther along the ray are reset to 'false'; the marker points that remain 'true' after screening are taken as the driving area boundary points.
5. The binocular stereo camera-based curve scene perception method according to claim 1, wherein the judging of the current driving scene based on the coordinate values of the driving area boundary points specifically includes:
fitting a quadratic curve equation x = c0 + c1·z + c2·z², where c0, c1 and c2 are coefficients, to the coordinate values (z, x) of each marker point among the driving area boundary points;
judging the driving scene from the fitted curve coefficients [c0, c1, c2]:
if |c2| is greater than or equal to the first threshold, the current vehicle is in a curve scene with a turning radius of about c0;
if |c2| is less than the first threshold and |c1| is greater than or equal to the second threshold, the current vehicle is in a curve scene with a turning radius of about c0;
if |c2| is less than the first threshold and |c1| is less than the second threshold, the current vehicle is in a straight road scene.
6. The binocular stereo camera based curve scene perception method according to claim 1, wherein estimating a current driving posture according to a current driving scene specifically comprises:
in a curve scene, when |c2| is greater than or equal to the first threshold and c1 is less than 0, the current vehicle is turning left and the yaw angle is arctan(|c1|/|c0|);
in a curve scene, when |c2| is greater than or equal to the first threshold and c1 is greater than or equal to 0, the current vehicle is turning right and the yaw angle is -arctan(|c1|/|c0|);
in a curve scene, when |c2| is less than the first threshold, |c1| is greater than or equal to the second threshold and c1 is less than 0, the current vehicle is turning left and the yaw angle is arctan(|c1|/|c0|);
in a curve scene, when |c2| is less than the first threshold, |c1| is greater than or equal to the second threshold and c1 is greater than or equal to 0, the current vehicle is turning right and the yaw angle is -arctan(|c1|/|c0|);
when the vehicle is in a straight road scene, the yaw angle is 0.
7. A binocular stereo camera based curve scene perception system for implementing the method according to any one of claims 1-6, the system comprising:
the data set acquisition unit is used for constructing a road surface equation under the three-dimensional world coordinate and obtaining a point cloud data set to be detected after the road surface is removed according to the road surface equation;
the boundary point marking unit is used for screening data points in the point cloud data set to be detected and obtaining a plurality of marking points, and each marking point forms a driving area boundary point;
a driving scene determination unit for determining a current driving scene based on the coordinate values of the driving region boundary points;
and the driving attitude estimation unit is used for estimating the current driving attitude according to the current driving scene.
8. The binocular stereo camera based curve scene perception system according to claim 7, wherein the driving scene determination unit is specifically configured to:
fitting a quadratic curve equation x = c0 + c1·z + c2·z², where c0, c1 and c2 are coefficients, to the coordinate values (z, x) of each marker point among the driving area boundary points;
judging the driving scene from the fitted curve coefficients [c0, c1, c2]:
if |c2| is greater than or equal to the first threshold, the current vehicle is in a curve scene with a turning radius of about c0;
if |c2| is less than the first threshold and |c1| is greater than or equal to the second threshold, the current vehicle is in a curve scene with a turning radius of about c0;
if |c2| is less than the first threshold and |c1| is less than the second threshold, the current vehicle is in a straight road scene.
9. A curve scene perception device based on a binocular stereo camera is characterized in that the device comprises: the system comprises a data acquisition unit, a processor and a memory;
the data acquisition unit is used for acquiring data; the memory is to store one or more program instructions; the processor, configured to execute one or more program instructions to perform the method of any of claims 1-6.
10. A computer-readable storage medium having one or more program instructions embodied therein for performing the method of any of claims 1-6.
CN202011276838.5A 2020-11-16 2020-11-16 Bend scene sensing method, system and device based on binocular stereo camera Active CN112465831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011276838.5A CN112465831B (en) 2020-11-16 2020-11-16 Bend scene sensing method, system and device based on binocular stereo camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011276838.5A CN112465831B (en) 2020-11-16 2020-11-16 Bend scene sensing method, system and device based on binocular stereo camera

Publications (2)

Publication Number Publication Date
CN112465831A true CN112465831A (en) 2021-03-09
CN112465831B CN112465831B (en) 2023-10-20

Family

ID=74837527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011276838.5A Active CN112465831B (en) 2020-11-16 2020-11-16 Bend scene sensing method, system and device based on binocular stereo camera

Country Status (1)

Country Link
CN (1) CN112465831B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113140002A (en) * 2021-03-22 2021-07-20 北京中科慧眼科技有限公司 Road condition detection method and system based on binocular stereo camera and intelligent terminal
CN113267137A (en) * 2021-05-28 2021-08-17 北京易航远智科技有限公司 Real-time measurement method and device for tire deformation
CN115205501A (en) * 2022-08-10 2022-10-18 小米汽车科技有限公司 Method, device, equipment and medium for displaying road surface condition

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103098111A (en) * 2010-09-24 2013-05-08 丰田自动车株式会社 Track estimation device and program
CN106846369A (en) * 2016-12-14 2017-06-13 广州市联奥信息科技有限公司 Vehicular turn condition discrimination method and device based on binocular vision
CN107358168A (en) * 2017-06-21 2017-11-17 海信集团有限公司 A kind of detection method and device in vehicle wheeled region, vehicle electronic device
CN108267747A (en) * 2017-01-03 2018-07-10 中交宇科(北京)空间信息技术有限公司 Road feature extraction method and apparatus based on laser point cloud
US20200003869A1 (en) * 2018-07-02 2020-01-02 Beijing Didi Infinity Technology And Development Co., Ltd. Vehicle navigation system using pose estimation based on point cloud
CN110834630A (en) * 2019-10-22 2020-02-25 中国第一汽车股份有限公司 Vehicle driving control method and device, vehicle and storage medium
CN111007531A (en) * 2019-12-24 2020-04-14 电子科技大学 Road edge detection method based on laser point cloud data
CN111832373A (en) * 2019-05-28 2020-10-27 北京伟景智能科技有限公司 Automobile driving posture detection method based on multi-view vision

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103098111A (en) * 2010-09-24 2013-05-08 丰田自动车株式会社 Track estimation device and program
CN106846369A (en) * 2016-12-14 2017-06-13 广州市联奥信息科技有限公司 Vehicular turn condition discrimination method and device based on binocular vision
CN108267747A (en) * 2017-01-03 2018-07-10 中交宇科(北京)空间信息技术有限公司 Road feature extraction method and apparatus based on laser point cloud
CN107358168A (en) * 2017-06-21 2017-11-17 海信集团有限公司 A kind of detection method and device in vehicle wheeled region, vehicle electronic device
US20200003869A1 (en) * 2018-07-02 2020-01-02 Beijing Didi Infinity Technology And Development Co., Ltd. Vehicle navigation system using pose estimation based on point cloud
CN111832373A (en) * 2019-05-28 2020-10-27 北京伟景智能科技有限公司 Automobile driving posture detection method based on multi-view vision
CN110834630A (en) * 2019-10-22 2020-02-25 中国第一汽车股份有限公司 Vehicle driving control method and device, vehicle and storage medium
CN111007531A (en) * 2019-12-24 2020-04-14 电子科技大学 Road edge detection method based on laser point cloud data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shi Jinjin: "Research on Vision-Based Road Recognition and Obstacle Detection Methods for Intelligent Vehicles", China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, no. 1 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113140002A (en) * 2021-03-22 2021-07-20 北京中科慧眼科技有限公司 Road condition detection method and system based on binocular stereo camera and intelligent terminal
CN113267137A (en) * 2021-05-28 2021-08-17 北京易航远智科技有限公司 Real-time measurement method and device for tire deformation
CN113267137B (en) * 2021-05-28 2023-02-03 北京易航远智科技有限公司 Real-time measurement method and device for tire deformation
CN115205501A (en) * 2022-08-10 2022-10-18 小米汽车科技有限公司 Method, device, equipment and medium for displaying road surface condition

Also Published As

Publication number Publication date
CN112465831B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN112465831A (en) Curve scene perception method, system and device based on binocular stereo camera
CN108805934B (en) External parameter calibration method and device for vehicle-mounted camera
WO2018142900A1 (en) Information processing device, data management device, data management system, method, and program
CN112906449A (en) Dense disparity map-based road surface pothole detection method, system and equipment
US20200193641A1 (en) Method and apparatus for calibrating the extrinsic parameter of an image sensor
CN114495043B (en) Method and system for detecting up-and-down slope road conditions based on binocular vision system and intelligent terminal
JP6515650B2 (en) Calibration apparatus, distance measuring apparatus and calibration method
CN114509045A (en) Wheel area elevation detection method and system
CN112562093B (en) Object detection method, electronic medium, and computer storage medium
CN110926408A (en) Short-distance measuring method, device and system based on characteristic object and storage medium
CN113965742B (en) Dense disparity map extraction method and system based on multi-sensor fusion and intelligent terminal
CN110969666A (en) Binocular camera depth calibration method, device and system and storage medium
CN108389228B (en) Ground detection method, device and equipment
CN113140002B (en) Road condition detection method and system based on binocular stereo camera and intelligent terminal
CN111382591A (en) Binocular camera ranging correction method and vehicle-mounted equipment
CN113284194A (en) Calibration method, device and equipment for multiple RS (remote sensing) equipment
CN111563936A (en) Camera external parameter automatic calibration method and automobile data recorder
CN111754574A (en) Distance testing method, device and system based on binocular camera and storage medium
CN111627067B (en) Calibration method of binocular camera and vehicle-mounted equipment
CN115100621A (en) Ground scene detection method and system based on deep learning network
EP3629292A1 (en) Reference point selection for extrinsic parameter calibration
CN113674275B (en) Dense disparity map-based road surface unevenness detection method and system and intelligent terminal
CN115546314A (en) Sensor external parameter calibration method and device, equipment and storage medium
CN114821497A (en) Method, device and equipment for determining position of target object and storage medium
CN112767498A (en) Camera calibration method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant