CN114037977A - Road vanishing point detection method, device, equipment and storage medium - Google Patents

Road vanishing point detection method, device, equipment and storage medium

Info

Publication number
CN114037977A
Authority
CN
China
Prior art keywords
vanishing point
image
road
vehicle
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210012039.XA
Other languages
Chinese (zh)
Other versions
CN114037977B (en)
Inventor
俞吉
刘国清
杨广
王启程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Youjia Innovation Technology Co.,Ltd.
Original Assignee
Shenzhen Minieye Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Minieye Innovation Technology Co Ltd
Priority to CN202210012039.XA
Publication of CN114037977A
Application granted
Publication of CN114037977B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a road vanishing point detection method, device, equipment and storage medium. A front image and vehicle inertia data are acquired while the vehicle is travelling, the image passable area of the front image is determined, and coordinate extraction is performed on the image passable area to determine the current coordinate of the road vanishing point, so that the edge pixel coordinates of the image passable area can be detected and used to calculate the vanishing point coordinate without depending on the detection of parallel lane lines. Based on a preset Kalman filter, the road vanishing point is then dynamically detected using the vehicle inertia data and the current coordinate to obtain the coordinate change trajectory of the road vanishing point, so that the vanishing point is stably tracked with the inertial sensor. This avoids the loss of detection accuracy caused by assuming that the road is a horizontal plane and ensures the accuracy of vanishing point detection.

Description

Road vanishing point detection method, device, equipment and storage medium
Technical Field
The application relates to the technical field of intelligent driving assistance, and in particular to a road vanishing point detection method, device, equipment and storage medium.
Background
In an advanced driver assistance system, the camera is an important sensor for acquiring images and for recognizing, tracking, ranging and measuring the speed of targets such as vehicles, pedestrians and lane lines. Monocular vision must rely on a dynamic, real-time road vanishing point when tracking, ranging and measuring the speed of a target. In perspective projection, the projections of a bundle of parallel lines that are parallel to the projection plane remain parallel, while the projections of parallel lines that are not parallel to the projection plane converge to a single point, which is called the vanishing point.
At present, dynamic road vanishing point detection mainly relies on two methods: 1. assume that the road surface is horizontal and that the mounting position and angle of the camera on the vehicle are fixed, so that the vanishing point position is also fixed; 2. detect mutually parallel lane lines and calculate the vanishing point from the intersection of the lane lines in the image. However, the road surface is not always a horizontal plane, so method 1 has large errors in many scenes, and when there are no clear lane lines or the lane lines are not parallel, method 2 cannot achieve dynamic vanishing point detection.
Disclosure of Invention
The application provides a road vanishing point detection method, device, equipment and storage medium, and aims to solve the technical problem of poor accuracy in current dynamic road vanishing point detection.
In order to solve the above technical problem, an embodiment of the present application provides a method for detecting a road vanishing point, including:
acquiring a front image and vehicle inertia data when a vehicle runs;
determining an image passable area of the front image, wherein the image passable area is the image region onto which the road range in which the vehicle can normally drive in the physical world is perspectively projected;
extracting coordinates from the image passable area, and determining the current coordinate of the road vanishing point;
and based on a preset Kalman filter, dynamically detecting the road vanishing point by using the vehicle inertia data and the current coordinate to obtain the coordinate change track of the road vanishing point.
According to this embodiment, the front image and the vehicle inertia data obtained while the vehicle is travelling are used to determine the image passable area of the front image, and coordinate extraction is performed on the image passable area to determine the current coordinate of the road vanishing point, so that the edge pixel coordinates of the image passable area can be detected and used to calculate the vanishing point coordinate without depending on the detection of parallel lane lines. Based on a preset Kalman filter, the road vanishing point is dynamically detected using the vehicle inertia data and the current coordinate to obtain the coordinate change trajectory of the road vanishing point, and the vanishing point is stably tracked with the inertial sensor, which avoids the loss of detection accuracy caused by assuming that the road is a horizontal plane and ensures the accuracy of vanishing point detection.
In one embodiment, determining an image passable area for a front image comprises:
and segmenting the front image by using a preset image segmentation model to obtain an edge pixel coordinate set of the passable area of the image.
In one embodiment, the coordinate extraction of the image passable area is performed, and the current coordinate of the road vanishing point is determined, which comprises the following steps:
according to the edge pixel abscissa of the image passable area, carrying out segmentation extraction on the edge pixel ordinate of the image passable area to obtain a plurality of minimum peaks of the edge pixel ordinate;
and performing mean value operation on the plurality of minimum peak values to obtain a first vertical coordinate of the passable area of the image, wherein the edge pixel horizontal coordinate and the first vertical coordinate form the current coordinate of the road vanishing point.
In one embodiment, acquiring front images and vehicle inertia data while a vehicle is traveling includes:
acquiring a front image at the time k when the vehicle runs by using a camera on the vehicle;
vehicle inertia data from time k-1 to time k is acquired using inertial sensors on the vehicle.
In one embodiment, based on a preset kalman filter, the method for dynamically detecting a road vanishing point by using vehicle inertia data and a current coordinate to obtain a coordinate change trajectory of the road vanishing point includes:
predicting a second ordinate of the road vanishing point by using the vehicle inertia data;
determining a target state covariance matrix of a Kalman filter;
and dynamically detecting the road vanishing point according to the second vertical coordinate and the target state covariance matrix to obtain the coordinate change track of the road vanishing point.
In one embodiment, predicting a second ordinate of a road vanishing point using vehicle inertial data comprises:
and predicting a second ordinate of the road vanishing point according to the longitudinal normalized focal length of the camera of the vehicle and the vehicle inertia data.
In one embodiment, determining a target state covariance matrix for a Kalman filter comprises:
determining a second state covariance matrix of the Kalman filter at the current moment according to a first state covariance matrix of the Kalman filter at the previous moment and system noise of the Kalman filter at the current moment;
determining a gain matrix of a Kalman filter according to vanishing point noise of a passable area of an image;
and updating the second state covariance matrix by using the gain matrix to obtain a target state covariance matrix.
In a second aspect, an embodiment of the present application provides a device for detecting a road vanishing point, including:
the acquisition module is used for acquiring a front image and vehicle inertia data when the vehicle runs;
the determining module is used for determining an image passable area of the front image, wherein the image passable area is the image region onto which the road range in which the vehicle can normally drive in the physical world is perspectively projected;
the extraction module is used for extracting coordinates of the image passable area and determining the current coordinates of the road vanishing points;
and the detection module is used for dynamically detecting the road vanishing point by using the vehicle inertia data and the current coordinate based on a preset Kalman filter to obtain the coordinate change track of the road vanishing point.
In a third aspect, an embodiment of the present application provides a terminal device, including a processor and a memory, where the memory is used to store a computer program, and the computer program is executed by the processor to implement the method for detecting a road vanishing point according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the method for detecting a road vanishing point according to the first aspect.
Please refer to the relevant description of the first aspect for the beneficial effects of the second to fourth aspects, which are not repeated herein.
Drawings
Fig. 1 is a schematic flowchart of a road vanishing point detecting method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an image coordinate system provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a road vanishing point detecting device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As described in the related art, dynamic road vanishing point detection mainly relies on two methods: 1. assume that the road surface is horizontal and that the mounting position and angle of the camera on the vehicle are fixed, so that the vanishing point position is also fixed; 2. detect mutually parallel lane lines and calculate the vanishing point from the intersection of the lane lines in the image. However, the road surface is not always a horizontal plane, so method 1 has large errors in many scenes, and when there are no clear lane lines or the lane lines are not parallel, method 2 cannot achieve dynamic vanishing point detection.
Therefore, the embodiments of the application provide a road vanishing point detection method, device, equipment and storage medium. A front image and vehicle inertia data are acquired while the vehicle is travelling, the image passable area of the front image is determined, and coordinate extraction is performed on the image passable area to determine the current coordinate of the road vanishing point, so that the edge pixel coordinates of the image passable area can be detected and used to calculate the vanishing point coordinate without depending on the detection of parallel lane lines. Based on a preset Kalman filter, the road vanishing point is then dynamically detected using the vehicle inertia data and the current coordinate to obtain the coordinate change trajectory of the road vanishing point, so that the vanishing point is stably tracked with the inertial sensor, which avoids the loss of detection accuracy caused by assuming that the road is a horizontal plane and ensures the accuracy of vanishing point detection.
Referring to fig. 1, fig. 1 is a schematic flowchart of a road vanishing point detection method according to an embodiment of the present application. The method can be applied to a terminal device, including but not limited to a vehicle-mounted terminal, a tablet computer or a notebook computer. As shown in fig. 1, the road vanishing point detection method of this embodiment includes steps S101 to S104, detailed as follows:
in step S101, a front image and vehicle inertia data are acquired while the vehicle is traveling.
In this step, a camera and an inertial sensor are provided on the vehicle, and the inertial sensor may be a gyroscope. While the vehicle is travelling, the image in front of the vehicle is collected in real time by the camera, and the vehicle inertia data are collected by the inertial sensor.
Step S102, determining an image passable area of the front image, wherein the image passable area is the image region onto which the road range in which the vehicle can normally drive in the physical world is perspectively projected.
In this step, the image passable area of the front image is predicted using an image segmentation model based on a deep learning algorithm. Optionally, an image segmentation model is constructed based on a deep learning algorithm and trained with road images until it reaches a preset convergence condition, yielding the trained image segmentation model; the front image is then input into the trained image segmentation model, which outputs the image passable area. In this embodiment, determining the image passable area of the front image allows the edge pixel coordinates of the passable area to be detected, so that the vanishing point coordinate can be calculated without depending on the detection of parallel lane lines.
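For illustration only, the following is a minimal Python sketch of how the edge pixel coordinate set might be derived from a binary segmentation mask; the function name `passable_area_edge`, the hypothetical `segmentation_model`, and the choice of taking the topmost passable pixel in each image column are assumptions made for this example and are not prescribed by this embodiment.

```python
import numpy as np

def passable_area_edge(mask: np.ndarray) -> np.ndarray:
    """Collect edge pixel coordinates (u_i, v_i) of a binary passable-area mask.

    mask: H x W array with 1 where a pixel belongs to the image passable area.
    For every image column u that contains passable pixels, the edge point is
    taken as the topmost passable pixel (smallest ordinate v) in that column.
    """
    edge_points = []
    for u in range(mask.shape[1]):
        rows = np.flatnonzero(mask[:, u])        # passable pixels in column u
        if rows.size:
            edge_points.append((u, rows.min()))  # topmost passable pixel
    return np.asarray(edge_points)               # shape (N, 2): rows are (u_i, v_i)

# mask = segmentation_model(front_image)         # hypothetical trained model
# edge_points = passable_area_edge(mask)
```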
And step S103, extracting coordinates of the image passable area, and determining the current coordinates of the road vanishing points.
In this step, as shown in fig. 2, the current coordinate comprises a pixel abscissa and a pixel ordinate in the image coordinate system, where v in fig. 2 is the pixel ordinate and u is the pixel abscissa. The pixel points of the image passable area are divided into segments according to the edge pixel abscissa, the peak ordinate of the passable area vertex is extracted in each segment, and the mean of these peak ordinates is calculated to obtain the pixel ordinate of the vanishing point.
And S104, based on a preset Kalman filter, dynamically detecting the road vanishing point by using the vehicle inertia data and the current coordinate to obtain a coordinate change track of the road vanishing point.
In this step, a Kalman filter fuses the vehicle inertia data with the current coordinate, so that the coordinate of the road vanishing point at the current moment is tracked from the real-time vehicle inertia data and the current coordinate. This embodiment stably tracks the vanishing point with the inertial sensor, which avoids the loss of detection accuracy caused by assuming that the road is a horizontal plane and ensures the accuracy of vanishing point detection.
In an embodiment, on the basis of the embodiment shown in fig. 1, the step S101 includes:
acquiring a front image at the time k when the vehicle runs by using a camera on the vehicle;
and acquiring vehicle inertia data from the moment k-1 to the moment k by using an inertia sensor on the vehicle.
In this embodiment, the image acquired by the camera at time k is recorded as I_k, and the gyroscope data from time k-1 to time k are acquired through the inertial sensor. Optionally, the gyroscope data are the angular velocity ω along the direction of the longitudinal axis of the image.
In an embodiment, based on the embodiment shown in fig. 1, the step S102 includes:
and segmenting the front image by using a preset image segmentation model to obtain an edge pixel coordinate set of the image passable area.
In this embodiment, the image passable area is predicted by an image segmentation model based on a deep learning algorithm. The image passable area is generally represented by the pixel coordinates of its edge, (u_i, v_i), where v is the pixel ordinate in the image coordinate system, u is the pixel abscissa in the image coordinate system, and the subscript i denotes the i-th point of the passable area edge, so that the set of all edge points of the image passable area is {(u_1, v_1), (u_2, v_2), ..., (u_n, v_n)}.
In an embodiment, on the basis of the embodiment shown in fig. 1, the step S103 includes:
according to the edge pixel abscissa of the image passable area, carrying out segmentation extraction on the edge pixel ordinate of the image passable area to obtain a plurality of minimum peak values of the edge pixel ordinate;
and carrying out mean value operation on a plurality of the minimum peak values to obtain a first vertical coordinate of the image passable area, wherein the edge pixel horizontal coordinate and the first vertical coordinate form the current coordinate of the road vanishing point.
In the present embodiment, as shown in fig. 2, the edge points are divided into segments according to the edge pixel abscissa of the image passable area, and the m minimum peak values of the edge pixel ordinate, {v*_1, v*_2, ..., v*_m}, are extracted segment by segment. A mean value operation is then performed on these m peak values to obtain the edge pixel ordinate of the road vanishing point at time k, namely the first ordinate:

z_k = (v*_1 + v*_2 + ... + v*_m) / m
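A minimal sketch of this measurement step is shown below, assuming the edge points are given as an (N, 2) array of (u_i, v_i) pairs; the function name and the default number of segments m are illustrative assumptions only.

```python
import numpy as np

def first_ordinate(edge_points: np.ndarray, m: int = 8) -> float:
    """Compute z_k: split the edge points into m segments along the abscissa u,
    take the minimum ordinate v in each segment (the highest point of the
    passable area within that segment), and average the m minima."""
    u, v = edge_points[:, 0], edge_points[:, 1]
    v_sorted_by_u = v[np.argsort(u)]             # order the ordinates along u
    segments = np.array_split(v_sorted_by_u, m)  # m segments over the abscissa
    minima = [seg.min() for seg in segments if seg.size > 0]
    return float(np.mean(minima))                # z_k, the first ordinate
```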
In an embodiment, based on the embodiment shown in fig. 1, the step S104 includes:
predicting a second ordinate of the road vanishing point by using the vehicle inertia data;
determining a target state covariance matrix of the Kalman filter;
and dynamically detecting the road vanishing point according to the second vertical coordinate and the target state covariance matrix to obtain a coordinate change track of the road vanishing point.
In this embodiment, the target state covariance matrix is determined based on the state covariance matrix at the previous time. The Kalman filter uses the second ordinate predicted from the vehicle inertia data to filter and correct the measured first ordinate, so as to improve the detection accuracy of the road vanishing point.
Optionally, the Kalman filter performs filtering correction to obtain the ordinate of the road vanishing point at time k:

v_k = v_k^- + K_k · (z_k - v_k^-)

where v_k is the edge pixel ordinate of the road vanishing point at time k, which, combined with the edge pixel abscissa, gives the position coordinate of the road vanishing point at time k; v_k^- is the second ordinate of the road vanishing point at time k; K_k is the gain matrix of the Kalman filter at time k; and z_k is the first ordinate of the road vanishing point at time k. It can be understood that the road vanishing point coordinates at different moments, that is, the coordinate change trajectory, are obtained by calculating the first ordinate, the second ordinate and the gain matrix at each moment.
Optionally, the predicting a second ordinate of the road vanishing point using the vehicle inertial data includes: and predicting a second ordinate of the road vanishing point according to the longitudinal normalized focal length of the camera of the vehicle and the vehicle inertia data.
In this alternative embodiment, the second ordinate of the road vanishing point at time k is predicted using the gyroscope data:

v_k^- = v_{k-1}^- + f_y · ω · (t_k - t_{k-1})

where v_{k-1}^- is the second ordinate at time k-1, ω is the angular velocity acquired by the gyroscope from time k-1 to time k, t_k and t_{k-1} are respectively the time points at times k and k-1, and f_y is the longitudinal normalized focal length of the camera.
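Expressed as code, the prediction step could look like the following sketch; the function name and argument layout are assumptions made for this example.

```python
def predict_second_ordinate(v_prev: float, omega: float,
                            t_prev: float, t_k: float, fy: float) -> float:
    """Predict the second ordinate v_k^- of the road vanishing point.

    v_prev : ordinate carried over from time k-1
    omega  : angular velocity from the gyroscope between k-1 and k (rad/s)
    fy     : longitudinal normalized focal length of the camera
    A rotation of omega * (t_k - t_prev) shifts the vanishing point by roughly
    fy * omega * (t_k - t_prev) pixels along the image ordinate axis.
    """
    return v_prev + fy * omega * (t_k - t_prev)
```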
In an embodiment, the determining the target state covariance matrix of the kalman filter includes:
determining a second state covariance matrix of the Kalman filter at the current moment according to a first state covariance matrix of the Kalman filter at the previous moment and system noise of the Kalman filter at the current moment;
determining a gain matrix of the Kalman filter according to vanishing point noise of the image passable area;
and updating the second state covariance matrix by using the gain matrix to obtain the target state covariance matrix.
In this embodiment, the second state covariance matrix at time k is predicted from the first state covariance matrix at the previous time:

P_k^- = P_{k-1} + Q_k

where P_{k-1} is the target state covariance matrix at the previous time and Q_k is the system noise at the current time. The gain matrix is then calculated as

K_k = P_k^- (P_k^- + R_k)^(-1)

where R_k is the noise of the vanishing point ordinate calculated from the passable area of each frame image. Finally, the state covariance matrix is updated to obtain the target state covariance matrix at time k:

P_k = (I - K_k) P_k^-
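The following scalar sketch wires the prediction, gain and update equations above into one filter loop. It assumes, as in a standard Kalman recursion, that each prediction starts from the previous filtered estimate; all names and initial values are illustrative rather than part of this embodiment.

```python
class VanishingPointKalman:
    """Scalar Kalman filter over the road vanishing point ordinate (sketch)."""

    def __init__(self, v0: float, p0: float, q: float, r: float, fy: float):
        self.v = v0    # filtered ordinate of the vanishing point
        self.p = p0    # target state covariance
        self.q = q     # system noise Q_k
        self.r = r     # measurement noise R_k of the passable-area ordinate
        self.fy = fy   # longitudinal normalized focal length

    def step(self, omega: float, dt: float, z_k: float) -> float:
        # Prediction: second ordinate and second state covariance.
        v_pred = self.v + self.fy * omega * dt
        p_pred = self.p + self.q
        # Gain (scalar here) from the measurement noise.
        k_gain = p_pred / (p_pred + self.r)
        # Correction with the measured first ordinate z_k, then covariance update.
        self.v = v_pred + k_gain * (z_k - v_pred)
        self.p = (1.0 - k_gain) * p_pred
        return self.v  # filtered ordinate of the road vanishing point at time k
```

In use, z_k would come from the passable-area measurement of the current frame and omega, dt from the inertial sensor; collecting the returned ordinate (together with the edge pixel abscissa) at each time step yields the coordinate change trajectory of the road vanishing point.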
in order to implement the road vanishing point detection method corresponding to the method embodiment, corresponding functions and technical effects are achieved. Referring to fig. 3, fig. 3 is a block diagram illustrating a structure of a device for detecting a road vanishing point according to an embodiment of the present application. For convenience of explanation, only the parts related to the present embodiment are shown, and the detection apparatus for a road vanishing point provided in the embodiment of the present application includes:
an obtaining module 301, configured to obtain a front image and vehicle inertia data when a vehicle is running;
a determining module 302, configured to determine an image passable area of the front image, where the image passable area is the image region onto which the road range in which the vehicle can normally drive in the physical world is perspectively projected;
an extraction module 303, configured to perform coordinate extraction on the image passable area, and determine a current coordinate of a road vanishing point;
and the detection module 304 is configured to perform dynamic detection on the road vanishing point by using the vehicle inertia data and the current coordinate based on a preset kalman filter, so as to obtain a coordinate change trajectory of the road vanishing point.
In an embodiment, the determining module 302 includes:
and the segmentation unit is used for segmenting the front image by using a preset image segmentation model to obtain an edge pixel coordinate set of the image passable area.
In one embodiment, the extracting module 303 includes:
the extraction unit is used for carrying out segmentation extraction on the edge pixel ordinate of the image passable area according to the edge pixel abscissa of the image passable area to obtain a plurality of minimum peaks of the edge pixel ordinate;
and the operation unit is used for performing mean operation on the minimum peak values to obtain a first vertical coordinate of the image passable area, and the edge pixel horizontal coordinate and the first vertical coordinate form the current coordinate of the road vanishing point.
In an embodiment, the obtaining module 301 includes:
the first acquisition unit is used for acquiring a front image at the time k when the vehicle runs by using a camera on the vehicle;
and the second acquisition unit is used for acquiring vehicle inertia data from the moment k-1 to the moment k by using an inertia sensor on the vehicle.
In one embodiment, the detection module 304 includes:
the prediction unit is used for predicting a second ordinate of the road vanishing point by using the vehicle inertia data;
the determining unit is used for determining a target state covariance matrix of the Kalman filter;
and the detection unit is used for dynamically detecting the road vanishing point according to the second vertical coordinate and the target state covariance matrix to obtain a coordinate change track of the road vanishing point.
In an embodiment, the prediction unit is specifically configured to:
and predicting a second ordinate of the road vanishing point according to the longitudinal normalized focal length of the camera of the vehicle and the vehicle inertia data.
In one embodiment, the determining unit includes:
the first determining subunit is used for determining a second state covariance matrix of the Kalman filter at the current moment according to a first state covariance matrix of the Kalman filter at the previous moment and the system noise at the current moment;
the second determining subunit is used for determining a gain matrix of the Kalman filter according to the vanishing point noise of the image passable area;
and the updating subunit is used for updating the second state covariance matrix by using the gain matrix to obtain the target state covariance matrix.
The road vanishing point detecting device can implement the road vanishing point detecting method of the method embodiment. The alternatives in the above-described method embodiments are also applicable to this embodiment and will not be described in detail here. The rest of the embodiments of the present application may refer to the contents of the above method embodiments, and in this embodiment, details are not described again.
Fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 4, the terminal device 4 of this embodiment includes: at least one processor 40 (only one shown in fig. 4), a memory 41, and a computer program 42 stored in the memory 41 and executable on the at least one processor 40, the processor 40 implementing the steps of any of the method embodiments described above when executing the computer program 42.
The terminal device 4 may be a computing device such as a smart phone, a tablet computer, a desktop computer or a cloud server. The terminal device may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will appreciate that fig. 4 is merely an example of the terminal device 4 and does not constitute a limitation of the terminal device 4, which may include more or fewer components than those shown, or combine some components, or use different components, such as an input-output device, a network access device, and the like.
The processor 40 may be a Central Processing Unit (CPU), or another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 41 may in some embodiments be an internal storage unit of the terminal device 4, such as a hard disk or a memory of the terminal device 4. In other embodiments, the memory 41 may also be an external storage device of the terminal device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like provided on the terminal device 4. Further, the memory 41 may also include both an internal storage unit and an external storage device of the terminal device 4. The memory 41 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 41 may also be used to temporarily store data that has been output or is to be output.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in any of the method embodiments described above.
The embodiments of the present application provide a computer program product, which when running on a terminal device, enables the terminal device to implement the steps in the above method embodiments when executed.
In several embodiments provided herein, it will be understood that each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a terminal device to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are further detailed to explain the objects, technical solutions and advantages of the present application, and it should be understood that the above-mentioned embodiments are only examples of the present application and are not intended to limit the scope of the present application. It should be understood that any modifications, equivalents, improvements and the like, which come within the spirit and principle of the present application, may occur to those skilled in the art and are intended to be included within the scope of the present application.

Claims (10)

1. A method for detecting a road vanishing point is characterized by comprising the following steps:
acquiring a front image and vehicle inertia data when a vehicle runs;
determining an image passable area of the front image, wherein the image passable area is the image region onto which the road range in which the vehicle can normally drive in the physical world is perspectively projected;
extracting coordinates of the image passable area, and determining the current coordinates of the road vanishing points;
and based on a preset Kalman filter, dynamically detecting the road vanishing point by using the vehicle inertia data and the current coordinate to obtain a coordinate change track of the road vanishing point.
2. The method of claim 1, wherein the determining an image passable area of the front image comprises:
and segmenting the front image by using a preset image segmentation model to obtain an edge pixel coordinate set of the image passable area.
3. The method for detecting a vanishing point on a road according to claim 1, wherein the extracting coordinates of the image passable area to determine the current coordinates of the vanishing point on the road comprises:
according to the edge pixel abscissa of the image passable area, carrying out segmentation extraction on the edge pixel ordinate of the image passable area to obtain a plurality of minimum peak values of the edge pixel ordinate;
and carrying out mean value operation on a plurality of the minimum peak values to obtain a first vertical coordinate of the image passable area, wherein the edge pixel horizontal coordinate and the first vertical coordinate form the current coordinate of the road vanishing point.
4. The method of claim 1, wherein the acquiring forward images and vehicle inertial data while the vehicle is traveling comprises:
acquiring a front image at the time k when the vehicle runs by using a camera on the vehicle;
and acquiring vehicle inertia data from the moment k-1 to the moment k by using an inertia sensor on the vehicle.
5. The method for detecting a road vanishing point according to any one of claims 1 to 4, wherein the dynamically detecting the road vanishing point by using the vehicle inertia data and the current coordinate based on a preset Kalman filter to obtain a coordinate change trajectory of the road vanishing point comprises:
predicting a second ordinate of the road vanishing point by using the vehicle inertia data;
determining a target state covariance matrix of the Kalman filter;
and dynamically detecting the road vanishing point according to the second vertical coordinate and the target state covariance matrix to obtain a coordinate change track of the road vanishing point.
6. The method of claim 5, wherein predicting the second ordinate of the road vanishing point using the vehicle inertial data comprises:
and predicting a second ordinate of the road vanishing point according to the longitudinal normalized focal length of the camera of the vehicle and the vehicle inertia data.
7. The method of claim 1, wherein the determining the target state covariance matrix of the kalman filter comprises:
determining a second state covariance matrix of the Kalman filter at the current moment according to a first state covariance matrix of the Kalman filter at the previous moment and system noise of the Kalman filter at the current moment;
determining a gain matrix of the Kalman filter according to vanishing point noise of the image passable area;
and updating the second state covariance matrix by using the gain matrix to obtain the target state covariance matrix.
8. A detection device of road vanishing point, characterized by comprising:
the acquisition module is used for acquiring a front image and vehicle inertia data when the vehicle runs;
the determining module is used for determining an image passable area of the front image, wherein the image passable area is the image region onto which the road range in which the vehicle can normally drive in the physical world is perspectively projected;
the extraction module is used for extracting coordinates of the image passable area and determining the current coordinates of the road vanishing points;
and the detection module is used for dynamically detecting the road vanishing point by using the vehicle inertia data and the current coordinate based on a preset Kalman filter to obtain a coordinate change track of the road vanishing point.
9. A terminal device, characterized in that it comprises a processor and a memory for storing a computer program which, when executed by the processor, implements the method of road vanishing point detection according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the method of road vanishing point detection according to any one of claims 1 to 7.
CN202210012039.XA 2022-01-07 2022-01-07 Road vanishing point detection method, device, equipment and storage medium Active CN114037977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210012039.XA CN114037977B (en) 2022-01-07 2022-01-07 Road vanishing point detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210012039.XA CN114037977B (en) 2022-01-07 2022-01-07 Road vanishing point detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114037977A true CN114037977A (en) 2022-02-11
CN114037977B CN114037977B (en) 2022-04-26

Family

ID=80147417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210012039.XA Active CN114037977B (en) 2022-01-07 2022-01-07 Road vanishing point detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114037977B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100080467A1 (en) * 2006-11-08 2010-04-01 Kazuyuki Sakurai Vanishing point detecting system, vanishing point detecting method, and vanishing point detecting program
WO2016151976A1 (en) * 2015-03-26 2016-09-29 パナソニックIpマネジメント株式会社 Moving body detection device, image processing device, moving body detection method, and integrated circuit
CN108647572A (en) * 2018-04-04 2018-10-12 华南理工大学 A kind of lane departure warning method based on Hough transformation
CN108932472A (en) * 2018-05-23 2018-12-04 中国汽车技术研究中心有限公司 A kind of automatic Pilot running region method of discrimination based on lane detection
US20200125861A1 (en) * 2018-10-22 2020-04-23 Toyota Jidosha Kabushiki Kaisha Road line detection device and road line detection method
CN110414329A (en) * 2019-06-19 2019-11-05 上海眼控科技股份有限公司 A kind of vehicle heading judgment method based on image
CN111797809A (en) * 2020-07-20 2020-10-20 吉林大学 Driver vision fusion method for automatic driving trajectory tracking
CN112146620A (en) * 2020-11-25 2020-12-29 腾讯科技(深圳)有限公司 Target object ranging method and device
CN112907678A (en) * 2021-01-25 2021-06-04 深圳佑驾创新科技有限公司 Vehicle-mounted camera external parameter attitude dynamic estimation method and device and computer equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557616A (en) * 2024-01-11 2024-02-13 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Method, device and equipment for determining pitch angle and estimating depth of monocular camera
CN117557616B (en) * 2024-01-11 2024-04-02 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Method, device and equipment for determining pitch angle and estimating depth of monocular camera

Also Published As

Publication number Publication date
CN114037977B (en) 2022-04-26

Similar Documents

Publication Publication Date Title
JP6670071B2 (en) Vehicle image recognition system and corresponding method
EP3633539A2 (en) Method for position detection, device, and storage medium
WO2018177026A1 (en) Device and method for determining road edge
US11205284B2 (en) Vehicle-mounted camera pose estimation method, apparatus, and system, and electronic device
CN109345593B (en) Camera posture detection method and device
US11436815B2 (en) Method for limiting object detection area in a mobile system equipped with a rotation sensor or a position sensor with an image sensor, and apparatus for performing the same
JP2020052585A (en) Lane line recognition device
CN111213153A (en) Target object motion state detection method, device and storage medium
WO2017130285A1 (en) Vehicle determination apparatus, vehicle determination method, and vehicle determination program
CN112927309A (en) Vehicle-mounted camera calibration method and device, vehicle-mounted camera and storage medium
CN112529935B (en) Target vehicle motion state identification method and device
CN114037977B (en) Road vanishing point detection method, device, equipment and storage medium
CN115493612A (en) Vehicle positioning method and device based on visual SLAM
US20200193184A1 (en) Image processing device and image processing method
CN114972427A (en) Target tracking method based on monocular vision, terminal equipment and storage medium
CN114919584A (en) Motor vehicle fixed point target distance measuring method and device and computer readable storage medium
CN116863124B (en) Vehicle attitude determination method, controller and storage medium
CN112902911B (en) Ranging method, device, equipment and storage medium based on monocular camera
JP2013069045A (en) Image recognition device, image recognition method, and image recognition program
CN116772858A (en) Vehicle positioning method, device, positioning equipment and storage medium
CN116703979A (en) Target tracking method, device, terminal and storage medium
CN110570680A (en) Method and system for determining position of object using map information
EP3288260A1 (en) Image processing device, imaging device, equipment control system, equipment, image processing method, and carrier means
CN114943836A (en) Trailer angle detection method and device and electronic equipment
JP2004038760A (en) Traveling lane recognition device for vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address
Address after: Floor 25, Block A, Zhongzhou Binhai Commercial Center Phase II, No. 9285, Binhe Boulevard, Shangsha Community, Shatou Street, Futian District, Shenzhen, Guangdong 518000
Patentee after: Shenzhen Youjia Innovation Technology Co.,Ltd.
Address before: 518051 401, building 1, Shenzhen new generation industrial park, No. 136, Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong Province
Patentee before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.