CN110197104B - Distance measurement method and device based on vehicle - Google Patents

Distance measurement method and device based on vehicle

Info

Publication number
CN110197104B
CN110197104B
Authority
CN
China
Prior art keywords
image
image acquisition
acquisition assembly
vehicle
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810162648.7A
Other languages
Chinese (zh)
Other versions
CN110197104A (en)
Inventor
谭伟 (Tan Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201810162648.7A priority Critical patent/CN110197104B/en
Publication of CN110197104A publication Critical patent/CN110197104A/en
Application granted granted Critical
Publication of CN110197104B publication Critical patent/CN110197104B/en
Legal status: Active


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S 11/12 Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00 Registering or indicating the working of vehicles
    • G07C 5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841 Registering performance data
    • G07C 5/085 Registering performance data using electronic data carriers
    • G07C 5/0866 Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera

Abstract

The invention relates to a vehicle-based distance measurement method and device, and belongs to the field of intelligent driving. The method comprises the following steps: after two frames of images containing a target object are acquired through an image acquisition assembly, at least one pair of matched feature points of the target object in the two frames of images is determined, wherein the two frames of images reflect the condition of the surrounding environment of a vehicle, and the image acquisition assembly is arranged on the vehicle; a pose transformation model of the image acquisition assembly is established using the deflection angle of the vehicle and the speed of the vehicle; and the coordinates of the target three-dimensional point in the three-dimensional coordinate system where the vehicle is located are determined based on the at least one pair of matched feature points and the pose transformation model of the image acquisition assembly. This solves the problems in the related art of poor distance-measurement reliability and low driving safety, and improves both the reliability of distance measurement and driving safety. The method is used for vehicle distance measurement.

Description

Distance measurement method and device based on vehicle
Technical Field
The invention relates to the field of intelligent driving, in particular to a distance measuring method and device based on a vehicle.
Background
An Advanced Driver Assistance System (ADAS) collects environmental data inside and outside a vehicle in real time, processes the data, and enables the driver to detect possible danger in the shortest time based on the processing result, thereby improving the safety of the vehicle. The vehicle-based ranging function is one of the main functions of an ADAS; it measures the distance between a vehicle in front and the host vehicle, mainly by means of a camera.
In the related art, in order to measure the distance between the preceding vehicle and the host vehicle, the inherent characteristics of the preceding vehicle are generally studied, and the object is detected by learning those inherent characteristics, yielding the position information of the preceding vehicle and hence the distance between the preceding vehicle and the host vehicle. Common inherent characteristics include the shading of the bottom of the vehicle, the symmetry of the vehicle edges, the brightness of vehicle pixels, or the texture of the vehicle.
However, the existing vehicle-based distance measurement methods can only measure the distance to vehicles ahead on the road; they cannot determine the position information of an arbitrary three-dimensional point in the space where the vehicle is located, so the reliability of distance measurement is poor and driving safety is low.
Disclosure of Invention
The embodiments of the invention provide a vehicle-based distance measurement method and device, which can solve the problems of poor reliability and low driving safety of vehicle distance measurement in the related art. The technical solution is as follows:
according to a first aspect of embodiments of the present invention, there is provided a vehicle-based ranging method, the method comprising:
after two frames of images containing a target object are acquired through an image acquisition assembly, determining at least one pair of matched feature points of the target object in the two frames of images, wherein the two frames of images are used for reflecting the condition of the surrounding environment of a vehicle, and the image acquisition assembly is arranged on the vehicle;
establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle;
and determining the coordinates of a target three-dimensional point in a three-dimensional coordinate system of the vehicle based on the at least one pair of matched feature points and the pose transformation model of the image acquisition assembly.
Optionally, the two frame images include a first frame image and a second frame image, the first frame image is acquired earlier than the second frame image,
before the establishing of the pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle, the method further comprises the following steps:
determining a first position of a blanking point in the first frame image and a second position of the blanking point in the second frame image, wherein the blanking point is an intersection point of an extension line of a left lane line and an extension line of a right lane line in the image;
determining a first pitch angle of the image acquisition assembly based on the first position;
determining a second pitch angle of the image acquisition assembly based on the second position;
the establishing of the pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle comprises the following steps:
and when the first pitch angle and the second pitch angle are not equal, establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle based on the first pitch angle and the second pitch angle.
Optionally, the first position and the second position of the blanking point are characterized by an ordinate of the blanking point in a two-dimensional coordinate system of the image,
the determining a first pitch angle of the image acquisition assembly based on the first position comprises:
determining a first pitch angle of the image acquisition assembly based on a vertical coordinate of the blanking point in the first frame image by adopting a pitch angle calculation formula;
the determining a second pitch angle of the image acquisition assembly based on the second position comprises:
determining a second pitch angle of the image acquisition assembly based on a vertical coordinate of the blanking point in the second frame image by using the pitch angle calculation formula;
wherein, the pitch angle calculation formula is as follows:
α = −arctan((y0 − yv)/f), where α is the pitch angle of the image acquisition assembly, y0 is the ordinate of the optical center of the image acquisition assembly in the image, yv is the ordinate of the blanking point in the image, and f is the equivalent focal length of the image acquisition assembly.
Optionally, the establishing a pose transformation model of the image capturing assembly by using the deflection angle of the vehicle and the speed of the vehicle includes:
acquiring a function of 6 degrees of freedom of the image acquisition assembly based on the motion state of the vehicle when shooting operation is executed by adopting the deflection angle of the vehicle and the speed of the vehicle;
taking the function of the 6 degrees of freedom as a pose transformation model of the image acquisition assembly;
the function of the 6 degrees of freedom includes a change amount of a pitch angle of the image acquisition assembly, a change amount of a deflection angle of the image acquisition assembly, a change amount of a rotation angle of the image acquisition assembly, a lateral displacement of the image acquisition assembly, a displacement of the image acquisition assembly generated along a vertical direction of the image acquisition assembly, and a displacement of the image acquisition assembly generated along an optical axis direction of the image acquisition assembly from an acquisition time of the first frame image to an acquisition time of the second frame image.
Optionally, the at least one pair of matching feature points comprises a pair of matching feature points,
the determining at least one pair of matching feature points of the target object in the two frames of images comprises:
extracting first feature points in the first frame image, and determining a first feature descriptor for expressing the first feature points;
extracting second feature points in the second frame image, and determining a second feature descriptor for expressing the second feature points;
calculating the similarity of the first feature descriptor and the second feature descriptor;
and when the similarity is greater than the preset similarity, taking the first feature point and the second feature point as a pair of matched feature points.
Optionally, the determining, based on the at least one pair of matched feature points and the pose transformation model of the image capturing assembly, coordinates of a target three-dimensional point in a three-dimensional coordinate system in which the vehicle is located includes:
determining a first matrix of the image acquisition assembly when acquiring the first frame of image;
determining a second matrix of the image acquisition assembly when the second frame of image is acquired according to the pose transformation model;
and determining the coordinates of the target three-dimensional point according to the coordinates of the first characteristic point, the first matrix, the coordinates of the second characteristic point and the second matrix.
Optionally, the determining a first position of a blanking point in the first frame image includes:
determining a first equation corresponding to a left lane line and a second equation corresponding to a right lane line in the first frame image;
determining a first position of the blanking point in the first frame image based on the first equation and the second equation.
Optionally, after acquiring two frames of images including a target object by an image acquisition component, before determining at least one pair of matching feature points of the target object in the two frames of images, the method further includes:
acquiring the first frame image;
determining the target object in the first frame image;
acquiring the second frame image;
determining a target object in the second frame image based on the first frame image and the determined target object.
Optionally, the determining the target object in the first frame image includes:
and determining the position and the category of the target object in the first frame image by adopting a deep learning algorithm.
Optionally, the determining a target object in the second frame image based on the first frame image and the determined target object includes:
and determining a target object in the second frame image based on the first frame image and the determined target object by adopting a target tracking algorithm.
Optionally, the image acquisition assembly is a monocular camera.
According to a second aspect of embodiments of the present invention, there is provided a vehicle-based ranging apparatus, the apparatus comprising:
the characteristic point determining module is used for determining at least one pair of matched characteristic points of a target object in two frames of images after the two frames of images containing the target object are acquired through an image acquisition assembly, the two frames of images are used for reflecting the condition of the surrounding environment of a vehicle, and the image acquisition assembly is arranged on the vehicle;
the model establishing module is used for establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle;
and the coordinate determination module is used for determining the coordinates of a target three-dimensional point in the three-dimensional coordinate system where the vehicle is located based on the at least one pair of matched feature points and the pose transformation model of the image acquisition assembly.
Optionally, the two frames of images include a first frame of image and a second frame of image, the first frame of image is collected earlier than the second frame of image, and the apparatus further includes:
the position determining module is used for determining a first position of a blanking point in the first frame image and a second position of the blanking point in the second frame image, wherein the blanking point is an intersection point of an extension line of a left lane line and an extension line of a right lane line in the image;
a first pitch angle determination module to determine a first pitch angle of the image acquisition assembly based on the first position;
a second pitch angle determination module for determining a second pitch angle of the image capture assembly based on the second position;
the model building module is used for:
and when the first pitch angle and the second pitch angle are not equal, establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle based on the first pitch angle and the second pitch angle.
Optionally, the first position and the second position of the blanking point are characterized by an ordinate of the blanking point in a two-dimensional coordinate system of the image,
the first pitch angle determination module to:
determining a first pitch angle of the image acquisition assembly based on a vertical coordinate of the blanking point in the first frame image by adopting a pitch angle calculation formula;
the second pitch angle determination module is configured to:
determining a second pitch angle of the image acquisition assembly based on a vertical coordinate of the blanking point in the second frame image by using the pitch angle calculation formula;
wherein, the pitch angle calculation formula is as follows:
α = −arctan((y0 − yv)/f), where α is the pitch angle of the image acquisition assembly, y0 is the ordinate of the optical center of the image acquisition assembly in the image, yv is the ordinate of the blanking point in the image, and f is the equivalent focal length of the image acquisition assembly.
Optionally, the model building module is configured to:
acquiring a function of 6 degrees of freedom of the image acquisition assembly based on the motion state of the vehicle when shooting operation is executed by adopting the deflection angle of the vehicle and the speed of the vehicle;
taking the function of the 6 degrees of freedom as a pose transformation model of the image acquisition assembly;
the function of the 6 degrees of freedom includes a change amount of a pitch angle of the image acquisition assembly, a change amount of a deflection angle of the image acquisition assembly, a change amount of a rotation angle of the image acquisition assembly, a lateral displacement of the image acquisition assembly, a displacement of the image acquisition assembly generated along a vertical direction of the image acquisition assembly, and a displacement of the image acquisition assembly generated along an optical axis direction of the image acquisition assembly from an acquisition time of the first frame image to an acquisition time of the second frame image.
Optionally, the at least one pair of matching feature points comprises a pair of matching feature points,
the feature point determination module is configured to:
extracting first feature points in the first frame image, and determining a first feature descriptor for expressing the first feature points;
extracting second feature points in the second frame image, and determining a second feature descriptor for expressing the second feature points;
calculating the similarity of the first feature descriptor and the second feature descriptor;
and when the similarity is greater than the preset similarity, taking the first feature point and the second feature point as a pair of matched feature points.
Optionally, the coordinate determination module is configured to:
determining a first matrix of the image acquisition assembly when acquiring the first frame of image;
determining a second matrix of the image acquisition assembly when the second frame of image is acquired according to the pose transformation model;
and determining the coordinates of the target three-dimensional point according to the coordinates of the first characteristic point, the first matrix, the coordinates of the second characteristic point and the second matrix.
Optionally, the position determining module is configured to:
determining a first equation corresponding to a left lane line and a second equation corresponding to a right lane line in the first frame image;
determining a first position of the blanking point in the first frame image based on the first equation and the second equation.
Optionally, the apparatus further comprises:
the image acquisition module is used for acquiring the first frame image;
a target object determination module for determining the target object in the first frame image;
the image acquisition module is further used for acquiring the second frame image;
the target object determination module is further configured to determine a target object in the second frame image based on the first frame image and the determined target object.
Optionally, the target object determining module is configured to:
and determining the position and the category of the target object in the first frame image by adopting a deep learning algorithm.
Optionally, the target object determining module is configured to:
and determining a target object in the second frame image based on the first frame image and the determined target object by adopting a target tracking algorithm.
Optionally, the image acquisition assembly is a monocular camera.
According to a third aspect of embodiments of the present invention, there is provided a computer device comprising a processor, a communication interface, a memory and a communication bus,
the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory to implement the vehicle-based ranging method according to the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the vehicle-based ranging method of the first aspect.
According to a fifth aspect of embodiments of the present invention, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the vehicle-based ranging method provided by the first aspect described above.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
After two frames of images containing a target object are acquired through the image acquisition assembly, the image acquisition assembly can determine at least one pair of matching feature points of the target object in the two frames of images; a pose transformation model of the image acquisition assembly is then established using the deflection angle of the vehicle and the speed of the vehicle; and the coordinates of the target three-dimensional point in the three-dimensional coordinate system where the vehicle is located are determined based on the at least one pair of matching feature points and the pose transformation model of the image acquisition assembly. This solves the problems in the related art of poor distance-measurement reliability and low driving safety, and improves both the reliability of distance measurement and driving safety.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
In order to illustrate the embodiments of the present invention more clearly, the drawings that are needed in the description of the embodiments will be briefly described below, it being apparent that the drawings in the following description are only some embodiments of the invention, and that other drawings may be derived from those drawings by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of an implementation environment in which various embodiments of the present invention are implemented;
FIG. 2 is a flow chart illustrating a method of a vehicle-based ranging method according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating a method of another vehicle-based ranging method according to an exemplary embodiment;
FIG. 4 is a schematic diagram of the positions of two target objects in the first frame image in the embodiment shown in FIG. 3;
FIG. 5 is a schematic diagram of the motion trajectories of two target objects in the embodiment shown in FIG. 3;
FIG. 6 is a flow chart of a method for determining at least one pair of matching feature points of the target object in the embodiment shown in FIG. 3;
FIG. 7 is a flow chart of a method for determining at least one pair of matching feature points of the target object in the embodiment shown in FIG. 3;
FIG. 8 is a flow chart of a method for modeling pose transformation of an image capture assembly in the embodiment of FIG. 3;
FIG. 9 is a flow chart of a method for determining the coordinates of a target three-dimensional point in the embodiment shown in FIG. 3;
FIG. 10 is a schematic illustration of a pair of matching feature points in the embodiment of FIG. 3;
FIG. 11 is a flowchart illustrating a method of yet another vehicle-based ranging method in accordance with an exemplary embodiment;
FIG. 12 is a schematic illustration of blanking points in the embodiment of FIG. 11;
FIG. 13 is a flow chart of a method of determining a first location of a blanking point in the embodiment of FIG. 11;
FIG. 14 is a block diagram illustrating a vehicle-based ranging device in accordance with an exemplary embodiment;
FIG. 15 is a block diagram illustrating yet another vehicle-based ranging device in accordance with an exemplary embodiment;
FIG. 16 is a block diagram illustrating a computer device according to an example embodiment.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic diagram of an implementation environment according to various embodiments of the present invention is shown. As shown in fig. 1, the environment includes a vehicle 01, on which an image capturing assembly is disposed for measuring the distance between an object in front of the vehicle 01 and the vehicle. In the embodiment of the present invention, the image capturing component may be a monocular camera, which can be used both to identify the category of the target object and to measure distance; for example, it may identify that the target object is a vehicle, a pedestrian, a bicycle, a motorcycle, a traffic sign, a road sign or a signal lamp.
FIG. 2 is a flow chart illustrating a method of a vehicle-based ranging method that may be performed by an image capture assembly disposed on the vehicle shown in FIG. 1, according to an exemplary embodiment. Referring to fig. 2, the method may include the following steps:
step 201, after two frames of images are acquired by an image acquisition assembly, determining at least one pair of matching feature points of the target object in the two frames of images, wherein the two frames of images are used for reflecting the condition of the surrounding environment of the vehicle, and the image acquisition assembly is arranged on the vehicle.
Step 202, establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle.
And step 203, determining the coordinates of the target three-dimensional point in the three-dimensional coordinate system where the vehicle is located based on the at least one pair of matched feature points and the pose transformation model of the image acquisition assembly.
The three-dimensional coordinate system may be a coordinate system centered on the image capturing component, and correspondingly, the coordinates of the target three-dimensional point are the coordinates of the target three-dimensional point in the coordinate system centered on the image capturing component.
After the coordinates of the target three-dimensional point in the three-dimensional coordinate system where the vehicle is located are determined, the distance between the target three-dimensional point and the vehicle can be obtained, and then the distance measurement operation of the target three-dimensional point is completed.
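For illustration, once the coordinates of the target three-dimensional point are known in a coordinate system centered on the image acquisition assembly, the distance reduces to a Euclidean norm. A minimal Python sketch, assuming camera-centered coordinates in meters (the function name is ours, not the patent's):

```python
import math

def distance_to_point(x: float, y: float, z: float) -> float:
    # Euclidean distance from the image acquisition assembly (origin of the
    # camera-centered coordinate system) to the target three-dimensional point.
    return math.sqrt(x * x + y * y + z * z)

# Example: a point 1.2 m to the left, 0.5 m up and 20 m ahead of the camera.
print(distance_to_point(-1.2, 0.5, 20.0))  # ~20.04 m
```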
In summary, in the vehicle-based distance measurement method provided by the embodiment of the invention, after two frames of images containing the target object are acquired by the image acquisition assembly, at least one pair of matching feature points of the target object in the two frames of images is determined; a pose transformation model of the image acquisition assembly is then established using the deflection angle of the vehicle and the speed of the vehicle; and the coordinates of the target three-dimensional point in the three-dimensional coordinate system where the vehicle is located are determined based on the at least one pair of matching feature points and the pose transformation model of the image acquisition assembly. Because the coordinates of any three-dimensional point in that coordinate system can be determined in this way, the problems in the related art of poor distance-measurement reliability and low driving safety are solved, and the reliability of distance measurement and driving safety are improved.
Optionally, in a first implementation, it is assumed that the vehicle does not bump longitudinally, so the pitch angle of the image acquisition assembly is unaffected and does not change. In this case, a pose transformation model of the image acquisition assembly may be established using the yaw angle of the vehicle and the speed of the vehicle, so as to determine the coordinates of the target three-dimensional point in the three-dimensional coordinate system where the vehicle is located.
In a second implementation, the vehicle is considered to bump longitudinally, and the pitch angle of the image acquisition assembly is affected by that bump, so the pitch angle of the image acquisition assembly changes. In this case, a pose transformation model of the image acquisition assembly may be established using the deflection angle of the vehicle and the speed of the vehicle based on how the pitch angle of the image acquisition assembly changes, so as to determine the coordinates of the target three-dimensional point in the three-dimensional coordinate system where the vehicle is located with improved accuracy. The vehicle-based ranging method is described below taking these two implementations as examples.
In a first implementation, for example, referring to fig. 3, the vehicle-based ranging method may include the following steps:
step 301, collecting a first frame image.
The image acquisition assembly arranged on the vehicle acquires a first frame of image, and the first frame of image is used for reflecting the surrounding environment of the vehicle. For example, the image capture assembly may be a monocular camera.
Step 302, determining a target object in the first frame image.
Optionally, step 302 may include: and determining the position and the category of the target object in the first frame image by adopting a deep learning algorithm.
The deep learning algorithm may be an offline learning algorithm or an online learning algorithm. For example, when determining the position of the target object, a two-dimensional coordinate system may be established with the upper left corner of the first frame image as the origin. Wherein, the position of the target object in the first frame image can be represented by (x, y, w, h), x represents the abscissa of the target object in the first frame image, y represents the ordinate of the target object in the first frame image, w represents the width of the target object in the first frame image, and h represents the height of the target object in the first frame image. The category of the target object may be a vehicle, a pedestrian, a bicycle, a motorcycle, a traffic sign, a road sign or a signal light, etc.
FIG. 4 is a schematic diagram of the positions of two target objects in the first frame image. The position of the target object M1 in the first frame image is (x1, y1, w1, h1); that is, in the first frame image the abscissa of the target object M1 is x1, its ordinate is y1, its width is w1 and its height is h1. The position of the target object M2 in the first frame image is (x2, y2, w2, h2); that is, in the first frame image the abscissa of the target object M2 is x2, its ordinate is y2, its width is w2 and its height is h2.
Step 302 is used to detect a target object, and the specific process may refer to the related art.
And step 303, acquiring a second frame image.
The second frame image is used to reflect the vehicle surroundings.
Step 304, determining a target object in the second frame image based on the first frame image and the determined target object.
Optionally, step 304 may include: and determining a target object in the second frame image based on the first frame image and the determined target object by adopting a target tracking algorithm.
For example, the target tracking algorithm may be an optical flow-based tracking algorithm, a color-based tracking algorithm, or a CSK-based tracking algorithm, which is not limited by the embodiment of the present invention. The target tracking algorithm can be used for correlating the characteristics of the target object such as position, shape, appearance and the like at different moments to obtain the motion trail of the target object.
Fig. 5 is a schematic diagram illustrating the motion trajectories of two target objects. The position of the target object M1 in the first frame image is (x1, y1, w1, h1), and its position in the second frame image is (x1', y1', w1', h1'); the position of the target object M2 in the first frame image is (x2, y2, w2, h2), and its position in the second frame image is (x2', y2', w2', h2'). In fig. 5, the size of the first frame image is equal to that of the second frame image.
Step 304 is used to track the target object, and the specific process may refer to the related art.
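As an illustration of one tracking option named above, the following Python/OpenCV sketch tracks points from the first frame into the second with pyramidal Lucas-Kanade optical flow; the window size and pyramid depth are illustrative choices, not values from the patent:

```python
import cv2
import numpy as np

def track_points(first_gray, second_gray, points):
    # Track feature points from the first frame image into the second frame
    # image using pyramidal Lucas-Kanade optical flow (one possible tracker).
    pts = np.float32(points).reshape(-1, 1, 2)
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        first_gray, second_gray, pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1  # keep only points that were tracked successfully
    return pts[ok].reshape(-1, 2), next_pts[ok].reshape(-1, 2)
```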
And step 305, determining at least one pair of matched feature points of the target object in the two frames of images.
The two frame images include the first frame image in step 301 and the second frame image in step 303.
In this step, at least one pair of matching feature points of the target object in the two frame images can be determined based on the positions of the target object determined in steps 302 and 304, and at least one pair of matching feature points of the target object can be determined based on the position of the target object, so that the matching error is reduced, and the resources are saved.
At least one pair of matching feature points may include a pair of matching feature points, may also include two pairs of matching feature points, and may further include three pairs of matching feature points, and now, taking the example that at least one pair of matching feature points includes a pair of matching feature points, the step 305 is described, as shown in fig. 6, the step 305 includes:
step 3051, extracting a first feature point in the first frame image, and determining a first feature descriptor for expressing the first feature point.
Optionally, the image acquisition component may extract the first feature point in the first frame image using, for example, a Features from Accelerated Segment Test (FAST) corner detector, and express the first feature point with the first feature descriptor. For example, the first feature descriptor may be a Scale-Invariant Feature Transform (SIFT) feature descriptor.
Step 3052, extracting a second feature point in the second frame image, and determining a second feature descriptor for expressing the second feature point.
Optionally, the image capturing component may extract the second feature point in the second frame image in the same manner as the first feature point was extracted in step 3051, and determine the second feature descriptor expressing the second feature point using the same method.
And step 3053, calculating the similarity of the first feature descriptor and the second feature descriptor.
The specific process of step 3053 can refer to the related art.
And step 3054, when the similarity is greater than the preset similarity, taking the first feature point and the second feature point as a pair of matched feature points.
Assuming that the similarity of the first and second feature descriptors calculated in step 3053 is 0.8 and the preset similarity is 0.5, the image acquisition component may regard the first and second feature points as a pair of matching feature points.
Taking the two target objects shown in fig. 5 as an example, the position of the target object M1 in the first frame image is (x1, y1, w1, h1) and its position in the second frame image is (x1', y1', w1', h1'); the position of the target object M2 in the first frame image is (x2, y2, w2, h2) and its position in the second frame image is (x2', y2', w2', h2'). Two pairs of matching feature points of the target object M1 and one pair of matching feature points of the target object M2 are obtained through the foregoing matching steps. For example, as shown in fig. 7, the two pairs of matching feature points of the target object M1 are: A and A', and B and B'; the pair of matching feature points of the target object M2 is: C and C'.
Further, in order to improve the matching accuracy, a RANdom SAmple Consensus (RANSAC) algorithm may be used to perform the mismatch removal processing on the matching result. Reference may be made to the related art with respect to this process.
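For illustration, steps 3051 to 3054 plus the RANSAC mismatch removal can be sketched as follows in Python with OpenCV. ORB stands in here for the FAST-corner/SIFT-descriptor combination named in the text, and the ratio threshold is an illustrative stand-in for the preset similarity:

```python
import cv2
import numpy as np

def match_feature_points(first_roi, second_roi, ratio=0.75):
    # Extract feature points and descriptors in the target-object regions of
    # both frames, keep descriptor matches that clear a similarity threshold,
    # then prune mismatches with RANSAC.
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(first_roi, None)
    kp2, des2 = orb.detectAndCompute(second_roi, None)
    if des1 is None or des2 is None:
        return []
    pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
    if len(good) >= 4:  # RANSAC-based mismatch removal
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        _H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        if mask is not None:
            good = [m for m, keep in zip(good, mask.ravel()) if keep]
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```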
Step 305 is used to locate the key points of the target object, and the specific process may refer to the related art.
And step 306, establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle.
In the step, the image acquisition assembly establishes a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle. The vehicle yaw angle and the vehicle speed may be transmitted to the image capturing component through a Controller Area Network (CAN) bus.
Optionally, as shown in fig. 8, step 306 may include:
Step 3061, using the yaw angle of the vehicle and the speed of the vehicle, obtain the function of the 6 degrees of freedom of the image acquisition assembly based on the motion state of the vehicle when the shooting operation is performed.
Wherein the function of the 6 degrees of freedom comprises, from the acquisition time of the first frame image to the acquisition time of the second frame image, the variation r_x of the pitch angle of the image acquisition assembly, the variation r_y of the deflection angle of the image acquisition assembly, the variation r_z of the rotation angle of the image acquisition assembly, the lateral displacement t_x of the image acquisition assembly, the displacement t_y of the image acquisition assembly along its vertical direction, and the displacement t_z of the image acquisition assembly along its optical axis direction. The vertical direction of the image acquisition assembly is perpendicular to both its lateral direction and its optical axis direction, and the three directions follow the right-hand rule.
In the related art, an object that is not constrained in any way can move in space with six degrees of freedom: longitudinal, lateral and vertical translation, plus roll, pitch and yaw. In the embodiment of the invention, the function of the 6 degrees of freedom of the image acquisition assembly is obtained based on the motion state of the vehicle when the shooting operation is performed.
In the embodiment of the present invention, the image acquired at time t1 is referred to as the first frame image and the image acquired at time t2 as the second frame image, with 0 < t1 < t2; that is, the acquisition time of the first frame image is t1 and the acquisition time of the second frame image is t2. Assume the vehicle obtains a yaw angle of yaw rad/s and a speed of v m/s. If the rotation angle of the image acquisition assembly is constant, then r_z = 0; if the vehicle does not bump longitudinally, the pitch angle of the image acquisition assembly is unchanged and r_x = 0. The function of the 6 degrees of freedom acquired by the image acquisition assembly is then given by formula (1), which expresses r_y, t_x, t_y and t_z in terms of yaw, v and the frame interval t2 − t1.
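A minimal Python sketch of the 6-degree-of-freedom function under formula (1)'s stated assumptions (r_x = r_z = 0) follows; the exact expressions for t_x, t_y and t_z are our assumed planar-motion model, not necessarily the patent's own formula:

```python
import math

def pose_change_no_pitch(yaw_rate, speed, t1, t2):
    # 6-DoF change of the image acquisition assembly between the two frame
    # times, assuming no longitudinal bump (r_x = 0) and constant roll
    # (r_z = 0). Translation model: planar motion at the given speed with
    # the given yaw rate over the interval t2 - t1 (assumption).
    dt = t2 - t1
    r_x, r_z = 0.0, 0.0
    r_y = yaw_rate * dt            # yaw accumulated over the frame interval
    s = speed * dt                 # path length travelled
    t_x = s * math.sin(r_y)        # lateral displacement (assumed model)
    t_y = 0.0                      # no vertical motion without bump
    t_z = s * math.cos(r_y)       # displacement along the optical axis
    return r_x, r_y, r_z, t_x, t_y, t_z
```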
step 3062, the function of 6 degrees of freedom is used as a pose transformation model of the image acquisition assembly.
The image capture assembly takes the function of 6 degrees of freedom obtained in step 3061 as the pose transformation model of the image capture assembly, which can be shown in equation (1).
Step 306 is for modeling the motion state of the image capture assembly based on the motion state of the vehicle, the pose transformation model of the image capture assembly being for determining the coordinates of the target three-dimensional point in the three-dimensional coordinate system in which the vehicle is located.
And 307, determining the coordinates of the target three-dimensional point in the three-dimensional coordinate system where the vehicle is located based on the at least one pair of matched feature points and the pose transformation model of the image acquisition assembly.
The image capturing component determines coordinates of a target three-dimensional point in the three-dimensional coordinate system of the vehicle, which may be any three-dimensional point in the space, based on the at least one pair of matching feature points obtained in step 305 and the pose transformation model of the image capturing component obtained in step 306. The three-dimensional coordinate system may be centered on the vehicle or centered on the image capturing assembly, which is not limited in the embodiment of the present invention.
Optionally, as shown in fig. 9, step 307 may include:
step 3071, determining a first matrix of the image capture assembly when the first frame of image is captured.
For example, a pair of matching feature points of the target object is: A and A'. A is located in the first frame image and A' is located in the second frame image. The acquisition time of the first frame image is t1, and the acquisition time of the second frame image is t2. Referring to fig. 10, assume that at time t1 the center of the image acquisition assembly is located at point O and the coordinates of the feature point A in the first frame image are (x_A, y_A), and that at time t2 the center of the image acquisition assembly is located at point O' and the coordinates of the feature point A' in the second frame image are (x_A', y_A').
Assuming that the image acquisition component is a monocular camera, the monocular camera establishes its first matrix P based on a monocular camera imaging model; P represents the imaging relationship between a point in the world coordinate system and a point in the monocular camera coordinate system. The determination process of the first matrix of the monocular camera may refer to the related art.
The first matrix P of the image acquisition assembly may be represented as

P = [ f 0 x0 0 ; 0 f y0 0 ; 0 0 1 0 ] (2)
wherein x0 is the abscissa of the optical center of the image acquisition assembly in the first frame image, y0 is the ordinate of the optical center of the image acquisition assembly in the first frame image, and f is the equivalent focal length of the image acquisition assembly.
And 3072, determining a second matrix of the image acquisition assembly when the second frame of image is acquired according to the pose transformation model.
Taking fig. 10 as an example, the image capturing component determines, according to the pose transformation model shown in formula (1) determined in step 3062, a second matrix P' of the image capturing assembly at the time the second frame image is captured, which may be represented as

P' = [ f 0 x0 ; 0 f y0 ; 0 0 1 ] · [ M11 M12 M13 M14 ; M21 M22 M23 M24 ; M31 M32 M33 M34 ] (3)

wherein x0 is the abscissa of the optical center of the image acquisition assembly in the first frame image, y0 is the ordinate of the optical center of the image acquisition assembly in the first frame image, and f is the equivalent focal length of the image acquisition assembly. Formula (4) gives the expressions of M11, M12, M13, M14, M21, M22, M23, M24, M31, M32, M33 and M34: M11 through M33 are the entries of the rotation matrix built from the shorthands sx, cx, sy, cy, sz and cz, and M14, M24 and M34 are the displacements t_x, t_y and t_z.
The expressions of sx, cx, sy, cy, sz and cz are:

sx = sin(r_x), cx = cos(r_x), sy = sin(r_y), cy = cos(r_y), sz = sin(r_z), cz = cos(r_z) (5)
wherein r_x, r_y and r_z in the expression (4) and the expression (5) are as given in formula (1).
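To make the roles of P, P' and the shorthands concrete, here is a Python sketch that builds both matrices. The Z-Y-X Euler composition of the rotation is our assumption, since the patent names only the entries M11 through M34:

```python
import numpy as np

def projection_matrices(f, x0, y0, r, t):
    # First matrix P = K [I | 0] and second matrix P' = K [R | t], where K is
    # the intrinsic matrix built from the equivalent focal length f and the
    # optical center (x0, y0); r = (r_x, r_y, r_z) and t = (t_x, t_y, t_z)
    # come from the pose transformation model.
    sx, cx = np.sin(r[0]), np.cos(r[0])
    sy, cy = np.sin(r[1]), np.cos(r[1])
    sz, cz = np.sin(r[2]), np.cos(r[2])
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    R = Rz @ Ry @ Rx               # assumed rotation order
    K = np.array([[f, 0.0, x0], [0.0, f, y0], [0.0, 0.0, 1.0]])
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_prime = K @ np.hstack([R, np.asarray(t, float).reshape(3, 1)])
    return P, P_prime
```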
Step 3073, determining the coordinates of the target three-dimensional point according to the coordinates of the first characteristic point, the first matrix, the coordinates of the second characteristic point and the second matrix.
The target three-dimensional point is a three-dimensional point in a three-dimensional coordinate system where the vehicle is located, the three-dimensional coordinate system may be a coordinate system with the image acquisition assembly as a center, and the coordinates of the target three-dimensional point are coordinates of the target three-dimensional point in the coordinate system with the image acquisition assembly as the center.
In the related art, homogeneous coordinates are an important tool in computer graphics: they make linear geometric transformations more convenient to perform. For example, if the coordinates of a point are (x, y, z), the homogeneous coordinates of that point may be (x, y, z, 1).
In the embodiment of the present invention, in order to determine the coordinates of the target three-dimensional point quickly, those coordinates are expressed in homogeneous form. Taking fig. 10 as an example, suppose the homogeneous coordinates of the target three-dimensional point W to be estimated are W = [X, Y, Z, 1]^T, and write the homogeneous coordinates of the feature points A and A' as a = [x_A, y_A, 1]^T and a' = [x_A', y_A', 1]^T. Relation (6) between the coordinates of the feature point A, the first matrix P of the image acquisition assembly and the homogeneous coordinates of the target three-dimensional point W is then

a = PW (up to a non-zero scale factor) (6)

and relation (7) between the coordinates of the feature point A', the second matrix P' of the image acquisition assembly and the homogeneous coordinates of the target three-dimensional point W is

a' = P'W (up to a non-zero scale factor) (7)
The relation (6) can be equivalently expressed as the cross product a × (PW) = 0. Expanding this cross product yields the relation (8):

x_A (p_3^T W) − (p_1^T W) = 0
y_A (p_3^T W) − (p_2^T W) = 0
x_A (p_2^T W) − y_A (p_1^T W) = 0 (8)
wherein any two of the three equations in the relation (8) are linearly independent, and p_i^T denotes the transpose of the elements of the i-th (i = 1, 2, 3) row of the first matrix P of the image acquisition assembly.
Similarly, the relation (7) can be equivalently expressed as the cross product a' × (P'W) = 0. Expanding this cross product yields the relation (9):

x_A' (p'_3^T W) − (p'_1^T W) = 0
y_A' (p'_3^T W) − (p'_2^T W) = 0
x_A' (p'_2^T W) − y_A' (p'_1^T W) = 0 (9)
wherein any two of the three equations in the relation (9) are linearly independent, and p'_i^T denotes the transpose of the elements of the i-th (i = 1, 2, 3) row of the second matrix P' of the image acquisition assembly.
Then, four equations are selected from the relation (8) and the relation (9) to form a system of the form AW = 0, and the homogeneous coordinates of the target three-dimensional point W are finally obtained by singular value decomposition, wherein the matrix A is

A = [ x_A p_3^T − p_1^T ; y_A p_3^T − p_2^T ; x_A' p'_3^T − p'_1^T ; y_A' p'_3^T − p'_2^T ] (10)
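The AW = 0 construction and the singular value decomposition step translate directly into code. In this Python sketch, P[0], P[1] and P[2] play the roles of p_1^T, p_2^T and p_3^T:

```python
import numpy as np

def triangulate(P, P_prime, a, a_prime):
    # Stack four of the equations from relations (8) and (9) into AW = 0 and
    # solve for W by singular value decomposition; a and a_prime are the
    # pixel coordinates of the matched feature points A and A'.
    x_a, y_a = a
    x_b, y_b = a_prime
    A = np.vstack([
        x_a * P[2] - P[0],                 # x_A (p3^T W) - p1^T W = 0
        y_a * P[2] - P[1],                 # y_A (p3^T W) - p2^T W = 0
        x_b * P_prime[2] - P_prime[0],
        y_b * P_prime[2] - P_prime[1],
    ])
    _u, _s, vt = np.linalg.svd(A)
    W = vt[-1]          # right singular vector for the smallest singular value
    return W[:3] / W[3]  # de-homogenize to (X, Y, Z)
```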
based on step 307, the image capturing component can perform ranging on any three-dimensional point in the three-dimensional coordinate system of the vehicle, not limited to vehicles or pedestrians on the road surface.
In summary, in the vehicle-based distance measurement method provided by the embodiment of the invention, after two frames of images containing the target object are acquired by the image acquisition assembly, at least one pair of matching feature points of the target object in the two frames of images is determined; a pose transformation model of the image acquisition assembly is then established using the deflection angle of the vehicle and the speed of the vehicle; and the coordinates of the target three-dimensional point in the three-dimensional coordinate system where the vehicle is located are determined based on the at least one pair of matching feature points and the pose transformation model of the image acquisition assembly. Ranging is thus no longer limited to vehicles or pedestrians on the road surface, which improves the reliability of distance measurement and driving safety.
In a second implementation, for example, referring to fig. 11, the method may include the following steps:
step 401, a first frame image is acquired.
The image acquisition assembly arranged on the vehicle acquires a first frame of image, and the first frame of image is used for reflecting the surrounding environment of the vehicle. For example, the image capture assembly may be a monocular camera.
Step 402, determining a target object in the first frame image.
Step 402 may refer to step 302.
And step 403, acquiring a second frame image.
The second frame image is used to reflect the vehicle surroundings.
Step 404, determining a target object in the second frame image based on the first frame image and the determined target object.
Step 404 refers to step 304.
Step 405, determining at least one pair of matching feature points of the target object in the two frames of images.
Step 405 may refer to step 305.
The two frame images include the first frame image in step 401 and the second frame image in step 403.
Step 406, determining a first position of a blanking point in the first frame image and a second position of a blanking point in the second frame image.
The blanking point is an intersection point of an extended line of the left lane line and an extended line of the right lane line in the image, and fig. 12 exemplarily shows a schematic diagram of the blanking point. The left lane line is a lane line on the left side of the vehicle on the road, and the right lane line is a lane line on the right side of the vehicle on the road.
In practical applications, a vehicle usually bumps longitudinally as it moves, which changes the pitch angle of the image acquisition assembly; when the pitch angle changes, the pose transformation model of the image acquisition assembly needs to be established based on that pitch angle. However, the vehicle cannot obtain the pitch angle through the CAN bus. The blanking point is therefore introduced: in this method, the pitch angle of the image acquisition assembly is obtained based on the position of the blanking point, which improves the precision of the pose transformation model of the image acquisition assembly and, in turn, the accuracy of the coordinates of the target three-dimensional point.
Optionally, as shown in fig. 13, determining a first position of a blanking point in the first frame image includes:
step 4061, determine a first equation corresponding to the left lane line and a second equation corresponding to the right lane line in the first frame image.
The first equation and the second equation are both linear equations.
Alternatively, lane lines (left lane lines or right lane lines) may be detected based on a deep learning semantic segmentation algorithm. For example, the foreground points of the lane line are segmented, and then clustering and RANSAC fitting processing are performed on the foreground points to obtain a linear equation corresponding to the lane line.
Step 4062, a first location of a blanking point in the first frame image is determined based on the first equation and the second equation.
For example, if the first equation corresponding to the left lane line in the first frame image is y = 2x + 4 and the second equation corresponding to the right lane line is y = −2x + 6, then the coordinates of the blanking point, which is the intersection of the extension line of the left lane line and the extension line of the right lane line, are (0.5, 5).
Likewise, referring to steps 4061 and 4062, a second position of the blanking point in the second frame image is determined.
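A minimal Python sketch of steps 4061 and 4062 for two fitted lines of the form y = kx + b follows; the slope/intercept parameterization is an illustrative assumption:

```python
def blanking_point(k_left, b_left, k_right, b_right):
    # Intersection of the extension lines of the left and right lane lines,
    # each given as y = k * x + b.
    x = (b_right - b_left) / (k_left - k_right)
    return x, k_left * x + b_left

# With the example equations y = 2x + 4 and y = -2x + 6:
print(blanking_point(2.0, 4.0, -2.0, 6.0))  # (0.5, 5.0)
```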
Step 407, determining a first pitch angle of the image capturing assembly based on the first position.
Wherein the first position of the blanking point is characterized by the ordinate of the blanking point in the two-dimensional coordinate system of the first frame image. Accordingly, step 407 may include:
a first pitch angle of the image acquisition assembly is determined based on a vertical coordinate of the blanking point in the first frame of image using a pitch angle calculation formula. The pitch angle calculation formula is as follows: α ═ arctan ((y)0-yv) F), alpha is the pitch angle of the image acquisition assembly, y0For the ordinate, y, of the optical centre of the image-capturing element in the imagevIs the ordinate of the blanking point in the image, and f is the equivalent focal length of the image acquisition assembly.
A second pitch angle of the image capturing assembly is determined based on the second position, step 408.
The second position of the blanking point is characterized by the ordinate of the blanking point in the two-dimensional coordinate system of the second frame image. Accordingly, step 408 may include:
a second pitch angle of the image acquisition assembly is determined based on the vertical coordinate of the blanking point in the second frame image using the pitch angle calculation formula described in step 407.
And 409, when the first pitch angle and the second pitch angle are not equal, establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle based on the first pitch angle and the second pitch angle.
Step 409 may include: when the first pitch angle and the second pitch angle are not equal, based on the first pitch angle and the second pitch angle, adopting the yaw angle of the vehicle and the speed of the vehicle to establish a function of 6 degrees of freedom of the image acquisition assembly based on the motion state of the vehicle when the shooting operation is executed; and taking the function of the 6 degrees of freedom as a pose transformation model of the image acquisition assembly.
Referring to step 3061, the function of the 6 degrees of freedom comprises, from the acquisition time of the first frame image to the acquisition time of the second frame image, the variation r_x of the pitch angle of the image acquisition assembly, the variation r_y of the deflection angle of the image acquisition assembly, the variation r_z of the rotation angle of the image acquisition assembly, the lateral displacement t_x of the image acquisition assembly, the displacement t_y of the image acquisition assembly along its vertical direction, and the displacement t_z of the image acquisition assembly along its optical axis direction.
The acquisition time of the first frame image is t1, and the acquisition time of the second frame image is t2. Suppose the pitch angle of the image acquisition assembly is α_t1 at time t1 and α_t2 at time t2; the variation of the pitch angle of the image acquisition assembly is then r_x = α_t2 − α_t1. The function of the 6 degrees of freedom obtained by the image acquisition assembly is given by formula (11), which differs from formula (1) only in the value of r_x.
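A hedged Python sketch of the formula (11) variant follows, mirroring the earlier pose-change sketch: only r_x differs, and the planar translation model again is our assumption:

```python
import math

def pose_change_with_pitch(yaw_rate, speed, t1, t2, alpha_t1, alpha_t2):
    # Same 6-DoF model as the no-bump sketch, except that r_x is the pitch
    # variation recovered from the blanking points (formula (11)).
    dt = t2 - t1
    r_x = alpha_t2 - alpha_t1      # measured pitch change between frames
    r_y = yaw_rate * dt
    r_z = 0.0                      # roll still assumed constant
    s = speed * dt                 # planar translation model (assumption)
    return r_x, r_y, r_z, s * math.sin(r_y), 0.0, s * math.cos(r_y)
```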
Step 410, determining coordinates of a target three-dimensional point in the three-dimensional coordinate system in which the vehicle is located, based on the at least one pair of matched feature points and the pose transformation model of the image acquisition assembly.
Step 410 may refer to step 307.
It should be noted that the order of the steps of the vehicle-based distance measurement method provided in the embodiments of the present invention may be adjusted appropriately, and steps may be added or removed as circumstances require. Any variation that a person skilled in the art could readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention and is therefore not described further.
In summary, according to the vehicle-based distance measurement method provided by the embodiments of the present invention, after two frames of images containing a target object are acquired by the image acquisition assembly, at least one pair of matching feature points of the target object in the two frames of images is determined. A pose transformation model of the image acquisition assembly is then established, based on the change in the pitch angle of the image acquisition assembly, using the deflection angle of the vehicle and the speed of the vehicle. Finally, the coordinates of the target three-dimensional point in the three-dimensional coordinate system in which the vehicle is located are determined based on the at least one pair of matching feature points and the pose transformation model of the image acquisition assembly.
FIG. 14 is a block diagram of a vehicle-based distance measurement apparatus according to an exemplary embodiment; the apparatus may be implemented by the image acquisition assembly disposed on the vehicle in the implementation environment shown in FIG. 1. Referring to FIG. 14, the apparatus 500 includes:
The feature point determination module 510 is configured to determine at least one pair of matching feature points of a target object in two frames of images after the two frames of images containing the target object are acquired by the image acquisition assembly, where the two frames of images reflect the environment surrounding the vehicle and the image acquisition assembly is disposed on the vehicle.
The model establishing module 520 is configured to establish a pose transformation model of the image acquisition assembly using the deflection angle of the vehicle and the speed of the vehicle.
The coordinate determination module 530 is configured to determine coordinates of a target three-dimensional point in the three-dimensional coordinate system in which the vehicle is located, based on the at least one pair of matching feature points and the pose transformation model of the image acquisition assembly.
The two frames of images include a first frame image and a second frame image, and the first frame image is acquired earlier than the second frame image. Further, as shown in FIG. 15, the apparatus 500 may further include:
the position determining module 540 is configured to determine a first position of a blanking point in the first frame image and a second position of a blanking point in the second frame image, where the blanking point is an intersection of an extension line of the left lane line and an extension line of the right lane line in the image.
A first pitch angle determination module 550 for determining a first pitch angle of the image capturing assembly based on the first position.
A second pitch angle determination module 560 for determining a second pitch angle of the image capturing assembly based on the second position.
A model building module 520 to:
and when the first pitch angle and the second pitch angle are not equal, establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle based on the first pitch angle and the second pitch angle.
Wherein the first and second positions of the blanking point are characterized by the ordinate of the blanking point in the two-dimensional coordinate system of the image, and correspondingly, the first pitch angle determining module 550 is configured to:
determining a first pitch angle of the image acquisition assembly based on a vertical coordinate of a blanking point in the first frame image by adopting a pitch angle calculation formula;
a second pitch angle determination module 560 to:
determining a second pitch angle of the image acquisition assembly based on the vertical coordinate of the blanking point in the second frame image by adopting a pitch angle calculation formula;
wherein, the pitch angle computational formula is:
α = -arctan((y0 - yv)/f), where α is the pitch angle of the image acquisition assembly, y0 is the ordinate of the optical center of the image acquisition assembly in the image, yv is the ordinate of the blanking point in the image, and f is the equivalent focal length of the image acquisition assembly.
Optionally, the model building module 520 is configured to:
acquiring, using the deflection angle of the vehicle and the speed of the vehicle, a function of the 6 degrees of freedom of the image acquisition assembly based on the motion state of the vehicle when the shooting operation is performed;
taking a function of 6 degrees of freedom as a pose transformation model of the image acquisition assembly;
the function of the 6 degrees of freedom includes, from the acquisition time of the first frame image to the acquisition time of the second frame image, the variation of the pitch angle of the image acquisition assembly, the variation of the deflection angle of the image acquisition assembly, the variation of the rotation angle of the image acquisition assembly, the lateral displacement of the image acquisition assembly, the displacement of the image acquisition assembly along its vertical direction, and the displacement of the image acquisition assembly along its optical axis direction.
Illustratively, the at least one pair of matched feature points includes a pair of matched feature points, and accordingly, the feature point determination module 510 is configured to:
extracting first feature points in the first frame image, and determining a first feature descriptor for expressing the first feature points;
extracting second feature points in the second frame image, and determining a second feature descriptor for expressing the second feature points;
calculating the similarity of the first feature descriptor and the second feature descriptor;
and when the similarity is greater than the preset similarity, taking the first feature point and the second feature point as a pair of matched feature points.
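This matching flow (extract descriptors, compute similarity, threshold) can be sketched as follows; ORB descriptors and the 0.8 threshold are assumptions for illustration, since the embodiment only specifies "a feature descriptor" and "a preset similarity":

    import cv2

    def match_feature_pairs(img1, img2, min_similarity=0.8):
        """Keep the feature-point pairs whose descriptor similarity
        exceeds the preset similarity."""
        orb = cv2.ORB_create()
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        if des1 is None or des2 is None:
            return []
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        pairs = []
        for m in matcher.match(des1, des2):
            similarity = 1.0 - m.distance / 256.0  # ORB descriptors are 256 bits
            if similarity > min_similarity:
                pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
        return pairs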
Optionally, the coordinate determination module 530 is configured to:
determining a first matrix of an image acquisition assembly when acquiring a first frame of image;
determining a second matrix of the image acquisition assembly when a second frame of image is acquired according to the pose transformation model;
and determining the coordinates of the target three-dimensional point according to the coordinates of the first characteristic point, the first matrix, the coordinates of the second characteristic point and the second matrix.
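The three determinations above correspond to a standard two-view linear triangulation solved by singular value decomposition, as in the claims; a minimal sketch, assuming P1 and P2 stand for the first and second 3x4 matrices:

    import numpy as np

    def triangulate(P1, P2, pt1, pt2):
        """Determine a 3D point from one pair of matched feature points.

        P1, P2: 3x4 matrices of the image acquisition assembly for the first
        and second frame; pt1, pt2: pixel coordinates of the matched points.
        """
        x1, y1 = pt1
        x2, y2 = pt2
        A = np.vstack([
            x1 * P1[2] - P1[0],
            y1 * P1[2] - P1[1],
            x2 * P2[2] - P2[0],
            y2 * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)  # singular value decomposition
        X = Vt[-1]
        return X[:3] / X[3]          # homogeneous -> Euclidean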
Optionally, the position determining module 540 is configured to:
determining a first equation corresponding to a left lane line and a second equation corresponding to a right lane line in the first frame image;
a first position of a blanking point in the first frame image is determined based on the first equation and the second equation.
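The embodiment does not prescribe how the first and second equations are obtained; one common choice, sketched here as an assumption, is a least-squares line fit over detected lane-line pixels:

    import numpy as np

    def lane_line_equation(xs, ys):
        """Fit y = m*x + b to lane-line pixel coordinates (least squares)."""
        m, b = np.polyfit(xs, ys, deg=1)
        return m, b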
Further, as shown in fig. 15, the apparatus 500 may further include:
the image capturing module 570 is configured to capture a first frame of image.
A target object determining module 580 for determining the target object in the first frame image.
The image capturing module 570 is further configured to capture a second frame of image.
The target object determination module 580 is further configured to determine a target object in the second frame image based on the first frame image and the determined target object.
Optionally, the target object determining module 580 is configured to:
and determining the position and the category of the target object in the first frame image by adopting a deep learning algorithm.
Optionally, the target object determining module 580 is configured to:
and determining a target object in the second frame image based on the first frame image and the determined target object by adopting a target tracking algorithm.
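The embodiment does not name a specific target tracking algorithm; as one assumed instance, a simple template-matching sketch:

    import cv2

    def track_target(first_frame, second_frame, target_box):
        """Locate the first-frame target in the second frame by template
        matching; the concrete choice of algorithm is an assumption."""
        x, y, w, h = target_box
        template = first_frame[y:y + h, x:x + w]
        result = cv2.matchTemplate(second_frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, top_left = cv2.minMaxLoc(result)  # best-match location
        return (top_left[0], top_left[1], w, h)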
In summary, according to the vehicle-based distance measurement apparatus provided by the embodiments of the present invention, after two frames of images containing the target object are acquired by the image acquisition assembly, at least one pair of matching feature points of the target object in the two frames of images is determined. A pose transformation model of the image acquisition assembly is then established using the deflection angle of the vehicle and the speed of the vehicle. Finally, the coordinates of the target three-dimensional point in the three-dimensional coordinate system in which the vehicle is located are determined based on the at least one pair of matching feature points and the pose transformation model of the image acquisition assembly.
FIG. 16 is a block diagram of a computer device according to an exemplary embodiment; the computer device may be the image acquisition assembly disposed on the vehicle in the implementation environment shown in FIG. 1. Referring to FIG. 16, the computer device 600 includes: a processor 601, a communication interface 602, a memory 603, and a communication bus 604.
The processor 601, the communication interface 602, and the memory 603 communicate with one another through the communication bus 604; the memory 603 is configured to store a computer program 6031; the processor 601 is configured to execute the computer program stored in the memory 603 to implement the vehicle-based ranging method shown in FIG. 2, FIG. 3 or FIG. 4.
Embodiments of the present invention further provide a computer-readable storage medium having a computer program stored therein; when the computer program is executed by a processor, the vehicle-based ranging method shown in FIG. 2, FIG. 3 or FIG. 4 is implemented.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (22)

1. A vehicle-based ranging method, the method comprising:
after two frames of images containing a target object are acquired through an image acquisition assembly, determining at least one pair of matched feature points of the target object in the two frames of images, wherein the two frames of images are used for reflecting the condition of the surrounding environment of a vehicle, the image acquisition assembly is arranged on the vehicle, the two frames of images comprise a first frame of image and a second frame of image, and the first frame of image is acquired earlier than the second frame of image;
establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle;
determining a first matrix of the image acquisition assembly when the first frame of image is acquired, wherein the first matrix is used for representing the imaging relation between points in a world coordinate system and points in a coordinate system of the image acquisition assembly;
determining, according to the pose transformation model, a second matrix of the image acquisition assembly when acquiring the second frame image, wherein the second matrix is expressed by x0, y0, f and a function of 6 degrees of freedom of the image acquisition assembly based on the motion state of the vehicle when performing a shooting operation, x0 is the abscissa of the optical center of the image acquisition assembly in the first frame image, y0 is the ordinate of the optical center of the image acquisition assembly in the first frame image, and f is the equivalent focal length of the image acquisition assembly;
performing singular value decomposition according to the relationship among the coordinates of a first feature point, the first matrix and the coordinates of a target three-dimensional point, and the relationship among the coordinates of a second feature point, the second matrix and the coordinates of the target three-dimensional point, to determine the coordinates of the target three-dimensional point in a three-dimensional coordinate system in which the vehicle is located, wherein the first feature point belongs to the first frame image, the second feature point belongs to the second frame image, the first feature point and the second feature point are a pair of matched feature points, the three-dimensional coordinate system is centered on the image acquisition assembly, and the target three-dimensional point is a coordinate point in the three-dimensional coordinate system.
2. The method according to claim 1, characterized in that before the establishing of the pose transformation model of the image capture assembly using the yaw angle of the vehicle and the speed of the vehicle, the method further comprises:
determining a first position of a blanking point in the first frame image and a second position of the blanking point in the second frame image, wherein the blanking point is an intersection point of an extension line of a left lane line and an extension line of a right lane line in the image;
determining a first pitch angle of the image acquisition assembly based on the first position;
determining a second pitch angle of the image acquisition assembly based on the second position;
the establishing of the pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle comprises the following steps:
and when the first pitch angle and the second pitch angle are not equal, establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle based on the first pitch angle and the second pitch angle.
3. The method according to claim 2, characterized in that the first and second positions of the blanking point are characterized by an ordinate of the blanking point in a two-dimensional coordinate system of the image,
the determining a first pitch angle of the image acquisition assembly based on the first position comprises:
determining a first pitch angle of the image acquisition assembly based on a vertical coordinate of the blanking point in the first frame image by adopting a pitch angle calculation formula;
the determining a second pitch angle of the image acquisition assembly based on the second position comprises:
determining a second pitch angle of the image acquisition assembly based on a vertical coordinate of the blanking point in the second frame image by using the pitch angle calculation formula;
wherein, the pitch angle calculation formula is as follows:
α = -arctan((y0 - yv)/f), wherein α is the pitch angle of the image acquisition assembly, y0 is the ordinate of the optical center of the image acquisition assembly in the image, yv is the ordinate of the blanking point in the image, and f is the equivalent focal length of the image acquisition assembly.
4. The method of claim 2, wherein the modeling the pose transformation of the image capture assembly using the yaw angle of the vehicle and the speed of the vehicle comprises:
acquiring a function of 6 degrees of freedom of the image acquisition assembly based on the motion state of the vehicle when shooting operation is executed by adopting the deflection angle of the vehicle and the speed of the vehicle;
taking the function of the 6 degrees of freedom as a pose transformation model of the image acquisition assembly;
the function of the 6 degrees of freedom includes a change amount of a pitch angle of the image acquisition assembly, a change amount of a deflection angle of the image acquisition assembly, a change amount of a rotation angle of the image acquisition assembly, a lateral displacement of the image acquisition assembly, a displacement of the image acquisition assembly generated along a vertical direction of the image acquisition assembly, and a displacement of the image acquisition assembly generated along an optical axis direction of the image acquisition assembly from an acquisition time of the first frame image to an acquisition time of the second frame image.
5. The method of claim 2, wherein the at least one pair of matching feature points comprises a pair of matching feature points,
the determining at least one pair of matching feature points of the target object in the two frames of images comprises:
extracting first feature points in the first frame image, and determining a first feature descriptor for expressing the first feature points;
extracting second feature points in the second frame image, and determining a second feature descriptor for expressing the second feature points;
calculating the similarity of the first feature descriptor and the second feature descriptor;
and when the similarity is greater than the preset similarity, taking the first feature point and the second feature point as a pair of matched feature points.
6. The method of claim 2, wherein determining the first location of the blanking point in the first frame image comprises:
determining a first equation corresponding to a left lane line and a second equation corresponding to a right lane line in the first frame image;
determining a first position of the blanking point in the first frame image based on the first equation and the second equation.
7. The method of claim 2, wherein after acquiring the two frames of images containing the target object by the image acquisition assembly and before determining the at least one pair of matching feature points of the target object in the two frames of images, the method further comprises:
acquiring the first frame image;
determining the target object in the first frame image;
acquiring the second frame image;
determining a target object in the second frame image based on the first frame image and the determined target object.
8. The method of claim 7, wherein the determining the target object in the first frame of image comprises:
and determining the position and the category of the target object in the first frame image by adopting a deep learning algorithm.
9. The method of claim 7, wherein determining the target object in the second frame image based on the first frame image and the determined target object comprises:
and determining a target object in the second frame image based on the first frame image and the determined target object by adopting a target tracking algorithm.
10. The method according to any one of claims 1 to 9,
the image acquisition assembly is a monocular camera.
11. A vehicle-based ranging apparatus, the apparatus comprising:
a feature point determining module, configured to determine at least one pair of matched feature points of a target object in two frames of images after the two frames of images containing the target object are acquired by an image acquisition assembly, wherein the two frames of images are used for reflecting the condition of the surrounding environment of a vehicle, the image acquisition assembly is disposed on the vehicle, the two frames of images comprise a first frame image and a second frame image, and the first frame image is acquired earlier than the second frame image;
a model establishing module, configured to establish a pose transformation model of the image acquisition assembly using the deflection angle of the vehicle and the speed of the vehicle;
a coordinate determination module, configured to:
determine a first matrix of the image acquisition assembly when acquiring the first frame image, wherein the first matrix is used for representing the imaging relationship between points in a world coordinate system and points in a coordinate system of the image acquisition assembly;
determine, according to the pose transformation model, a second matrix of the image acquisition assembly when acquiring the second frame image, wherein the second matrix is expressed by x0, y0, f and a function of 6 degrees of freedom of the image acquisition assembly based on the motion state of the vehicle when performing a shooting operation, x0 is the abscissa of the optical center of the image acquisition assembly in the first frame image, y0 is the ordinate of the optical center of the image acquisition assembly in the first frame image, and f is the equivalent focal length of the image acquisition assembly; and
perform singular value decomposition according to the relationship among the coordinates of a first feature point, the first matrix and the coordinates of a target three-dimensional point, and the relationship among the coordinates of a second feature point, the second matrix and the coordinates of the target three-dimensional point, to determine the coordinates of the target three-dimensional point in a three-dimensional coordinate system in which the vehicle is located, wherein the first feature point belongs to the first frame image, the second feature point belongs to the second frame image, the first feature point and the second feature point are a pair of matched feature points, the three-dimensional coordinate system is centered on the image acquisition assembly, and the target three-dimensional point is a coordinate point in the three-dimensional coordinate system.
12. The apparatus of claim 11, further comprising:
the position determining module is used for determining a first position of a blanking point in the first frame image and a second position of the blanking point in the second frame image, wherein the blanking point is an intersection point of an extension line of a left lane line and an extension line of a right lane line in the image;
a first pitch angle determination module to determine a first pitch angle of the image acquisition assembly based on the first position;
a second pitch angle determination module for determining a second pitch angle of the image capture assembly based on the second position;
the model building module is used for:
and when the first pitch angle and the second pitch angle are not equal, establishing a pose transformation model of the image acquisition assembly by adopting the deflection angle of the vehicle and the speed of the vehicle based on the first pitch angle and the second pitch angle.
13. The apparatus according to claim 12, wherein the first and second positions of the blanking point are characterized by an ordinate of the blanking point in a two-dimensional coordinate system of the image,
the first pitch angle determination module to:
determining a first pitch angle of the image acquisition assembly based on a vertical coordinate of the blanking point in the first frame image by adopting a pitch angle calculation formula;
the second pitch angle determination module is configured to:
determining a second pitch angle of the image acquisition assembly based on a vertical coordinate of the blanking point in the second frame image by using the pitch angle calculation formula;
wherein, the pitch angle calculation formula is as follows:
α = -arctan((y0 - yv)/f), wherein α is the pitch angle of the image acquisition assembly, y0 is the ordinate of the optical center of the image acquisition assembly in the image, yv is the ordinate of the blanking point in the image, and f is the equivalent focal length of the image acquisition assembly.
14. The apparatus of claim 12, wherein the model building module is configured to:
acquiring a function of 6 degrees of freedom of the image acquisition assembly based on the motion state of the vehicle when shooting operation is executed by adopting the deflection angle of the vehicle and the speed of the vehicle;
taking the function of the 6 degrees of freedom as a pose transformation model of the image acquisition assembly;
the function of the 6 degrees of freedom includes a change amount of a pitch angle of the image acquisition assembly, a change amount of a deflection angle of the image acquisition assembly, a change amount of a rotation angle of the image acquisition assembly, a lateral displacement of the image acquisition assembly, a displacement of the image acquisition assembly generated along a vertical direction of the image acquisition assembly, and a displacement of the image acquisition assembly generated along an optical axis direction of the image acquisition assembly from an acquisition time of the first frame image to an acquisition time of the second frame image.
15. The apparatus of claim 12, wherein the at least one pair of matching feature points comprises a pair of matching feature points,
the feature point determination module is configured to:
extracting first feature points in the first frame image, and determining a first feature descriptor for expressing the first feature points;
extracting second feature points in the second frame image, and determining a second feature descriptor for expressing the second feature points;
calculating the similarity of the first feature descriptor and the second feature descriptor;
and when the similarity is greater than the preset similarity, taking the first feature point and the second feature point as a pair of matched feature points.
16. The apparatus of claim 12, wherein the location determination module is configured to:
determining a first equation corresponding to a left lane line and a second equation corresponding to a right lane line in the first frame image;
determining a first position of the blanking point in the first frame image based on the first equation and the second equation.
17. The apparatus of claim 12, further comprising:
the image acquisition module is used for acquiring the first frame image;
a target object determination module for determining the target object in the first frame image;
the image acquisition module is further used for acquiring the second frame image;
the target object determination module is further configured to determine a target object in the second frame image based on the first frame image and the determined target object.
18. The apparatus of claim 17, wherein the target object determination module is configured to:
and determining the position and the category of the target object in the first frame image by adopting a deep learning algorithm.
19. The apparatus of claim 17, wherein the target object determination module is configured to:
and determining a target object in the second frame image based on the first frame image and the determined target object by adopting a target tracking algorithm.
20. The apparatus of any one of claims 11 to 19,
the image acquisition assembly is a monocular camera.
21. A computer device comprising a processor, a communication interface, a memory, and a communication bus,
the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor, configured to execute the computer program stored in the memory, to implement the vehicle-based ranging method according to any one of claims 1 to 10.
22. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out a vehicle-based ranging method according to any one of claims 1 to 10.
CN201810162648.7A 2018-02-27 2018-02-27 Distance measurement method and device based on vehicle Active CN110197104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810162648.7A CN110197104B (en) 2018-02-27 2018-02-27 Distance measurement method and device based on vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810162648.7A CN110197104B (en) 2018-02-27 2018-02-27 Distance measurement method and device based on vehicle

Publications (2)

Publication Number Publication Date
CN110197104A CN110197104A (en) 2019-09-03
CN110197104B true CN110197104B (en) 2022-03-29

Family

ID=67750861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810162648.7A Active CN110197104B (en) 2018-02-27 2018-02-27 Distance measurement method and device based on vehicle

Country Status (1)

Country Link
CN (1) CN110197104B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022188077A1 (en) * 2021-03-10 2022-09-15 华为技术有限公司 Distance measuring method and device
CN116543032B (en) * 2023-07-06 2023-11-21 中国第一汽车股份有限公司 Impact object ranging method, device, ranging equipment and storage medium
CN116612459B (en) * 2023-07-18 2023-11-17 小米汽车科技有限公司 Target detection method, target detection device, electronic equipment and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102343912A (en) * 2011-06-20 2012-02-08 中南大学 Method for estimating state during running process of automobile
CN102661733A (en) * 2012-05-28 2012-09-12 天津工业大学 Front vehicle ranging method based on monocular vision
CN105488454A (en) * 2015-11-17 2016-04-13 天津工业大学 Monocular vision based front vehicle detection and ranging method
CN105867373A (en) * 2016-04-07 2016-08-17 重庆大学 Mobile robot posture reckoning method and system based on laser radar data
CN106323241A (en) * 2016-06-12 2017-01-11 广东警官学院 Method for measuring three-dimensional information of person or object through monitoring video or vehicle-mounted camera
CN106515740A (en) * 2016-11-14 2017-03-22 江苏大学 Distributed electrically driven automobile travelling status parameter estimation algorithm based on ICDKF
CN106553195A (en) * 2016-11-25 2017-04-05 中国科学技术大学 Object 6DOF localization method and system during industrial robot crawl
CN106679633A (en) * 2016-12-07 2017-05-17 东华大学 Vehicle-mounted distance measuring system and vehicle-mounted distance measuring method
CN106909929A (en) * 2015-12-22 2017-06-30 比亚迪股份有限公司 Pedestrian's distance detection method and device
CN107330940A (en) * 2017-01-25 2017-11-07 问众智能信息科技(北京)有限公司 The method and apparatus that in-vehicle camera posture is estimated automatically
CN107390205A (en) * 2017-07-20 2017-11-24 清华大学 A kind of monocular vision vehicle odometry method that front truck feature is obtained using car networking
CN107389026A (en) * 2017-06-12 2017-11-24 江苏大学 A kind of monocular vision distance-finding method based on fixing point projective transformation
CN107688174A (en) * 2017-08-02 2018-02-13 北京纵目安驰智能科技有限公司 A kind of image distance-finding method, system, storage medium and vehicle-mounted visually-perceptible equipment

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2722646B1 (en) * 2011-06-14 2021-06-09 Nissan Motor Co., Ltd. Distance measurement device and environment map generation apparatus
CN102722886B (en) * 2012-05-21 2015-12-09 浙江捷尚视觉科技股份有限公司 A kind of video frequency speed-measuring method based on three-dimensional scaling and Feature Points Matching
JP6197388B2 (en) * 2013-06-11 2017-09-20 富士通株式会社 Distance measuring device, distance measuring method, and program
CN103412648B (en) * 2013-08-14 2016-03-02 浙江大学 The in-vehicle device interaction control device pointed to based on mobile device and method
CN103473774B (en) * 2013-09-09 2017-04-05 长安大学 A kind of vehicle positioning method based on pavement image characteristic matching
KR20150079098A (en) * 2013-12-31 2015-07-08 한국과학기술원 Filtering Methods of spatiotemporal 3D Vector for Robust visual odometry
CN104166995B (en) * 2014-07-31 2017-05-24 哈尔滨工程大学 Harris-SIFT binocular vision positioning method based on horse pace measurement
CN104299244B (en) * 2014-09-26 2017-07-25 东软集团股份有限公司 Obstacle detection method and device based on monocular camera
CN104792302A (en) * 2015-04-29 2015-07-22 深圳市保千里电子有限公司 Modeling method for measuring car distance
CN105046225B (en) * 2015-07-14 2018-09-18 安徽清新互联信息科技有限公司 A kind of vehicle distance detecting method based on tailstock detection
CN105182320A (en) * 2015-07-14 2015-12-23 安徽清新互联信息科技有限公司 Depth measurement-based vehicle distance detection method
CN106482709B (en) * 2015-08-25 2019-11-19 腾讯科技(深圳)有限公司 A kind of method, apparatus and system of distance survey
CN106408589B (en) * 2016-07-14 2019-03-29 浙江零跑科技有限公司 Based on the vehicle-mounted vehicle movement measurement method for overlooking camera
CN106204707B (en) * 2016-07-18 2019-02-15 中国科学院半导体研究所 A kind of monocular time domain topology matching three-D imaging method
CN106371459B (en) * 2016-08-31 2018-01-30 京东方科技集团股份有限公司 Method for tracking target and device
CN107305632B (en) * 2017-02-16 2020-06-12 武汉极目智能技术有限公司 Monocular computer vision technology-based target object distance measuring method and system
CN107679537B (en) * 2017-05-09 2019-11-19 北京航空航天大学 A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matching
CN107194339A (en) * 2017-05-15 2017-09-22 武汉星巡智能科技有限公司 Obstacle recognition method, equipment and unmanned vehicle
CN107481315A (en) * 2017-06-29 2017-12-15 重庆邮电大学 A kind of monocular vision three-dimensional environment method for reconstructing based on Harris SIFT BRIEF algorithms
CN107560592B (en) * 2017-08-21 2020-08-18 河南中光学集团有限公司 Precise distance measurement method for photoelectric tracker linkage target
CN107705333B (en) * 2017-09-21 2021-02-26 歌尔股份有限公司 Space positioning method and device based on binocular camera

Also Published As

Publication number Publication date
CN110197104A (en) 2019-09-03

Similar Documents

Publication Publication Date Title
CN112861653B (en) Method, system, equipment and storage medium for detecting fused image and point cloud information
WO2021004548A1 (en) Vehicle speed intelligent measurement method based on binocular stereo vision system
CN108520536B (en) Disparity map generation method and device and terminal
CN110910453B (en) Vehicle pose estimation method and system based on non-overlapping view field multi-camera system
CN113819890B (en) Distance measuring method, distance measuring device, electronic equipment and storage medium
CN110826499A (en) Object space parameter detection method and device, electronic equipment and storage medium
CN112037159B (en) Cross-camera road space fusion and vehicle target detection tracking method and system
CN109741241B (en) Fisheye image processing method, device, equipment and storage medium
CN107796373B (en) Distance measurement method based on monocular vision of front vehicle driven by lane plane geometric model
CN110197104B (en) Distance measurement method and device based on vehicle
CN113111887A (en) Semantic segmentation method and system based on information fusion of camera and laser radar
CN111178150A (en) Lane line detection method, system and storage medium
CN113408324A (en) Target detection method, device and system and advanced driving assistance system
CN114119992A (en) Multi-mode three-dimensional target detection method and device based on image and point cloud fusion
CN115410167A (en) Target detection and semantic segmentation method, device, equipment and storage medium
CN113743391A (en) Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot
CN113537047A (en) Obstacle detection method, obstacle detection device, vehicle and storage medium
CN114648639B (en) Target vehicle detection method, system and device
KR102003387B1 (en) Method for detecting and locating traffic participants using bird&#39;s-eye view image, computer-readerble recording medium storing traffic participants detecting and locating program
CN112802114A (en) Multi-vision sensor fusion device and method and electronic equipment
KR20160063039A (en) Method of Road Recognition using 3D Data
WO2021114775A1 (en) Object detection method, object detection device, terminal device, and medium
Xiong et al. A 3d estimation of structural road surface based on lane-line information
CN113743265A (en) Depth camera-based automatic driving travelable area detection method and system
CN113281770A (en) Coordinate system relation obtaining method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant