CN112146620B - Target object ranging method and device

Info

Publication number: CN112146620B (application number CN202011333374.7A)
Authority: CN (China)
Prior art keywords: target, frame, vanishing point, determining, camera
Legal status: Active (application granted)
Other languages: Chinese (zh)
Other versions: CN112146620A
Inventor: 李庆峰
Current Assignee: Tencent Technology Shenzhen Co Ltd; Tencent Cloud Computing Beijing Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011333374.7A


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C3/10 Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument

Abstract

The invention discloses a distance measuring method and device for a target object. The method comprises the following steps: determining an angle change value of the target camera between a kth frame and a (k + 1) th frame according to the kth frame angular velocity and the (k + 1) th frame angular velocity output by the target sensor; determining a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera; under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value; and determining the distance between the target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point. The invention solves the technical problem of low accuracy of the distance measurement result in automatic driving.

Description

Target object ranging method and device
Technical Field
The invention relates to the field of computers, in particular to a distance measuring method and device for a target object.
Background
In autonomous driving environment perception, perception of dynamic obstacles is an important capability. Although laser radar can sense dynamic obstacles, its large volume and high price make it difficult to apply generally in the field of automatic driving.
Currently, the industry tries to use a monocular camera to achieve ranging, but most monocular ranging work still needs to be fused with other expensive sensors, such as laser radar and millimeter wave radar. The small fraction of purely visual research schemes rely on overly strong assumptions and lack the ability to estimate dynamically. For example, prior art solutions assume that the coordinate at which a ray parallel to the ground plane projects onto the image plane is fixed; this projection coordinate may be called the vanishing point. However, when the road jolts or the vehicle goes up or down a slope during actual driving, the vanishing point changes. There are also algorithms that attempt to calculate the vanishing point from parallel lane lines, but the calculation fails when parallel lane lines are not detected or when the lane lines are not parallel. Therefore, existing distance measurement methods all degrade ranging precision and can even produce wildly incorrect distance values, so that automatic driving cannot correctly output the ranging perception result, bringing great hidden danger to driving safety.
Aiming at the problem of low accuracy of distance measurement results in automatic driving in the related art, no effective solution currently exists.
Disclosure of Invention
The embodiment of the invention provides a distance measuring method and device for a target object, which at least solve the technical problem of low accuracy of distance measuring results in automatic driving.
According to an aspect of the embodiments of the present invention, there is provided a ranging method of a target object, including: determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number; determining a position change value of the position of a target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera; under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in a (k + 1) th frame image shot by the target camera according to the position change value; and determining the distance between a target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point.
According to another aspect of the embodiments of the present invention, there is also provided a ranging apparatus for a target object, including: the first determination module is used for determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number; a second determining module, configured to determine, according to the angle change value and the focal length of the target camera, a position change value of a position of a target vanishing point between the kth frame and the (k + 1) th frame; a third determining module, configured to determine, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera when a position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition; a fourth determining module, configured to determine, according to the (k + 1) th position of the target vanishing point, a distance between the target object in the (k + 1) th frame of image and the target camera.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned distance measuring method for a target object when running.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the above-mentioned distance measuring method for a target object through the computer program.
In the embodiment of the invention, the angle change value of the target camera between the kth frame and the (k + 1) th frame is determined according to the kth frame angular velocity and the (k + 1) th frame angular velocity output by the target sensor, the target camera and the target sensor are arranged on the target vehicle, and k is a natural number; determining a position change value of the position of a target vanishing point between a kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera; under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value; and determining the distance between the target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point. The purpose of distance measurement according to the position of the vanishing point in the image is achieved, the technical effect of improving the accuracy of the distance measurement result is achieved, and the technical problem that the accuracy of the distance measurement result in automatic driving is low is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an application environment of an alternative method for measuring distance of a target object according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of ranging a target object according to an embodiment of the present invention;
FIG. 3 is a first schematic diagram in accordance with an alternative embodiment of the present invention;
FIG. 4 is a schematic illustration of the effect of camera changes on vanishing points in accordance with an alternative embodiment of the present invention;
FIG. 5 is a schematic illustration of a vanishing point in an image in accordance with an alternative embodiment of the invention;
FIG. 6 is a second schematic diagram in accordance with an alternative embodiment of the present invention;
FIG. 7 is a top view of a target object lateral position estimate according to an alternative embodiment of the present invention;
FIG. 8 is a schematic diagram of an alternative target object ranging apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The following explains the terms referred to in the present application:
Dynamic obstacle ranging: measuring the actual distance of a dynamic object (vehicle, pedestrian, etc.) from the current vehicle, where the distance comprises coordinate position information in the longitudinal direction x and the transverse direction y.
Vanishing point: in linear perspective, the point at which two or more lines representing parallel lines converge as they extend toward the horizon is called the vanishing point.
According to an aspect of the embodiments of the present invention, there is provided a method for ranging a target object, optionally, as an optional implementation manner, the method for ranging a target object may be applied to, but not limited to, a scene as shown in fig. 1, where the scene includes: a first vehicle 102 and a second vehicle 104.
The first vehicle 102 is a vehicle traveling ahead of the second vehicle 104, and the second vehicle 104 may have an object camera and an object sensor mounted thereon. The target camera mounted on the second vehicle 104 is used to photograph vehicles traveling ahead, including but not limited to the first vehicle 102 described above, traveling ahead of the second vehicle 104. The target sensor is used for detecting an angular velocity of the vehicle, and the target sensor includes, but is not limited to, an Inertial Measurement Unit (IMU) sensor.
The above is merely an example, and this is not limited in this embodiment.
Specifically, the following steps are realized through the distance measurement of the target object:
In step S102, determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number; in step S104, determining a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera; in step S106, when the result of determining the position of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image captured by the target camera according to the position change value; in step S108, determining a distance between the target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point.
Alternatively, the main body of the above steps may be a server, or may be a processor installed on the second vehicle 104, but is not limited thereto.
Optionally, as an optional implementation manner, as shown in fig. 2, the distance measuring method for the target object includes:
step S202, determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number;
step S204, determining a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera;
step S206, under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value;
step S208, determining the distance between the target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point.
Through the steps, determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number; determining a position change value of the position of a target vanishing point between a kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera; under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value; and determining the distance between the target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point. The purpose of distance measurement according to the position of the vanishing point in the image is achieved, the technical effect of improving the accuracy of the distance measurement result is achieved, and the technical problem that the accuracy of the distance measurement result in automatic driving is low is solved.
As an optional embodiment, the target sensor may be an IMU (Inertial Measurement Unit) sensor, and the target camera may be a monocular camera. This embodiment provides a real-time dynamic obstacle ranging algorithm based on the fusion of a monocular camera and an IMU. The target object may be an obstacle; in general, an object located in front of the target vehicle may be considered an obstacle, including but not limited to a vehicle, a pedestrian, and the like. The obstacle may be dynamic, such as a traveling vehicle or a moving pedestrian.
As a preferred embodiment, the following description will be given taking an example in which the target sensor is an inertial measurement unit, the target camera is a monocular camera, and the target object is a preceding vehicle that travels ahead of the target vehicle. Fig. 3 is a first schematic diagram according to an alternative embodiment of the present invention, which includes a front vehicle 302 and a target vehicle 304, wherein the target vehicle 304 is mounted with a monocular camera 306 and an inertial measurement unit 308, and the monocular camera and the inertial measurement unit can be mounted at any position of the target vehicle, and the arrangement positions of the monocular camera and the inertial measurement unit in fig. 3 are only for illustrating the embodiment and are not limited thereto. In the embodiment, the monocular camera is oriented in front of the target vehicle, and the front vehicle can be photographed in real time.
As an alternative embodiment, the k frame and the k +1 frame may be the time when the target sensor and the target camera perform data acquisition simultaneously. The time for the target sensor to acquire the angular velocity of the k frame is also the time for the target camera to take the image of the k frame, and similarly, the time for the target sensor to acquire the angular velocity of the (k + 1) th frame is also the time for the target camera to take the image of the (k + 1) th frame. In this embodiment, the grounding point and the lane line of the dynamic target of each frame of image can be detected by using a deep learning detection algorithm. And calculating vanishing points on each frame of image by using the parallel lane lines, wherein each frame of image comprises a kth frame of image and a (k + 1) th frame of image. Synchronizing the angular speed of the kth frame and the angular speed of the (k + 1) th frame output by the IMU sensor, calculating the change of the pitching angle of the image of the kth frame and the image of the (k + 1) th frame in a camera coordinate system by using a gyroscope of the IMU sensor, then converting the change into a position change value of a vanishing point in the image coordinate system, and fusing a lane line and the vanishing point calculated by the IMU through filtering. And calculating the position of the target in the (k + 1) th frame image by using the fused vanishing point, the grounding point of the target and the external parameter value of the camera through plane assumption.
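As a concrete illustration of the flow just described, the following Python sketch shows the motion prediction step under simplifying assumptions (pitch-only rotation, focal length in pixels); the function and parameter names are illustrative and do not come from the patent:

```python
import math

def predict_vanishing_point(v_k, pitch_rates, timestamps, f):
    """Minimal sketch: predict the vanishing point's vertical coordinate at frame k+1.

    v_k         : vertical vanishing-point coordinate at frame k (pixels)
    pitch_rates : gyroscope pitch angular velocities sampled between the frames (rad/s)
    timestamps  : sample times matching pitch_rates (s)
    f           : camera focal length (pixels)
    """
    # Trapezoidal integration of angular velocity gives the pitch change.
    delta_theta = sum(
        0.5 * (pitch_rates[i - 1] + pitch_rates[i]) * (timestamps[i] - timestamps[i - 1])
        for i in range(1, len(timestamps))
    )
    # Convert the pitch change into a vertical pixel shift of the vanishing point.
    delta_v = f * math.tan(delta_theta)
    return v_k + delta_v
```

The fusion of this prediction with the lane-line observation is sketched in the filtering discussion below.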
Optionally, the determining, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera includes: determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera; under the condition that a lane line meeting a second preset condition is detected in the (k + 1) th frame image, determining a (k + 1) th observation position of the target vanishing point in the (k + 1) th frame image according to the detected lane line; and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the (k + 1) th estimated position and the (k + 1) th observation position.
As an alternative implementation, FIG. 4 is a schematic diagram of the effect of camera changes on vanishing points according to an alternative embodiment of the invention, in which v_k is the kth position of the target vanishing point in the kth frame image taken by the target camera, v'_{k+1} is the estimated position of the target vanishing point in the (k + 1) th frame image taken by the target camera, Δv is the position change value, Δθ is the angle change value, and f is the camera focal length. In the present embodiment, Δv = f · tan(Δθ).
as an optional implementation manner, the k +1 th frame image captured by the target camera includes lane lines, and if the k +1 th frame image includes at least two lane lines, the location of the vanishing point in the k +1 th frame image can be determined by the at least two lane lines included in the k +1 th frame image. In this embodiment, the vanishing point position determined by the lane line in the (k + 1) th image is a visual observation value. Since the preceding vehicle is dynamic and the visual observation value is static, the position accuracy and robustness of the resulting target vanishing point in the k +1 th frame image may not be high. In the embodiment, the k +1 th observation position and the k +1 th pre-estimated position can be fused, so that the motion information of the vanishing point is increased, and the estimation of the vanishing point of the target can be more accurate and the robustness is better by combining the visual observation information.
Optionally, the determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the (k + 1) th estimated position and the (k + 1) th observation position includes: filtering the (k + 1) th estimated position and the (k + 1) th observation position to obtain a (k + 1) th position; or carrying out weighted average processing on the (k + 1) th estimated position and the (k + 1) th observation position to obtain the (k + 1) th position.
As an optional implementation manner, the filtering process may be kalman filtering, and the k +1 th estimated position and the k +1 th observed position may be fused through the kalman filtering. In the embodiment, the observation position of the target vanishing point of the visual observation can be fused through Kalman filtering, and the motion estimation position calculated through the IMU sensor combines the motion information and the visual observation information, so that the accuracy of the target vanishing point position can be improved.
As an alternative embodiment, the fusion of the (k + 1) th estimated position and the (k + 1) th observed position may also be implemented by means of weighted average processing. The selection of the specific weight may be determined according to actual conditions, for example, if the weight of the motion information is large, the weight of the estimated position k +1 may be set to be larger, for example, 0.6, 0.7, and the like, and the weight of the corresponding observed position k +1 may be set to be smaller, and correspondingly, may be 0.4, 0.3, and the like. Conversely, if the weight of the visual observation is large, the weight of the k +1 th observation position may be set to be larger, and correspondingly the weight of the k +1 th estimated position may be set to be smaller. The weights of the (k + 1) th estimated position and the (k + 1) th observation position can be set to be 0.5, and in this case, the (k + 1) th estimated position and the (k + 1) th observation position are subjected to mean processing. In this embodiment, the weights of the (k + 1) th estimated position and the (k + 1) th observation position can be set according to the actual situation through a weighted average processing mode, and the motion information and the visual observation information are fused according to the weights, so that the accuracy of the target vanishing point position can be improved.
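As a minimal sketch of such a fusion, the following one-dimensional Kalman update combines the estimated (predicted) position with the observed position; the noise variances q and r are assumed tuning parameters, not values given in the patent:

```python
def kalman_fuse(v_pred, p_var, v_obs, q, r):
    """One-dimensional Kalman fusion of predicted and observed vanishing point.

    v_pred : (k+1)th estimated position from the IMU motion prediction
    p_var  : variance of that prediction
    v_obs  : (k+1)th observation position from lane lines
    q, r   : process noise and observation noise variances (assumed)
    """
    p_prior = p_var + q                # prediction uncertainty grows with motion noise
    gain = p_prior / (p_prior + r)     # Kalman gain trades prediction against observation
    v_post = v_pred + gain * (v_obs - v_pred)
    p_post = (1.0 - gain) * p_prior
    return v_post, p_post
```

Setting the gain to a constant 0.5 recovers the simple mean processing described above.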
Optionally, the determining, according to the detected lane line, a (k + 1) th observation position of the target vanishing point in the (k + 1) th frame image includes: and under the condition that the detected lane lines comprise at least 2 parallel lane lines, performing fitting operation on the at least 2 parallel lane lines to obtain the (k + 1) th observation position.
As an alternative embodiment, the lane lines on a road are parallel in the real scene, but due to the imaging principle, the lane lines captured in the camera image intersect at a point, and this point is the vanishing point. In this embodiment, taking the (k + 1) th frame image including two lane lines as an example, as shown in fig. 5, which is a schematic diagram of a vanishing point in an image according to an alternative embodiment of the invention, the figure includes a lane line 501 and a lane line 502, and the intersection of the extension lines of lane line 501 and lane line 502 is the (k + 1) th position of the target vanishing point in the (k + 1) th frame image. Lane lines 501 and 502 are only for illustrating this embodiment, and the number of lane lines may be determined according to the actual situation. In this embodiment, given at least 2 lane lines {l_1, l_2, ..., l_n}, the vanishing point is calculated by the least squares method, see the following formula:

    v = f_LS(l_1, l_2, ..., l_n)

where v is the vanishing point on the image, l_i is a lane line on the image, and f_LS is a least squares fit function.
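A minimal sketch of this fit is shown below, assuming each lane line is supplied in normalized line form a·u + b·v + c = 0 so that residuals are pixel distances; the representation is an assumption for illustration:

```python
import numpy as np

def fit_vanishing_point(lines):
    """Least-squares vanishing point from >= 2 image lines.

    lines : iterable of (a, b, c) with a*u + b*v + c = 0 and (a, b) unit-normalized
    """
    A = np.array([(a, b) for a, b, _ in lines], dtype=float)
    y = np.array([-c for _, _, c in lines], dtype=float)
    # Solve A @ [u, v] = y in the least-squares sense.
    (u, v), *_ = np.linalg.lstsq(A, y, rcond=None)
    return u, v
```

With exactly two lane lines this reduces to their intersection; with more, it returns the point minimizing the summed squared distances to all of them.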
Optionally, the determining, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera includes: determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera; and under the condition that the lane line meeting the second preset condition is not detected in the (k + 1) th frame image, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image as being equal to the (k + 1) th estimated position.
As an optional implementation manner, in many cases the lane line is worn and cannot be detected, so the (k + 1) th observation position of the target vanishing point in the (k + 1) th frame image cannot be determined from the lane lines; alternatively, the lane lines in the actual scene may not be parallel, in which case the lane lines in the (k + 1) th frame image do not intersect at one point and the (k + 1) th observation position likewise cannot be determined from them. In such cases, the (k + 1) th estimated position of the target vanishing point in the (k + 1) th frame image can be estimated from the previous frame, i.e. the kth frame image, and this (k + 1) th estimated position can be used as the position of the target vanishing point in the (k + 1) th frame image.
Optionally, determining a k +1 estimated position of the target vanishing point in a k +1 frame image captured by the target camera according to the position change value and a k position of the target vanishing point in the k frame image captured by the target camera, includes: determining the k +1 th predicted position as equal to the sum of the k-th position and the position change value.
As an alternative embodiment, as shown in FIG. 4, v_k is the kth position of the target vanishing point in the kth frame image taken by the target camera, v'_{k+1} is the estimated position of the target vanishing point in the (k + 1) th frame image taken by the target camera, and Δv is the position change value, so that:

    v'_{k+1} = v_k + Δv
optionally, the method further comprises: under the condition that at least 2 parallel lane lines are detected in the (k + 1) th frame image, determining that the lane lines meeting the second preset condition are detected in the (k + 1) th frame image; and under the condition that at least 2 parallel lane lines are not detected in the (k + 1) th frame image, determining that no lane line meeting the second preset condition is detected in the (k + 1) th frame image.
As an alternative embodiment, the vanishing point is the intersection of the extensions of at least two non-parallel straight lines. Due to the imaging principle, two parallel straight lines in an actual scene are not parallel in an image, and the intersection point of the extension lines is the vanishing point. That is, if the position of the target vanishing point in the (k + 1) th frame image is determined by the lane lines, it needs to be proved that the lane lines are parallel in the actual scene. However, if the lane lines in the actual scene are parallel, the lane lines imaged in the (k + 1) th frame image are not parallel, as shown in fig. 5. It can be proved that the lane lines are parallel in the actual scene by converting the imaged lane lines in the (k + 1) th frame image into a physical coordinate system.
As a preferred embodiment, a deep learning neural network may be used to detect the lane lines in the (k + 1) th frame image. Suppose each lane line l_i = {p_1, p_2, ..., p_n} consists of n pixel coordinates. The mapping relation H of the camera relative to the road surface can be calculated through extrinsic calibration of the camera, and if the road surface is flat, the physical position of the lane line in the current camera coordinate system can be calculated by the formula:

    P_w = H · K⁻¹ · p

where P_w is the world physical coordinate point in the camera coordinate system corresponding to a pixel point on the lane line, K is the camera intrinsic matrix, H is the mapping relation from the camera to the road surface, and p is the homogeneous coordinate of the lane line pixel. After the world physical coordinates of each pixel of a lane line are calculated, each lane line is fitted with a straight line l_i: y = a_i · x + b_i, and the fitting error is then calculated; if the fitting error is smaller than a threshold, the lane line is considered to be a straight line. The threshold may be determined according to the actual situation, for example 0.01 or 0.1. In this embodiment, whether at least 2 lane lines in the actual scene are parallel is detected from the (k + 1) th frame image by the coordinate system conversion described above. If the lane lines are parallel, the at least 2 parallel lane lines of the (k + 1) th frame image are determined to be lane lines meeting the second preset condition; otherwise, it is determined that no lane line meeting the second preset condition is detected in the (k + 1) th frame image.
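A sketch of this parallelism check is given below, assuming H maps normalized pixel rays to road-plane coordinates as in the formula above; the thresholds and names are illustrative:

```python
import numpy as np

def lanes_parallel(lanes_px, K, H, err_thresh=0.1, slope_thresh=0.05):
    """Check on the ground plane whether the detected lane lines are parallel.

    lanes_px : list of (n, 2) arrays of lane-line pixel coordinates
    K        : 3x3 camera intrinsic matrix
    H        : 3x3 mapping from normalized rays to the road plane
    """
    slopes = []
    for pts in lanes_px:
        uv1 = np.column_stack([pts, np.ones(len(pts))])   # homogeneous pixel coords
        ground = (H @ np.linalg.inv(K) @ uv1.T).T         # project onto the road plane
        ground = ground[:, :2] / ground[:, 2:3]           # dehomogenize to (x, y)
        a, b = np.polyfit(ground[:, 0], ground[:, 1], 1)  # fit the line y = a*x + b
        err = np.mean((ground[:, 1] - (a * ground[:, 0] + b)) ** 2)
        if err > err_thresh:                              # not straight enough
            return False
        slopes.append(a)
    # Parallel if all fitted slopes agree within tolerance.
    return len(slopes) >= 2 and max(slopes) - min(slopes) < slope_thresh
```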
Optionally, the method further comprises: and under the condition that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object detected in the (k + 1) th frame image.
As an alternative embodiment, if there is no visual observation of vanishing points for a long time, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image can be determined according to the projection position of the object detected in the (k + 1) th frame image.
Optionally, the determining the k +1 th position of the target vanishing point in the k +1 th frame image according to the projection position of the object detected in the k +1 th frame image includes: and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object detected in the (k + 1) th frame image, the height of the target camera to the ground and the focal length of the target camera.
As an alternative implementation, fig. 6 is a second schematic diagram according to an alternative embodiment of the invention, in which h is the height of the target camera above the ground, f is the focal length of the target camera, and Δy is the difference between the pixel at which a ray parallel to the ground projects onto the image and the grounding point of the vehicle ahead:

    Δy = y_b - v_{k+1}

where y_b is the projection position of the object detected in the (k + 1) th frame image, and v_{k+1} is the (k + 1) th position of the target vanishing point in the (k + 1) th frame image. Therefore, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image can be obtained by combining the above parameters in fig. 6.
Optionally, the determining the k +1 th position of the target vanishing point in the k +1 th frame image according to the projection position of the object detected in the k +1 th frame image, the height of the target camera to the ground and the focal length of the target camera includes: respectively determining the k +1 th estimated positions of the target vanishing point in the k +1 th frame image according to the projection positions of M objects in the N objects, the height from the target camera to the ground and the focal length of the target camera under the condition that the detected objects in the k +1 th frame image comprise N objects, and obtaining M k +1 th estimated positions in total, wherein N is 1 or a natural number greater than 1, and M is less than or equal to N; and determining the (k + 1) th position according to the (k + 1) th estimated positions.
As an alternative embodiment, a plurality of target objects may be included in front of the target vehicle, for example, a plurality of vehicles, a plurality of pedestrians, or a pedestrian at the right of the vehicle. If no vanishing point is observed for a long time, the (k + 1) th position of the vanishing point of the target in the (k + 1) th frame image can be reversely deduced by selecting the target distance with higher confidence coefficient from the plurality of target objects in the (k + 1) th frame image. Specifically, a Random sample consensus (RANSAC) algorithm may be used to cyclically remove an object with a low confidence, and the (k + 1) th position of the target vanishing point in the (k + 1) th frame image may be obtained by using the object position information with a high confidence. In this embodiment, assuming that the object detected in the (k + 1) th frame image includes N objects, the (k + 1) th estimated position of the target vanishing point in the (k + 1) th frame image may be determined according to the parameter information of M objects of the N objects by determining the position of the vanishing point in the image according to the projection position of the object, the height from the target camera to the ground, and the focal length of the target camera in the above embodiment. The M objects are objects with a high confidence level among the N objects. And finally, obtaining an average value of the k +1 th estimated positions of the M, or determining the k +1 th position of the target vanishing point in the k +1 th frame image through Kalman filtering.
Optionally, the determining, according to the projection positions of M of the N objects, the height of the target camera above the ground, and the focal length of the target camera, the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame image respectively to obtain M (k + 1) th estimated positions includes: acquiring the first M objects of the N objects when ordered by confidence from high to low; and respectively determining the (k + 1) th estimated position of the target vanishing point in the (k + 1) th frame image according to the projection positions of the first M objects, the height of the target camera above the ground, and the focal length of the target camera, obtaining M (k + 1) th estimated positions.
As an alternative embodiment, a Random sample consensus algorithm (RANSAC algorithm for short) may be used to circularly remove objects with low confidence. Specifically, the N objects may be sorted in the order from high confidence to low confidence, and the top M objects with the top confidence may be selected from the N objects. And then determining the (k + 1) th estimated position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object, the height from the target camera to the ground and the focal length of the target camera in the embodiment, so as to obtain M (k + 1) th estimated positions. And then the (k + 1) th position of the target vanishing point in the (k + 1) th frame image can be obtained by means of averaging the (k + 1) th estimated positions or Kalman filtering.
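The following sketch illustrates this back-calculation with a simple top-M confidence selection standing in for the full RANSAC loop; the object representation and field names are assumptions:

```python
def vanishing_point_from_objects(objects, h, f, top_m=3):
    """Back out the vanishing point's row from high-confidence tracked objects.

    objects : list of dicts with keys 'y_b' (grounding-point row, pixels),
              'd' (estimated distance, meters) and 'conf' (confidence in [0, 1])
    h, f    : camera height above ground (m) and focal length (pixels)
    """
    # Keep the M most confident objects, discarding low-confidence ones.
    best = sorted(objects, key=lambda o: o["conf"], reverse=True)[:top_m]
    # Each object yields one estimate v = y_b - f*h/d; average the estimates.
    estimates = [o["y_b"] - f * h / o["d"] for o in best]
    return sum(estimates) / len(estimates)
```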
Optionally, the determining, according to the projection positions of M objects in the N objects, the height of the target camera above the ground, and the focal length of the target camera, the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame image respectively includes: under the condition that the position change value of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate, for each of the M objects, determining the (k + 1) th estimated longitudinal coordinate of the target vanishing point in the (k + 1) th frame image by the following formula:

    v_{k+1}^i = y_b^i - (f · h) / d_i

where v_{k+1}^i is the (k + 1) th estimated longitudinal coordinate corresponding to the ith object in the M objects, y_b^i is the longitudinal coordinate of the projection of the grounding point of the ith object on the longitudinal axis, h is the height of the target camera above the ground, f is the focal length of the target camera, and d_i is the estimated distance between the target camera and the ith object.
As an optional implementation manner, the change of the position of the vanishing point in the image due to factors such as road bump is generally longitudinal. In this embodiment, the position change value of the target vanishing point between the kth frame and the (k + 1) th frame may be a change of the longitudinal coordinate, and the (k + 1) th estimated longitudinal coordinate of the target vanishing point in the (k + 1) th frame image may be determined by the following formula:

    v_{k+1}^i = y_b^i - (f · h) / d_i

where v_{k+1}^i is the (k + 1) th estimated longitudinal coordinate corresponding to the ith object in the M objects, y_b^i is the longitudinal coordinate of the projection of the grounding point of the ith object on the longitudinal axis, h is the height of the target camera above the ground, f is the focal length of the target camera, and d_i is the estimated distance between the target camera and the ith object.
Optionally, the determining the k +1 th position according to the M k +1 th predicted positions includes: determining the k +1 th position as being equal to an average of the M k +1 th predicted positions.
As an alternative implementation, the average value of the M k +1 th predicted positions may be used as the k +1 th position of the target vanishing point in the k +1 th frame image.
Optionally, the method further comprises: determining that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition under the condition that at least 2 parallel lane lines are not detected in the continuous P1 frame images shot by the target camera before the (k + 1) th frame; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P1 is a preset natural number greater than 1; or under the condition that at least 2 parallel lane lines are not detected in the P2 frame images shot by the target camera within the preset time length before the (k + 1) th frame, determining that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P2 is a preset natural number greater than 1.
As an alternative embodiment, the first preset condition is described below, and the first preset condition may include two cases:
in case 1, at least 2 parallel lane lines are not detected in each of the consecutive P1 frame images captured before the k +1 th frame image captured by the subject camera, and it is determined that the first preset bar is not satisfied. Conversely, if there is a certain frame image in the continuous P1 frame images or at least 2 parallel lane lines are detected in a certain number of frame images, it is determined that the first preset condition is satisfied.
In case 2, the target camera may capture a P3 frame image within a preset time period before the k +1 th frame image captured by the target camera, wherein P3 may be greater than or equal to P2. Under the condition that P3 is larger than P2, if at least 2 parallel lane lines are not detected in the P2 frame images, determining that the first preset condition is not met, wherein the P2 frame images can be continuous images or interval images; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, wherein P2 is a preset natural number greater than 1.
In this embodiment, when the first preset condition is satisfied, the data of the previous k-th frame may be used to obtain the k +1 th estimated position of the target vanishing point in the k +1 th frame image captured by the target camera by using the IMU as the motion prediction of the vanishing point, and the k +1 th observed position of the target vanishing point in the k +1 th frame image obtained by the visual observation may be fused to obtain the k +1 th position of the target vanishing point in the k +1 th frame image. Under the condition that the first preset condition is not met, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image can be determined according to the projection position of the object detected in the (k + 1) th frame image.
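A minimal sketch of case 1 of the first preset condition, assuming a per-frame boolean record of whether at least 2 parallel lane lines were detected:

```python
def first_condition_met(parallel_lane_history, p1):
    """True if parallel lane lines were seen in at least one of the last p1 frames.

    parallel_lane_history : list of booleans, one per past frame
    p1                    : window length (preset natural number > 1)
    """
    return any(parallel_lane_history[-p1:])
```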
Optionally, the determining an angle change value of the target camera between the kth frame and the (k + 1) th frame according to the kth frame angular velocity and the (k + 1) th frame angular velocity output by the target sensor includes: under the condition that the position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame is a change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate, determining the angle change value Δθ of the target camera between the kth frame and the (k + 1) th frame by the following formula:

    Δθ = ∫[t_k, t_{k+1}] ω dt ≈ (ω_k + ω_{k+1}) / 2 · (t_{k+1} - t_k)

where ω_k is the kth frame angular velocity output by the target sensor, and ω_{k+1} is the (k + 1) th frame angular velocity output by the target sensor.
As an alternative embodiment, the IMU may consist of a 3-axis gyroscope and a 3-axis accelerometer, where the gyroscope outputs angular velocity about 3 axes and the accelerometer outputs acceleration information along 3 axes. The time interval between two adjacent image frames may be 20 ms to 50 ms. Typically, an IMU runs at a frequency between 100 Hz and 200 Hz, which is much higher than the image frequency. Therefore, over such a short time, the angular velocity output by the gyroscope can be integrated to accurately calculate the angle change; specifically, for the 3 axes respectively:

    θ_x = ∫[k, k+1] ω_x dt
    θ_y = ∫[k, k+1] ω_y dt
    θ_z = ∫[k, k+1] ω_z dt

where k and k + 1 are the times of the kth and (k + 1) th frame images respectively, ω is the angular velocity value output by the gyroscope, and θ is the angle integrated between the kth and (k + 1) th frame images. After the angle change of the camera between two frames of images is calculated, the change of the vanishing point can be calculated through an approximate transformation.
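Because the IMU runs much faster than the camera, several gyroscope samples fall between two frames; the per-axis integration can then be done with the trapezoidal rule, as in this illustrative sketch:

```python
def integrate_gyro(omega, t):
    """Trapezoidal integration of 3-axis gyroscope rates between frames k and k+1.

    omega : list of (wx, wy, wz) angular velocities sampled between the frames (rad/s)
    t     : matching sample timestamps (s)
    Returns [theta_x, theta_y, theta_z], the integrated angle change (rad).
    """
    theta = [0.0, 0.0, 0.0]
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        for ax in range(3):
            # Average the two neighboring samples over the interval dt.
            theta[ax] += 0.5 * (omega[i - 1][ax] + omega[i][ax]) * dt
    return theta
```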
Optionally, the determining, according to the angle change value and the focal length of the target camera, a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame includes: determining the position change value Δv of the position of the target vanishing point between the kth frame and the (k + 1) th frame by the following formula:

    Δv = f · tan(Δθ)

where f is the focal length of the target camera and Δθ is the angle change value. In this embodiment, this is the relation shown in FIG. 4.
Optionally, the determining a distance between a target object in the (k + 1) th frame image and the target camera according to the (k + 1) th position of the target vanishing point includes: under the condition that the position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame is a change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate, determining the distance d between the target object in the (k + 1) th frame image and the target camera by the following formulas:

    Δy = y_b - v_{k+1}
    d = (f · h) / Δy

where y_b is the longitudinal coordinate of the projection of the grounding point of the target object on the longitudinal axis, v_{k+1} is the (k + 1) th longitudinal coordinate of the target vanishing point in the (k + 1) th frame image, h is the height of the target camera above the ground, and f is the focal length of the target camera.
As an alternative, referring to fig. 6, with the parameters shown in fig. 6, the distance from the camera lens to the vehicle ahead is found by the formula d = (f · h) / Δy, and, in this embodiment, Δy can be obtained by the formula Δy = y_b - v_{k+1}, where y_b is the longitudinal coordinate of the grounding point of the vehicle ahead projected on the longitudinal axis, v_{k+1} is the (k + 1) th longitudinal coordinate of the target vanishing point in the (k + 1) th frame image, h is the height of the target camera above the ground, and f is the focal length of the target camera.
Optionally, the method further comprises: determining a lateral distance x of the target object from the target camera in the (k + 1) th frame image by:

    x = (u_b - u_v) · d / f

where u_v is the horizontal coordinate of the target vanishing point in the (k + 1) th frame image, u_b is the lateral coordinate of the projection of the grounding point of the target object on the lateral axis, f is the focal length of the target camera, and d is the distance between the target object and the target camera in the (k + 1) th frame image.
As an alternative implementation, the lateral position information of the target object can also be estimated from the visual geometry. As shown in fig. 7, which is a top view of the lateral position estimation of the target object according to an alternative embodiment of the present invention, x is the lateral distance of the target object in the camera coordinate system, u_b is the lateral coordinate of the target object's grounding point projected on the image, and u_v is the lateral coordinate of the target vanishing point. From the visual geometric relationship:

    x = (u_b - u_v) · d / f

where d is the distance between the target object in the (k + 1) th frame image and the target camera, and f is the focal length of the target camera.
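Putting the longitudinal and lateral formulas together, a target's ground position can be recovered from its grounding point and the fused vanishing point, as in this sketch (names assumed):

```python
def locate_target(y_b, u_b, v_vp, u_vp, h, f):
    """Longitudinal distance d and lateral offset x of a target.

    y_b, u_b   : row and column of the target's grounding point (pixels)
    v_vp, u_vp : row and column of the fused vanishing point (pixels)
    h, f       : camera height above ground (m) and focal length (pixels)
    """
    dy = y_b - v_vp                # grounding point must lie below the vanishing point
    if dy <= 0:
        return None                # degenerate geometry, e.g. target at/above horizon
    d = f * h / dy                 # longitudinal distance by the plane assumption
    x = (u_b - u_vp) * d / f       # lateral offset by similar triangles
    return d, x
```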
Optionally, the method further comprises: determining the estimated position, the estimated speed and the estimated acceleration of the target object in the (k + 1) th frame image according to the following formulas:

    p_{k+1} = p_k + s_k · T + (1/2) · a_k · T²
    s_{k+1} = s_k + a_k · T
    a_{k+1} = a_k

where T is the time interval between the kth frame and the (k + 1) th frame, p_{k+1}, s_{k+1} and a_{k+1} are respectively the estimated position, estimated speed and estimated acceleration of the target object in the (k + 1) th frame image, and p_k, s_k and a_k are respectively the position, velocity and acceleration of the target object in the kth frame image.
As an optional implementation manner, in practical applications, worn lane lines, lane lines that are not in a parallel relationship, or a traveling road that does not satisfy the plane assumption can all introduce a large error into the position estimation. To reduce the fluctuation caused by such errors, this embodiment adopts a Kalman filtering method with a constant acceleration model:

    p_{k+1} = p_k + s_k · T + (1/2) · a_k · T²
    s_{k+1} = s_k + a_k · T
    a_{k+1} = a_k

where p_{k+1}, s_{k+1} and a_{k+1} are respectively the estimated position, estimated speed and estimated acceleration of the target object in the (k + 1) th frame image, and p_k, s_k and a_k are respectively the position, velocity and acceleration of the target object in the kth frame image.
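A minimal sketch of the prediction step of such a constant acceleration Kalman filter follows; the process noise model Q is an assumed simplification:

```python
import numpy as np

def ca_kalman_predict(state, P, T, q=1e-2):
    """Prediction step of a constant-acceleration Kalman filter.

    state : [position, speed, acceleration] of the target at frame k
    P     : 3x3 state covariance matrix
    T     : time interval between frames k and k+1 (s)
    q     : scalar process-noise scale (assumed tuning parameter)
    """
    F = np.array([[1.0, T, 0.5 * T * T],   # p' = p + s*T + a*T^2/2
                  [0.0, 1.0, T],           # s' = s + a*T
                  [0.0, 0.0, 1.0]])        # a' = a (constant acceleration)
    Q = q * np.eye(3)                      # simple isotropic process noise (assumed)
    state_pred = F @ np.asarray(state, dtype=float)
    P_pred = F @ P @ F.T + Q
    return state_pred, P_pred
```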
Optionally, a position change value of the target vanishing point between the kth frame and the (k + 1) th frame is a change value of a longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate.
According to the method and the device, the vanishing point can be estimated dynamically in real time, yielding higher-precision obstacle ranging information than static vanishing point estimation. Even if lane lines cannot be detected, are not parallel, or are detected incorrectly, the vanishing point can be deduced in reverse from the measured target distances with the highest confidence. In addition, the present application provides vanishing point observation by combining monocular vision, estimates the position of the vanishing point on the image with high precision through Kalman filtering, calculates high-precision dynamic obstacle position information through the visual geometric relationship, and then calculates the position and speed information of the obstacle through Kalman filtering. The vanishing point estimation is thereby more accurate and more robust.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided a target object ranging apparatus for implementing the above target object ranging method. As shown in fig. 8, the apparatus includes: a first determining module 82, configured to determine an angle change value of a target camera between a k frame and a k +1 frame according to a k frame angular velocity and a k +1 frame angular velocity output by a target sensor, where the target camera and the target sensor are disposed on a target vehicle, and k is a natural number; a second determining module 84, configured to determine, according to the angle change value and the focal length of the target camera, a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame; a third determining module 86, configured to determine, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera when a position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition; a fourth determining module 88, configured to determine, according to the (k + 1) th position of the target vanishing point, a distance between the target object in the (k + 1) th frame of image and the target camera.
Optionally, the above apparatus is further configured to determine, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera by: determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera; under the condition that a lane line meeting a second preset condition is detected in the (k + 1) th frame image, determining a (k + 1) th observation position of the target vanishing point in the (k + 1) th frame image according to the detected lane line; and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the (k + 1) th estimated position and the (k + 1) th observation position.
Optionally, the above apparatus is further configured to determine, according to the (k + 1) th estimated position and the (k + 1) th observation position, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image by: filtering the (k + 1) th estimated position and the (k + 1) th observation position to obtain a (k + 1) th position; or carrying out weighted average processing on the (k + 1) th estimated position and the (k + 1) th observation position to obtain the (k + 1) th position.
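As an illustrative sketch only (the patent fixes neither the filter gains nor the weights), the two fusion options could look as follows in Python; `alpha`, `p_pred` and `r_obs` are assumed tuning values, not parameters named in the patent:

```python
def fuse_weighted(y_pred, y_obs, alpha=0.7):
    """Weighted average of the predicted and observed vanishing point
    y-coordinates (alpha is an assumed tuning weight)."""
    return alpha * y_pred + (1.0 - alpha) * y_obs

def fuse_kalman_1d(y_pred, p_pred, y_obs, r_obs):
    """One scalar Kalman update step: y_pred/p_pred are the predicted
    position and its variance, y_obs/r_obs the observation and its noise."""
    k = p_pred / (p_pred + r_obs)          # Kalman gain
    y_new = y_pred + k * (y_obs - y_pred)  # corrected position
    p_new = (1.0 - k) * p_pred             # corrected variance
    return y_new, p_new
```

The Kalman variant degrades gracefully to pure prediction when the observation noise grows large, which suits frames with poorly detected lane lines.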
Optionally, the apparatus is further configured to determine, according to the detected lane line, a k +1 th observation position of the target vanishing point in the k +1 th frame image by: and under the condition that the detected lane lines comprise at least 2 parallel lane lines, performing fitting operation on the at least 2 parallel lane lines to obtain the (k + 1) th observation position.
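For illustration, a minimal Python sketch of such a fitting operation, assuming each detected lane line has already been fitted to a straight line a·x + b·y + c = 0 in the image plane (this representation is an assumption, not fixed by the patent):

```python
import numpy as np

def observe_vanishing_point(lane_lines):
    """Least-squares intersection of at least 2 image lines, each given
    as coefficients (a, b, c) of a*x + b*y + c = 0; the image-plane
    intersection of parallel lane lines is the observed vanishing point."""
    A = np.array([[a, b] for a, b, _ in lane_lines], dtype=float)
    rhs = np.array([-c for _, _, c in lane_lines], dtype=float)
    (x0, y0), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x0, y0
```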
Optionally, the above apparatus is further configured to determine, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera by: determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera; and under the condition that the lane line meeting the second preset condition is not detected in the (k + 1) th frame image, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image as being equal to the (k + 1) th estimated position.
Optionally, the apparatus is further configured to determine, according to the position change value and the k-th position of the target vanishing point in the k-th frame image captured by the target camera, the (k + 1) th estimated position of the target vanishing point in the (k + 1) th frame image captured by the target camera, by: determining the (k + 1) th estimated position as equal to the sum of the k-th position and the position change value.
Optionally, the apparatus is further configured to determine that a lane line meeting the second preset condition is detected in the (k + 1) th frame image when at least 2 parallel lane lines are detected in the (k + 1) th frame image; and under the condition that at least 2 parallel lane lines are not detected in the (k + 1) th frame image, determining that no lane line meeting the second preset condition is detected in the (k + 1) th frame image.
Optionally, the above apparatus is further configured to, when a position determination result of the target vanishing point before the (k + 1) th frame does not satisfy the first preset condition, determine the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to a projection position of an object detected in the (k + 1) th frame image.
Optionally, the above apparatus is further configured to determine, according to the projection position of the object detected in the (k + 1) th frame image, the (k + 1) th position of the target vanishing point in the (k + 1) th frame image by: and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object detected in the (k + 1) th frame image, the height of the target camera to the ground and the focal length of the target camera.
Optionally, the above apparatus is further configured to determine the k +1 th position of the target vanishing point in the k +1 th frame image according to the projection position of the object detected in the k +1 th frame image, the height of the target camera to the ground, and the focal length of the target camera, by: respectively determining the k +1 th estimated positions of the target vanishing point in the k +1 th frame image according to the projection positions of M objects in the N objects, the height from the target camera to the ground and the focal length of the target camera under the condition that the detected objects in the k +1 th frame image comprise N objects, and obtaining M k +1 th estimated positions in total, wherein N is 1 or a natural number greater than 1, and M is less than or equal to N; and determining the (k + 1) th position according to the (k + 1) th estimated positions.
Optionally, the above apparatus is further configured to determine, according to the projection positions of M objects in the N objects, the height from the target camera to the ground and the focal length of the target camera, the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame image respectively, obtaining M (k + 1) th estimated positions in total, by: acquiring the first M objects of the N objects when the objects are ordered by confidence degree from high to low; and respectively determining the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame image according to the projection positions of the first M objects, the height from the target camera to the ground and the focal length of the target camera, obtaining M (k + 1) th estimated positions in total.
Optionally, the above apparatus is further configured to determine, according to the projection positions of M of the N objects, the height of the target camera to the ground, and the focal length of the target camera, the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame image respectively by: under the condition that the position change value of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate, for each of the M objects, determining the (k + 1) th estimated longitudinal coordinate of the target vanishing point in the (k + 1) th frame image by the following formula:

$$\hat{y}_{0,i} = y_i - \frac{H \cdot f}{\hat{d}_i}$$

wherein $\hat{y}_{0,i}$ is the (k + 1) th estimated longitudinal coordinate corresponding to the i-th object in the M objects, $y_i$ is the longitudinal coordinate of the projection of the grounding point of the i-th object in the M objects on the longitudinal axis, $H$ is the height of the target camera to the ground, $f$ is the focal length of the target camera, and $\hat{d}_i$ is the estimated distance between the target camera and the i-th object.
Optionally, the apparatus is further configured to determine the (k + 1) th position according to the M (k + 1) th estimated positions by: determining the (k + 1) th position as equal to the average of the M (k + 1) th estimated positions.
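A minimal sketch of this fallback in Python, assuming each detection carries its grounding-point row, an estimated distance and a confidence score (the tuple layout is hypothetical):

```python
def vanishing_point_from_objects(detections, H, f, M):
    """Back-project a vanishing point row from each of the M most confident
    detections via y0 = y_i - H*f/d_i, then average the M estimates.
    detections: iterable of (y_i, d_i, confidence) tuples (assumed layout)."""
    top = sorted(detections, key=lambda t: t[2], reverse=True)[:M]
    estimates = [y_i - H * f / d_i for y_i, d_i, _ in top]
    return sum(estimates) / len(estimates)
```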
Optionally, the apparatus is further configured to determine that the position determination result of the target vanishing point before the (k + 1) th frame does not satisfy the first preset condition when at least 2 parallel lane lines are not detected in any of the consecutive P1 frame images captured by the target camera before the (k + 1) th frame; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P1 is a preset natural number greater than 1; or under the condition that at least 2 parallel lane lines are not detected in the P2 frame images shot by the target camera within the preset time length before the (k + 1) th frame, determining that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P2 is a preset natural number greater than 1.
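Schematically, and only as one possible reading of the first preset condition, the consecutive-frame variant could be checked like this (the per-frame boolean history is an assumed data structure):

```python
def vanishing_point_trusted(parallel_lane_history, P1):
    """parallel_lane_history: one boolean per past frame, True if at least
    2 parallel lane lines were detected in that frame (assumed layout).
    The first preset condition fails only when none of the last P1
    consecutive frames contained at least 2 parallel lane lines; with
    fewer than P1 frames of history the check is over what exists."""
    recent = parallel_lane_history[-P1:]
    return any(recent)
```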
Optionally, the above apparatus is further configured to determine the angle change value of the target camera between the k-th frame and the (k + 1) th frame according to the k-th frame angular velocity and the (k + 1) th frame angular velocity output by the target sensor, by: determining the angle change value of the target camera between the kth frame and the (k + 1) th frame by the following formula when the position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate:

$$\Delta\theta = \sum_{i=k}^{k+1} \omega_i \cdot \Delta t$$

wherein $\Delta\theta$ is the angle change value, $\omega_k$ is the k-th frame angular velocity output by the target sensor, $\omega_{k+1}$ is the (k + 1) th frame angular velocity output by the target sensor, $\omega_i$ is the i-th angular velocity output by the target sensor between the k-th frame angular velocity and the (k + 1) th frame angular velocity, and $\Delta t$ is the output interval of the target sensor.
Optionally, the above apparatus is further configured to determine, according to the angle change value and the focal length of the target camera, the position change value of the position of the target vanishing point between the k-th frame and the (k + 1) th frame by: determining the position change value of the position of the target vanishing point between the k-th frame and the (k + 1) th frame by the following formula:

$$\Delta y_0 = f \cdot \tan(\Delta\theta)$$

wherein $f$ is the focal length of the target camera, and $\Delta\theta$ is the angle change value.
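Putting the two formulas together, a compact Python sketch of the gyro-driven prediction of the vanishing point row (the rectangle-rule integration and the sample layout are assumptions consistent with the variables defined above):

```python
import math

def predict_vanishing_point_row(y0_k, omega_samples, dt, f):
    """Predict the vanishing point longitudinal coordinate for frame k+1.
    omega_samples: pitch rates (rad/s) output by the gyro from frame k
    to frame k+1; dt: gyro output interval (s); f: focal length (pixels)."""
    d_theta = sum(omega_samples) * dt  # angle change value
    d_y0 = f * math.tan(d_theta)       # position change value
    return y0_k + d_y0                 # (k+1)-th estimated position
```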
Optionally, the above apparatus is further configured to determine the distance between the target object in the (k + 1) th frame image and the target camera according to the (k + 1) th position of the target vanishing point by: determining the distance d between the target object in the (k + 1) th frame image and the target camera by the following formula when the position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate:

$$d = \frac{H \cdot f}{y_b - y_0}$$

wherein $y_b$ is the longitudinal coordinate of the projection of the grounding point of the target object on the longitudinal axis, $y_0$ is the (k + 1) th longitudinal coordinate of the target vanishing point in the (k + 1) th frame image, $H$ is the height of the target camera to the ground, and $f$ is the focal length of the target camera.
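In code form the ranging step is a one-liner; this sketch assumes pixel coordinates with the longitudinal axis pointing downward, so the grounding point projects below the vanishing point:

```python
def longitudinal_distance(y_b, y_0, H, f):
    """d = H * f / (y_b - y_0): grounding-point row y_b and vanishing
    point row y_0 in pixels, camera height H in meters, focal length f
    in pixels; returns the distance along the road in meters."""
    if y_b <= y_0:
        raise ValueError("grounding point must project below the vanishing point")
    return H * f / (y_b - y_0)
```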
Optionally, the above apparatus is further configured to determine the lateral distance x between the target object in the (k + 1) th frame image and the target camera by the following formula:

$$x = \frac{(x_b - x_0) \cdot d}{f}$$

wherein $x_0$ is the lateral coordinate of the target vanishing point in the (k + 1) th frame image, $x_b$ is the lateral coordinate of the projection of the grounding point of the target object on the lateral axis, d is the distance between the target object and the target camera in the (k + 1) th frame image, and $f$ is the focal length of the target camera.
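And the lateral offset, as a sketch that reuses the distance d obtained above (the sign convention is an assumption):

```python
def lateral_distance(x_b, x_0, d, f):
    """x = (x_b - x_0) * d / f: grounding-point column x_b and vanishing
    point column x_0 in pixels, longitudinal distance d in meters and
    focal length f in pixels; positive x means the target sits to the
    right of the camera axis under the assumed convention."""
    return (x_b - x_0) * d / f
```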
Optionally, the above apparatus is further configured to determine the estimated position, the estimated speed and the estimated acceleration of the target object in the (k + 1) th frame image by the following formulas:

$$\hat{p}_{k+1} = p_k + v_k T + \tfrac{1}{2} a_k T^2, \qquad \hat{v}_{k+1} = v_k + a_k T, \qquad \hat{a}_{k+1} = a_k$$

wherein T is the time interval between the kth frame and the (k + 1) th frame, $\hat{p}_{k+1}$, $\hat{v}_{k+1}$ and $\hat{a}_{k+1}$ are respectively the estimated position, the estimated speed and the estimated acceleration of the target object in the (k + 1) th frame image, and $p_k$, $v_k$ and $a_k$ are respectively the position, the velocity and the acceleration of the target object in the k-th frame image.
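A sketch of this constant-acceleration prediction step, which would feed the correction step of the Kalman filter mentioned above (the scalar state layout is assumed):

```python
def predict_target_state(p_k, v_k, a_k, T):
    """Constant-acceleration motion model over one frame interval T:
    predicts position, velocity and acceleration of the target for
    frame k+1 from its state at frame k."""
    p_est = p_k + v_k * T + 0.5 * a_k * T ** 2
    v_est = v_k + a_k * T
    a_est = a_k  # acceleration assumed constant between frames
    return p_est, v_est, a_est
```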
Optionally, a position change value of the target vanishing point between the kth frame and the (k + 1) th frame is a change value of a longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the above-described distance measuring method for a target object, which may be an in-vehicle apparatus mounted on a target vehicle shown in fig. 1. As shown in fig. 9, the electronic device comprises a memory 902 and a processor 904, the memory 902 having stored therein a computer program, the processor 904 being arranged to perform the steps of any of the above-described method embodiments by means of the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, determining an angle change value of the target camera between the kth frame and the (k + 1) th frame according to the kth frame angular velocity and the (k + 1) th frame angular velocity output by the target sensor, wherein the target camera and the target sensor are arranged on the target vehicle, and k is a natural number;
s2, determining a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera;
s3, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition;
s4, determining the distance between the target object in the (k + 1) th frame image and the target camera according to the (k + 1) th position of the target vanishing point.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 9 is only an illustration, and the electronic device may also be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 9 does not limit the structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 9, or have a configuration different from that shown in fig. 9.
The memory 902 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the target object ranging method and apparatus in the embodiments of the present invention, and the processor 904 executes various functional applications and data processing by running the software programs and modules stored in the memory 902, that is, implements the above target object ranging method. The memory 902 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 902 may further include memory located remotely from the processor 904, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 902 may be, but is not limited to, used for storing information such as images. As an example, as shown in fig. 9, the memory 902 may include, but is not limited to, the first determining module 82, the second determining module 84, the third determining module 86, and the fourth determining module 88 of the above target object ranging apparatus. In addition, the memory 902 may further include, but is not limited to, other module units of the target object ranging apparatus, which are not described in detail in this example.
Optionally, the transmission device 906 is used for receiving or sending data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 906 includes a network adapter (Network Interface Controller, NIC) that can be connected to other network devices and a router via a network cable so as to communicate with the internet or a local area network. In another example, the transmission device 906 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 908 for displaying the distance between the target object and the target camera; and a connection bus 910 for connecting the respective module components in the above-described electronic apparatus.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting a plurality of nodes through network communication. The nodes can form a peer-to-peer (P2P) network, and any type of computing device, such as a server, a terminal or another electronic device, can become a node in the blockchain system by joining the peer-to-peer network.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations described above. Wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, determining an angle change value of the target camera between the kth frame and the (k + 1) th frame according to the kth frame angular velocity and the (k + 1) th frame angular velocity output by the target sensor, wherein the target camera and the target sensor are arranged on the target vehicle, and k is a natural number;
s2, determining a position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera;
s3, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image shot by the target camera according to the position change value under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition;
s4, determining the distance between the target object in the (k + 1) th frame image and the target camera according to the (k + 1) th position of the target vanishing point.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing the relevant hardware of the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (15)

1. A method of ranging a target object, comprising:
determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number;
determining a position change value of the position of a target vanishing point between the kth frame and the (k + 1) th frame according to the angle change value and the focal length of the target camera;
under the condition that the position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition, determining the (k + 1) th position of the target vanishing point in a (k + 1) th frame image shot by the target camera according to the position change value;
and determining the distance between a target object in the (k + 1) th frame of image and the target camera according to the (k + 1) th position of the target vanishing point.
2. The method of claim 1, wherein determining the k +1 th position of the target vanishing point in the k +1 th frame image captured by the target camera according to the position variation value comprises:
determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera;
under the condition that a lane line meeting a second preset condition is detected in the (k + 1) th frame image, determining a (k + 1) th observation position of the target vanishing point in the (k + 1) th frame image according to the detected lane line;
and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the (k + 1) th estimated position and the (k + 1) th observation position.
3. The method according to claim 2, wherein the determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the (k + 1) th estimated position and the (k + 1) th observed position comprises:
filtering the (k + 1) th estimated position and the (k + 1) th observation position to obtain a (k + 1) th position; or
And carrying out weighted average processing on the (k + 1) th estimated position and the (k + 1) th observation position to obtain the (k + 1) th position.
4. The method according to claim 2, wherein the determining the k +1 th observation position of the target vanishing point in the k +1 th frame image according to the detected lane line comprises:
and under the condition that the detected lane lines comprise at least 2 parallel lane lines, performing fitting operation on the at least 2 parallel lane lines to obtain the (k + 1) th observation position.
5. The method of claim 1, wherein determining the k +1 th position of the target vanishing point in the k +1 th frame image captured by the target camera according to the position variation value comprises:
determining a k +1 estimated position of the target vanishing point in a k +1 frame image shot by the target camera according to the position change value and the k position of the target vanishing point in the k frame image shot by the target camera;
and under the condition that a lane line meeting a second preset condition is not detected in the (k + 1) th frame image, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image as being equal to the (k + 1) th estimated position.
6. The method according to claim 2 or 5, characterized in that the method further comprises:
under the condition that at least 2 parallel lane lines are detected in the (k + 1) th frame image, determining that the lane lines meeting the second preset condition are detected in the (k + 1) th frame image;
and under the condition that at least 2 parallel lane lines are not detected in the (k + 1) th frame image, determining that no lane line meeting the second preset condition is detected in the (k + 1) th frame image.
7. The method of claim 1, further comprising:
and under the condition that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition, determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object detected in the (k + 1) th frame image.
8. The method according to claim 7, wherein the determining the k +1 th position of the target vanishing point in the k +1 th frame image according to the projection position of the object detected in the k +1 th frame image comprises:
and determining the (k + 1) th position of the target vanishing point in the (k + 1) th frame image according to the projection position of the object detected in the (k + 1) th frame image, the height of the target camera to the ground and the focal length of the target camera.
9. The method of claim 8, wherein the determining the k +1 th position of the target vanishing point in the k +1 th frame image according to the projection position of the detected object in the k +1 th frame image, the height of the target camera to the ground and the focal length of the target camera comprises:
respectively determining the k +1 th estimated positions of the target vanishing point in the k +1 th frame image according to the projection positions of M objects in the N objects, the height from the target camera to the ground and the focal length of the target camera under the condition that the detected objects in the k +1 th frame image comprise N objects, and obtaining M k +1 th estimated positions in total, wherein N is 1 or a natural number more than 1, and M is less than or equal to N;
and determining the (k + 1) th position according to the (k + 1) th estimated positions.
10. The method according to claim 9, wherein the determining k +1 th estimated positions of the vanishing point of the target in the k +1 th frame of image according to the projection positions of M of the N objects, the height of the target camera to the ground, and the focal length of the target camera respectively to obtain M k +1 th estimated positions comprises:
acquiring the first M objects of the N objects when the objects are ordered by confidence degree from high to low;
and respectively determining the (k + 1) th estimated positions of the target vanishing point in the (k + 1) th frame image according to the projection positions of the first M objects, the height from the target camera to the ground and the focal length of the target camera, obtaining M (k + 1) th estimated positions in total.
11. The method according to claim 9, wherein the determining the k +1 th estimated position of the target vanishing point in the k +1 th frame image according to the projection positions of the M objects in the N objects, the height of the target camera to the ground and the focal length of the target camera respectively comprises:
under the condition that the position change value of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate, determining the (k + 1) th estimated longitudinal coordinate of the target vanishing point in the (k + 1) th frame image by the following formula for each of the M objects:
$$\hat{y}_{0,i} = y_i - \frac{H \cdot f}{\hat{d}_i}$$

wherein $\hat{y}_{0,i}$ is the (k + 1) th estimated longitudinal coordinate corresponding to the i-th object in the M objects, $y_i$ is the longitudinal coordinate of the projection of the grounding point of the i-th one of the M objects on the longitudinal axis, $H$ is the height of the target camera to the ground, $f$ is the focal length of the target camera, and $\hat{d}_i$ is the estimated distance between the target camera and the i-th object.
12. The method of claim 1 or 7, further comprising:
determining that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition under the condition that at least 2 parallel lane lines are not detected in the continuous P1 frame images shot by the target camera before the (k + 1) th frame; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P1 is a preset natural number greater than 1; or
Determining that the position determination result of the target vanishing point before the (k + 1) th frame does not meet the first preset condition under the condition that at least 2 parallel lane lines are not detected in the P2 frame images shot by the target camera within a preset time length before the (k + 1) th frame; otherwise, determining that the position determination result of the target vanishing point before the (k + 1) th frame meets the first preset condition, wherein P2 is a preset natural number greater than 1.
13. The method according to any one of claims 1 to 5 and 8 to 11, wherein the determining an angle change value of the target camera between the k frame and the k +1 frame according to the k frame angular velocity and the k +1 frame angular velocity output by the target sensor comprises:
determining the angle change value of the target camera between the kth frame and the (k + 1) th frame by the following formula under the condition that the position change value of the position of the target vanishing point between the kth frame and the (k + 1) th frame is the change value of the longitudinal coordinate of the target vanishing point between the kth frame and the (k + 1) th frame, and the (k + 1) th position is the (k + 1) th longitudinal coordinate:

$$\Delta\theta = \sum_{i=k}^{k+1} \omega_i \cdot \Delta t$$

wherein $\Delta\theta$ is the angle change value, $\omega_k$ is the k-th frame angular velocity output by the target sensor, $\omega_{k+1}$ is the (k + 1) th frame angular velocity output by the target sensor, $\omega_i$ is the i-th angular velocity output by the target sensor between the k-th frame angular velocity and the (k + 1) th frame angular velocity, and $\Delta t$ is the output interval of the target sensor.
14. The method according to any one of claims 1 to 5 and 7 to 11, wherein the determining the distance between the target object and the target camera in the (k + 1) th frame image according to the (k + 1) th position of the target vanishing point comprises:
determining a distance d between a target object in the image of the (k + 1) th frame and the target camera by the following formula when a position change value of the position of the target vanishing point between the (k) th frame and the (k + 1) th frame is a change value of a longitudinal coordinate of the target vanishing point between the (k) th frame and the (k + 1) th frame, and the (k + 1) th position is a (k + 1) th longitudinal coordinate:
$$d = \frac{H \cdot f}{y_b - y_0}$$

wherein $y_b$ is the longitudinal coordinate of the projection of the grounding point of the target object on the longitudinal axis, $y_0$ is the (k + 1) th longitudinal coordinate of the target vanishing point in the (k + 1) th frame image, $H$ is the height of the target camera to the ground, and $f$ is the focal length of the target camera.
15. A range finder apparatus for a target object, comprising:
the first determination module is used for determining an angle change value of a target camera between a kth frame and a (k + 1) th frame according to a kth frame angular velocity and a (k + 1) th frame angular velocity output by a target sensor, wherein the target camera and the target sensor are arranged on a target vehicle, and k is a natural number;
a second determining module, configured to determine, according to the angle change value and the focal length of the target camera, a position change value of a position of a target vanishing point between the kth frame and the (k + 1) th frame;
a third determining module, configured to determine, according to the position change value, a (k + 1) th position of the target vanishing point in a (k + 1) th frame image captured by the target camera when a position determination result of the target vanishing point before the (k + 1) th frame meets a first preset condition;
a fourth determining module, configured to determine, according to the (k + 1) th position of the target vanishing point, a distance between the target object in the (k + 1) th frame of image and the target camera.
CN202011333374.7A 2020-11-25 2020-11-25 Target object ranging method and device Active CN112146620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011333374.7A CN112146620B (en) 2020-11-25 2020-11-25 Target object ranging method and device

Publications (2)

Publication Number Publication Date
CN112146620A (en) 2020-12-29
CN112146620B (en) 2021-03-16

Family

ID=73887308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011333374.7A Active CN112146620B (en) 2020-11-25 2020-11-25 Target object ranging method and device

Country Status (1)

Country Link
CN (1) CN112146620B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907678B (en) * 2021-01-25 2022-05-13 深圳佑驾创新科技有限公司 Vehicle-mounted camera external parameter attitude dynamic estimation method and device and computer equipment
CN114037977B (en) * 2022-01-07 2022-04-26 深圳佑驾创新科技有限公司 Road vanishing point detection method, device, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5752728B2 (en) * 2013-02-28 2015-07-22 富士フイルム株式会社 Inter-vehicle distance calculation device and operation control method thereof
JP6251099B2 (en) * 2014-03-24 2017-12-20 国立大学法人東京工業大学 Distance calculation device
CN111104824A (en) * 2018-10-26 2020-05-05 中兴通讯股份有限公司 Method for detecting lane departure, electronic device, and computer-readable storage medium
CN111191487A (en) * 2018-11-14 2020-05-22 北京市商汤科技开发有限公司 Lane line detection and driving control method and device and electronic equipment

Also Published As

Publication number Publication date
CN112146620A (en) 2020-12-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221021

Address after: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors

Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.

Address before: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.
