CN115841660A - Distance prediction method, device, equipment, storage medium and vehicle - Google Patents


Info

Publication number
CN115841660A
Authority
CN
China
Prior art keywords
detection target
image
distance
detection
vehicle
Prior art date
Legal status
Pending
Application number
CN202211122201.XA
Other languages
Chinese (zh)
Inventor
赵庆会
张一鸣
Current Assignee
Beijing Co Wheels Technology Co Ltd
Original Assignee
Beijing Co Wheels Technology Co Ltd
Application filed by Beijing Co Wheels Technology Co Ltd
Priority to CN202211122201.XA
Publication of CN115841660A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The disclosure relates to a distance prediction method, apparatus, device, storage medium, and vehicle. The method determines the position of a detection target when it first appears in the image acquisition range, based on the image acquired at that moment. It then calculates the distance the target has moved, using the target's movement speed and the time difference between the moment of first appearance and the current moment, and from this computes the distance between the detection target and the vehicle at the current moment. This avoids the error introduced by measuring distance directly from the current frame image, improves the real-time performance and accuracy of the distance prediction method, provides reliable distance data for AEB, and effectively guarantees driving safety.

Description

Distance prediction method, device, equipment, storage medium and vehicle
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a distance prediction method, apparatus, device, storage medium, and vehicle.
Background
In vehicle driver-assistance technology, the Automatic Emergency Braking (AEB) system is a very important component. When the vehicle travels on a road, to ensure the safety of pedestrians and cyclists in front of or around the vehicle, AEB brakes in time when the distance between the own vehicle and a pedestrian or cyclist falls below the safe distance, thereby avoiding traffic accidents.
In the prior art, the distance between a pedestrian or cyclist and the own vehicle is usually obtained from images acquired in real time. However, because detection targets such as pedestrians and cyclists are identified from the image, and further computation time is needed to determine a target's position from the image, the target continues to move relative to the vehicle during this computation. The computed result is therefore the distance between the detection target and the vehicle at the image acquisition time, so the accuracy of this distance measurement method is low, which degrades the effect of AEB and creates safety hazards.
Disclosure of Invention
In order to solve the above technical problems, the present disclosure provides a distance prediction method, apparatus, device, storage medium, and vehicle, which improve the real-time performance and accuracy of distance prediction, provide reliable distance data for AEB, and effectively guarantee driving safety.
In a first aspect, an embodiment of the present disclosure provides a distance prediction method, including:
acquiring an initial detection frame of a detection target in a first image and the movement speed of the detection target, wherein the first image is an image acquired when the detection target appears in an image acquisition range for the first time;
determining first position information of the detection target based on inverse perspective transformation according to the pixel position of the initial detection frame in the first image;
calculating the time difference between the timestamp of the first image and the current moment;
and calculating the distance between the detection target and the vehicle at the current moment according to the movement speed of the detection target, the first position information and the time difference.
In some embodiments, the determining first position information of the detection target based on an inverse perspective transformation according to the pixel position of the initial detection frame in the first image includes:
determining an inverse perspective transformation matrix of the initial detection frame based on camera calibration parameters;
and according to the inverse perspective transformation matrix, performing inverse perspective transformation on the pixel position of the initial detection frame in the first image to obtain first position information of the detection target in a vehicle coordinate system.
In some embodiments, the calculating a distance between the detection target and the host vehicle at the current time according to the movement speed of the detection target, the first position information, and the time difference includes:
calculating the movement distance of the detection target in the time difference according to the absolute speed information and/or the acceleration information of the detection target;
determining second position information of the detection target at the current moment according to the first position information and the movement distance in the time difference;
and determining the distance between the detection target and the self vehicle at the current moment based on the position information of the self vehicle at the current moment and the second position information.
In some embodiments, after calculating the distance between the detection target and the own vehicle at the current time, the method further comprises:
updating the information of the initial detection frame according to the height information of the detection target and the distance between the detection target and the vehicle at the current moment to obtain updated detection frame information;
and displaying the updated detection frame in the current frame image based on the updated detection frame information.
In some embodiments, the updating the pixel position of the initial detection frame according to the height information of the detection target and the distance between the detection target and the vehicle at the current time to obtain updated detection frame information includes:
determining the pixel height of the detection target in the current frame image by combining a pinhole imaging principle according to the height information of the detection target;
and determining the pixel width of the detection target in the current frame image according to the ratio of the pixel height to the pixel width of the initial detection frame.
In some embodiments, before determining the pixel height of the detection target in the current frame image according to the height information of the detection target and combining with the pinhole imaging principle, the method further comprises:
acquiring the real height of the detection target;
and if the real height of the detection target is in any one of a plurality of preset height ranges, determining the height information of the detection target to be the preset height corresponding to the preset height range.
In a second aspect, an embodiment of the present disclosure provides a distance prediction apparatus, including:
the device comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring an initial detection frame of a detection target in a first image and the movement speed of the detection target, and the first image is an image acquired when the detection target appears in an image acquisition range for the first time;
a determining module, configured to determine first position information of the detection target based on an inverse perspective transformation according to a pixel position of the initial detection frame in the first image;
the first calculation module is used for calculating the time difference between the timestamp of the first image and the current moment;
and the second calculation module is used for calculating the distance between the detection target and the vehicle at the current moment according to the movement speed of the detection target, the first position information and the time difference.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method of the first aspect.
In a fifth aspect, the disclosed embodiments also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement the distance prediction method as described above.
In a sixth aspect, the disclosed embodiments also provide a vehicle including the distance prediction apparatus as described above.
According to the distance prediction method, apparatus, device, storage medium, and vehicle provided by the embodiments of the present disclosure, the position of a detection target when it first appears in the image acquisition range is determined based on the image acquired at that moment. The distance the target moves during the elapsed time is then calculated from the target's movement speed and the time difference between the moment of first appearance and the current moment, which yields the target's position at the current moment and, in turn, the distance between the detection target and the vehicle at the current moment. This avoids the error caused by measuring distance directly from the current frame image, improves the real-time performance and accuracy of the distance prediction method, and provides reliable distance data for AEB, thereby effectively guaranteeing driving safety.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments or technical solutions in the prior art of the present disclosure, the drawings used in the description of the embodiments or prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a flowchart of a distance prediction method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of a distance prediction method according to another embodiment of the disclosure;
FIG. 3 is a flowchart of a distance prediction method according to another embodiment of the disclosure;
fig. 4 is a schematic diagram illustrating a principle of calculating a true height of a detected target according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a principle of calculating a true height of another detected target according to an embodiment of the disclosure;
Fig. 6 is a schematic diagram illustrating a principle of calculating a pixel height of a detection target in a current frame image according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a distance prediction apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
The embodiments of the present disclosure provide a distance prediction method, which is described below with reference to specific embodiments.
Fig. 1 is a flowchart of a distance prediction method according to an embodiment of the disclosure. The method can be applied to vehicle-mounted equipment, which may be an in-vehicle head unit, a smartphone, a handheld computer, a tablet computer, a notebook computer, an all-in-one machine, an intelligent driving device, or the like. It can be understood that the distance prediction method provided by the embodiments of the present disclosure may also be applied in other scenarios.
The following describes a distance prediction method shown in fig. 1, which includes the following specific steps:
s101, acquiring an initial detection frame of a detection target in a first image and the movement speed of the detection target, wherein the first image is an image acquired when the detection target appears in an image acquisition range for the first time.
While the vehicle is traveling on the road, the vehicle-mounted equipment acquires images of the environment around the vehicle in real time through a front-facing camera; specifically, it may acquire images within a 120-degree field of view in front of the vehicle. The acquired images are input into a pre-trained detection model, which outputs the one or more detection targets contained in each frame and an initial detection frame corresponding to each detection target. The detection target may be any target within the image acquisition range in front of the vehicle, such as a pedestrian or a cyclist. The initial detection frame corresponding to a detection target is the bounding box of that target in the acquired image, and its shape is generally rectangular.
And training the detection model in advance to obtain the trained detection model. Specifically, the labeled sample image set may be obtained in advance, the sample image is used as the input of the detection model, the labeling result of the sample image is used as the output of the detection model, and the detection model is trained. And the labeling result of the sample image comprises a detection frame of a detection target preset in the sample image. Preferably, a plurality of detection models can be constructed and trained respectively, a test image set is obtained to detect the plurality of detection models, and the detection model with the optimal detection effect is selected from the plurality of detection models.
It is to be understood that the first image may include a plurality of different detection targets, and the number of each detection target may also not be unique, so that one or more initial detection frames may be included in the first image, and the embodiment of the present disclosure is described by taking only one of the initial detection frames as an example.
While detecting targets in the acquired images, the vehicle-mounted equipment stores the information of each detection target in a preset memory buffer. This information includes the target's movement speed, which is obtained by tracking the target and filtering its state with a filter. As subsequent adjacent frames are obtained, information for the same detection target is associated with that target's entry in the memory buffer. Therefore, when the information of a detection target cannot be associated with any target in the memory buffer, that target is appearing in the image acquisition range for the first time, and that frame is the first image corresponding to the target.
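The first-appearance bookkeeping described above can be sketched roughly as follows. This is a minimal illustration, not the disclosed implementation: association is done here by a track ID supplied by the caller, whereas the disclosure associates detections across adjacent frames, and the class and method names are invented for this example.

```python
class TargetBuffer:
    """Minimal sketch of the per-target memory buffer (illustrative;
    not the disclosed implementation)."""

    def __init__(self):
        self._targets = {}  # track_id -> {"first_timestamp", "speed"}

    def observe(self, track_id, speed, timestamp):
        """Record one detection; return True on first appearance."""
        first_seen = track_id not in self._targets
        if first_seen:
            # this frame is the target's "first image"
            self._targets[track_id] = {"first_timestamp": timestamp,
                                       "speed": speed}
        else:
            # existing target: update the (filtered) speed estimate
            self._targets[track_id]["speed"] = speed
        return first_seen


buf = TargetBuffer()
assert buf.observe(7, speed=1.2, timestamp=0.000) is True   # first appearance
assert buf.observe(7, speed=1.3, timestamp=0.033) is False  # associated
```

The stored `first_timestamp` is what step S103 later compares against the current time.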
In some embodiments, the data in the preset memory buffer is emptied or updated at preset intervals, and then for each detection target, a frame with the earliest timestamp in the corresponding image is selected as the first image of the detection target.
S102, according to the pixel position of the initial detection frame in the first image, determining first position information of the detection target based on inverse perspective transformation.
The pixel position of the initial detection frame in the first image is the position of the pixel point corresponding to the initial detection frame in the first image, and specifically may be a coordinate position of a certain point on the initial detection frame or a pixel point corresponding to a certain point inside the initial detection frame in the first image. Optionally, the coordinate position of the pixel point corresponding to the bottom edge of the initial detection frame in the first image may be used as the pixel coordinate of the initial detection frame in the first image, or the coordinate positions of the pixel points corresponding to the four corners of the initial detection frame in the first image may be used as the pixel positions of the initial detection frame in the first image.
The inverse perspective transformation is the inverse process of perspective transformation, and mainly maps a certain point in an image from an image coordinate system to a world coordinate system or a coordinate system with a self-vehicle as an origin by combining camera parameters, so that the interference and the error of perspective influence on image detection and identification tasks are eliminated. The corresponding relation between the image coordinate system and the world coordinate system or the coordinate system with the self-vehicle as the origin can be obtained by adopting inverse perspective transformation, then the pixel position in the image coordinate system is corresponded, and the position of the pixel position corresponding to the real world, namely the first position information of the detection target is obtained.
And S103, calculating the time difference between the time stamp of the first image and the current time.
When the vehicle-mounted equipment acquires images of the surrounding environment through the front-facing camera, it records the acquisition time of each image; this time is the timestamp of the corresponding image. The moment at which the first image was acquired can thus be obtained from its timestamp, and the time difference between that moment and the current time is calculated.
And S104, calculating the distance between the detection target and the vehicle at the current moment according to the movement speed of the detection target, the first position information and the time difference.
The vehicle is equipped with a sensor for collecting information of the surrounding environment of the vehicle, such as one or more of a camera, a vision sensor, a laser radar, a millimeter wave radar, and an ultrasonic radar, but is not limited thereto. During the running process of the vehicle, the vehicle-mounted equipment acquires the environmental information around the vehicle in real time through the sensor, wherein the environmental information comprises the movement speed of the detection target.
The running-state information of the vehicle includes, but is not limited to, positioning information and kinematic information. The positioning information of the vehicle can be obtained through the Global Positioning System (GPS) or another positioning system. The kinematic information includes, but is not limited to, the vehicle's speed, acceleration, and steering angle, and may be collected by speed sensors, acceleration sensors, inertial measurement units, and the like. Based on the positioning information, the position of the own vehicle at the current time can be determined, including, but not limited to, its coordinates in a world coordinate system.
According to the movement speed of the detection target and the time difference between the time of acquiring the first image and the current time, the distance of the detection target moving in the direction in the time period corresponding to the time difference, namely the moving direction and the moving distance of the detection target from the first position can be calculated. On the basis of the first position information of the detection target, the second position information of the detection target at the current moment can be determined by combining the moving direction and the moving distance of the detection target between the moment of acquiring the first image and the current moment. Further, position information of the vehicle at the current time is obtained, and the distance between the detection target and the vehicle at the current time is calculated according to second position information of the detection target and the position information of the vehicle at the current time.
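Under a constant-velocity assumption (the simplest case of the extrapolation just described; the disclosure also allows using acceleration), the computation reduces to a few lines. The function name and the 2-D ground-plane representation are choices made for this sketch:

```python
import math

def predict_distance(first_pos, velocity, dt, ego_pos):
    """Extrapolate the target from its first-appearance position over
    the time difference dt, then measure the distance to the own
    vehicle. All vectors are (x, y) in a common ground-plane frame."""
    cur_x = first_pos[0] + velocity[0] * dt  # second position at current time
    cur_y = first_pos[1] + velocity[1] * dt
    return math.hypot(cur_x - ego_pos[0], cur_y - ego_pos[1])


# target first seen 10 m ahead and 2 m to the side, closing at 1 m/s
d = predict_distance((10.0, 2.0), (-1.0, 0.0), dt=0.5, ego_pos=(0.0, 0.0))
assert abs(d - math.hypot(9.5, 2.0)) < 1e-9
```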
In some embodiments, the distance between the detection target and the own vehicle at the current moment may also be calculated according to the relative movement speed of the detection target and the own vehicle, in combination with the first position information and the time difference, which is not limited in this disclosure.
The embodiment of the disclosure determines the position of the detection target when it first appears in the image acquisition range, based on the image acquired at that moment. It then calculates the distance the target moves during the elapsed time, from the target's movement speed and the time difference between the moment of first appearance and the current moment, thereby determining the target's position at the current moment and, further, the distance between the target and the vehicle at the current moment. This avoids the error caused by measuring distance directly from the current frame image, improves the real-time performance and accuracy of the distance prediction method, and provides reliable distance data for AEB, effectively guaranteeing driving safety.
Fig. 2 is a flowchart of a distance prediction method according to another embodiment of the disclosure. As shown in fig. 2, the method includes the following steps:
s201, acquiring an initial detection frame of a detection target in a first image and the movement speed of the detection target, wherein the first image is acquired when the detection target appears in an image acquisition range for the first time.
S202, determining an inverse perspective transformation matrix of the initial detection frame based on camera calibration parameters.
S203, according to the inverse perspective transformation matrix, performing inverse perspective transformation on the pixel position of the initial detection frame in the first image to obtain first position information of the detection target in a vehicle coordinate system.
The camera calibration parameters comprise the intrinsic and extrinsic parameters of the camera. The intrinsic parameters include the camera's focal length and pixel size; the extrinsic parameters include the camera's position, pitch angle, and so on. The inverse perspective transformation matrix is computed from these camera parameters, and multiplying the pixel coordinates by this matrix yields the first position information of the detection target in the vehicle coordinate system, which is the coordinate system with the own vehicle as the origin. The specific calculation is as follows:
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = M \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$
where M is the inverse perspective transformation matrix; x, y, and z are the coordinate values of the detection target in the vehicle coordinate system; and u and v are the pixel coordinate values of the initial detection frame in the first image.
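In homogeneous coordinates this mapping is one matrix multiply followed by normalization. The matrix values below are made up for illustration; a real M comes from the camera calibration:

```python
# Hypothetical inverse-perspective matrix; real values depend on the
# camera intrinsics and extrinsics obtained from calibration.
M = [[0.02, 0.00, -6.4],
     [0.00, 0.02, -4.8],
     [0.00, 0.00,  1.0]]

def pixel_to_vehicle(u, v):
    """Map a pixel (u, v), e.g. the midpoint of the detection frame's
    bottom edge, to ground-plane coordinates in the vehicle frame."""
    p = (u, v, 1.0)  # homogeneous pixel coordinates
    x, y, w = (sum(m_i * p_i for m_i, p_i in zip(row, p)) for row in M)
    return x / w, y / w  # homogeneous normalization


x, y = pixel_to_vehicle(640, 480)
```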
And S204, calculating the movement distance of the detection target in the time difference according to the absolute speed information and/or the acceleration information of the detection target.
S205, according to the first position information and the movement distance in the time difference, second position information of the detection target at the current moment is determined.
And S206, determining the distance between the detection target and the self vehicle at the current moment based on the position information of the self vehicle at the current moment and the second position information.
The speed information of the detection target obtained by the vision-based speed measurement system comprises absolute speed information of the detection target and acceleration information of the detection target. Absolute velocity information is the velocity of the detected object relative to the ground. According to the absolute speed information and/or the acceleration information of the detection target and the elapsed time between the moment of acquiring the first image and the current moment of the detection target, the moving direction and the moving distance of the detection target from the first position in the period of time can be calculated. Further, position information of the vehicle at the current time is obtained, and the distance between the detection target and the vehicle at the current time is calculated according to second position information of the detection target and the position information of the vehicle at the current time.
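With acceleration included, the movement distance over the time difference follows the constant-acceleration kinematics implied above, s = v·Δt + a·Δt²/2. A one-line sketch (function name chosen for this example):

```python
def movement_distance(v0, a, dt):
    """Distance moved over dt given absolute speed v0 and (possibly
    zero) acceleration a, under a constant-acceleration model."""
    return v0 * dt + 0.5 * a * dt * dt


assert movement_distance(2.0, 0.0, 0.5) == 1.0  # constant velocity only
assert movement_distance(2.0, 1.0, 2.0) == 6.0  # 4 m + 2 m
```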
Through the absolute speed information and/or acceleration information of the detection target, this embodiment accurately calculates the direction and distance the target moves between the moment the first image is acquired and the current moment, and from this the distance between the target and the vehicle at the current moment. This avoids the error caused by measuring distance directly from the current frame image, further improves the real-time performance and accuracy of the distance prediction method, and provides reliable distance data for AEB, thereby effectively guaranteeing driving safety.
Fig. 3 is a flowchart of a distance prediction method according to another embodiment of the disclosure. As shown in fig. 3, the method includes the following steps:
s301, acquiring an initial detection frame of a detection target in a first image and the movement speed of the detection target, wherein the first image is acquired when the detection target appears in an image acquisition range for the first time.
S302, according to the pixel position of the initial detection frame in the first image, determining first position information of the detection target based on inverse perspective transformation.
And S303, calculating the time difference between the time stamp of the first image and the current time.
S304, calculating the distance between the detection target and the vehicle at the current moment according to the movement speed of the detection target, the first position information and the time difference.
Specifically, the implementation processes and principles of S301 to S304 are consistent with those of S101 to S104, and are not described herein again.
S305, determining the pixel height of the detection target in the current frame image according to the height information of the detection target and by combining a pinhole imaging principle.
Height data of the detection target in the real world is acquired through a vision sensor and Kalman-filtered to obtain the target's real-world height information. Optionally, before this step, the real height of the detection target may first be obtained; if that real height falls within any one of a plurality of preset height ranges, the height information of the target is determined to be the preset height corresponding to that range. Specifically, besides acquiring real-world height data with the vision sensor, the real height of the detection target may be calculated from already-acquired images by the following methods.
Fig. 4 is a schematic diagram illustrating a principle of calculating the true height of a detected target according to an embodiment of the present disclosure. As shown in fig. 4, the point P(X, Y, Z) is the highest point of the detection target, where Z represents the target's real height. The point p(y, z) is the pixel coordinate of point P in the acquired image, where z is the pixel height of the detection frame corresponding to the target. Restoring the midpoint of the detection frame's bottom edge to the real world through inverse perspective transformation determines the geometric center of the target's projection on the ground; combined with the camera's intrinsic and extrinsic parameters, this gives the target's position relative to the vehicle, i.e., the X and Y values of point P. According to the similar-triangle principle, the real height Z of the detection target can be obtained from the radial distance X of the target relative to the vehicle, the camera focal length f, and the detection-frame height z, namely:
$$Z = \frac{X \cdot z}{f}$$
Fig. 5 is a schematic diagram illustrating a principle of calculating the true height of another detected target according to an embodiment of the present disclosure. As shown in fig. 5, taking a pedestrian as the detected target, the real height H of the target can be calculated from the camera installation height cam_h, the difference Δh between the longitudinal pixel distances from the midpoint of the detection frame's bottom edge to the vanishing point, and the detection-frame height h. The vanishing point is the point where two parallel straight lines intersect in the image after perspective transformation; in images collected by the vehicle-mounted device, this point generally lies on the horizon. Namely:
H = (cam_h × h) / Δh
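A minimal sketch of the vanishing-point relation, under the assumption that the vanishing point sits at camera height on the horizon so that H / cam_h = h / Δh (the function and argument names are illustrative, not from the patent):

```python
def real_height_vp(cam_h_m: float, box_height_px: float, delta_h_px: float) -> float:
    """Estimate real height from the camera mounting height cam_h, the box's
    pixel height h, and the pixel distance delta_h from the box's bottom-edge
    midpoint to the vanishing point: H = cam_h * h / delta_h."""
    return cam_h_m * box_height_px / delta_h_px
```

For instance, with a 1.5 m camera height, a 170 px box, and Δh = 150 px, the estimate is 1.5 × 170 / 150 = 1.7 m.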
A plurality of height ranges are preset, and each height range is assigned a corresponding preset height. If the real height of the detection target falls within any one of the preset height ranges, the height information of the detection target is taken as the preset height corresponding to that range. Optionally, different preset height ranges may be set for different types of detection targets. For example, for a pedestrian, the preset height ranges are set as: less than 1.6 m, 1.6 m-1.8 m, and more than 1.8 m, with corresponding preset heights of 1.6 m, 1.7 m, and 1.8 m respectively. If the real height of a certain pedestrian is 1.65 m, which falls within the preset range of 1.6 m-1.8 m, the preset height of 1.7 m is used as the height information of that pedestrian as the detection target.
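The discretization above can be sketched as a simple range lookup. The table uses the pedestrian example from the text; the fallback for values outside all ranges is an assumption made here for completeness:

```python
# (lo, hi, preset) height ranges for pedestrians, as in the example above
PEDESTRIAN_RANGES = [(0.0, 1.6, 1.6), (1.6, 1.8, 1.7), (1.8, float("inf"), 1.8)]

def discretize_height(real_h: float, ranges=PEDESTRIAN_RANGES) -> float:
    """Map a measured real height onto the preset height of its range."""
    for lo, hi, preset in ranges:
        if lo <= real_h < hi:
            return preset
    return real_h  # outside all ranges: fall back to the measured value
```

A 1.65 m pedestrian thus gets the preset height 1.7 m, matching the worked example in the text.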
Fig. 6 is a schematic diagram illustrating the principle of calculating the pixel height of a detection target in the current frame image according to an embodiment of the present disclosure. As shown in fig. 6, in this step the height information H of the detection target is obtained according to the method described above; the camera focal length f is obtained from the camera internal parameters; and the distance d between the detection target and the vehicle at the current moment is obtained in steps S301 to S304. The pixel height h of the detection target in the current frame image can then be calculated from the similar-triangle relation of the pinhole imaging model. The specific calculation formula is as follows:
h = (f × H) / d
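This pinhole relation is the inverse of the height-estimation step: given the (possibly discretized) real height and the predicted distance, project back to pixels. A minimal sketch with illustrative names:

```python
def pixel_height(height_m: float, focal_px: float, dist_m: float) -> float:
    """Pinhole model: pixel height h = focal length f * real height H / distance d."""
    return focal_px * height_m / dist_m
```

For example, a 1.7 m pedestrian at 17 m with a 1000 px focal length projects to 1000 × 1.7 / 17 = 100 px.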
S306, determining the pixel width of the detection target in the current frame image according to the ratio of the pixel height to the pixel width of the initial detection frame.
For the same detection target, the ratio of the pixel height to the pixel width of the corresponding detection frame is fixed regardless of the target's position relative to the vehicle. Therefore, the pixel width of the detection target in the current frame image can be obtained from the height-to-width ratio of the initial detection frame in the first image together with the pixel height of the detection target in the current frame image.
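The fixed-aspect-ratio update can be sketched as follows; the function name and tuple return shape are illustrative assumptions:

```python
def updated_box_size(init_h_px: float, init_w_px: float, new_h_px: float) -> tuple:
    """Scale the box width so the height/width ratio of the initial
    detection frame is preserved at the new pixel height."""
    return new_h_px, new_h_px * init_w_px / init_h_px
```

So an initial 100 × 40 px box whose new pixel height is 50 px gets a new width of 20 px, keeping the 2.5:1 ratio.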
And S307, displaying the updated detection frame in the current frame image.
And displaying the updated detection frame in the current frame image based on the distance between the detection target and the vehicle at the current moment and the pixel height and the pixel width of the detection target in the current frame image.
According to the embodiment of the disclosure, a plurality of preset height ranges are set, and the preset height corresponding to a range is used as the height information of any detection target whose real height falls within that range, thereby discretizing the real height of the detection target. This allows detection targets to be classified by height, for example distinguishing children from adults. It also simplifies the calculation of the detection-frame height in the current frame image, improves calculation efficiency, and ensures the real-time performance and accuracy of the updated detection frame displayed in the current frame image.
Fig. 7 is a schematic structural diagram of a distance prediction apparatus according to an embodiment of the present disclosure. The distance prediction means may be the in-vehicle device described in the above embodiment, or the distance prediction means may be a component or assembly in the in-vehicle device. The distance prediction apparatus provided in the embodiment of the present disclosure may execute the processing procedure provided in the embodiment of the distance prediction method, as shown in fig. 7, the distance prediction apparatus 70 includes: an acquisition module 71, a determination module 72, a first calculation module 73, a second calculation module 74; the acquiring module 71 is configured to acquire an initial detection frame of a detection target in a first image and a motion speed of the detection target, where the first image is an image acquired when the detection target first appears in an image acquisition range; the determining module 72 is configured to determine first position information of the detection target based on an inverse perspective transformation according to a pixel position of the initial detection frame in the first image; the first calculating module 73 is configured to calculate a time difference between the timestamp of the first image and the current time; the second calculating module 74 is configured to calculate a distance between the detected object and the vehicle at the current time according to the moving speed of the detected object, the first position information, and the time difference.
Optionally, the determining module 72 includes a first determining unit 721, a transforming unit 722; the first determining unit 721 is configured to determine an inverse perspective transformation matrix of the initial detection frame based on the camera calibration parameters; the transformation unit 722 is configured to perform inverse perspective transformation on the pixel position of the initial detection frame in the first image according to the inverse perspective transformation matrix, so as to obtain first position information of the detection target in a vehicle coordinate system.
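The inverse perspective transformation performed by the transforming unit amounts to mapping a pixel through a 3×3 homography and dehomogenizing. The following is a minimal sketch of that mapping only; the actual matrix would come from the camera calibration parameters, which are not given here, so the function and matrix names are assumptions:

```python
def apply_homography(h_inv, u: float, v: float) -> tuple:
    """Map pixel (u, v) through a 3x3 inverse-perspective matrix h_inv
    (row-major nested lists) to ground-plane coordinates (x, y)."""
    x = h_inv[0][0] * u + h_inv[0][1] * v + h_inv[0][2]
    y = h_inv[1][0] * u + h_inv[1][1] * v + h_inv[1][2]
    w = h_inv[2][0] * u + h_inv[2][1] * v + h_inv[2][2]
    return x / w, y / w  # dehomogenize
```

With the identity matrix, the mapping is a no-op; a real calibration-derived matrix would send the bottom-edge midpoint of the detection frame to its position in the vehicle coordinate system.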
Optionally, the second calculating module 74 includes a first calculating unit 741, a second determining unit 742, and a third determining unit 743; the first calculation unit 741 is configured to calculate a movement distance of the detection target within the time difference according to absolute velocity information and/or acceleration information of the detection target; the second determining unit 742 is configured to determine second position information of the detection target at the current time according to the first position information and the movement distance in the time difference; the third determining unit 743 is configured to determine the distance between the detection target and the vehicle at the current time based on the position information of the vehicle at the current time and the second position information.
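The three units above (movement distance over the time difference, second position, distance to the vehicle) can be sketched end to end with constant-acceleration kinematics. This is an illustrative reconstruction under the assumption of 2D vehicle-frame coordinates, not the patent's implementation:

```python
import math

def predict_distance(first_pos, velocity, accel, dt, ego_pos=(0.0, 0.0)):
    """Second position = first position + v*dt + 0.5*a*dt^2 (per axis);
    the predicted distance is the Euclidean norm to the ego vehicle."""
    x = first_pos[0] + velocity[0] * dt + 0.5 * accel[0] * dt * dt
    y = first_pos[1] + velocity[1] * dt + 0.5 * accel[1] * dt * dt
    return math.hypot(x - ego_pos[0], y - ego_pos[1])
```

For example, a target first seen 10 m ahead, moving away at 2 m/s with no acceleration, is predicted to be 20 m away 5 s later.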
Optionally, the distance prediction apparatus 70 further includes an updating module 75 and a display module 76; the updating module 75 is configured to update the information of the initial detection frame according to the height information of the detection target and the distance between the detection target and the vehicle at the current time, so as to obtain updated detection frame information; the display module 76 is configured to display the updated detection frame in the current frame image based on the updated detection frame information.
Optionally, the updating module 75 includes a height determining unit 751, a width determining unit 752; the height determining unit 751 is used for determining the pixel height of the detection target in the current frame image according to the height information of the detection target and by combining a pinhole imaging principle; the width determining unit 752 is configured to determine a pixel width of the detection target in the current frame image according to a ratio of a pixel height to a width of the initial detection frame.
Optionally, the height determining unit 751 is further configured to obtain a true height of the detection target; and if the real height of the detection target is in any one of a plurality of preset height ranges, determining the height information of the detection target to be the preset height corresponding to the preset height range.
The distance prediction apparatus in the embodiment shown in fig. 7 can be used to implement the technical solutions of the above method embodiments, and the implementation principle and technical effects are similar, which are not described herein again.
In addition, the embodiment of the disclosure also provides a vehicle, which includes the distance prediction device as described in the above embodiment.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may be the in-vehicle device described in the above embodiment. The electronic device provided in the embodiment of the present disclosure may execute the processing procedure provided in the embodiment of the distance prediction method. As shown in fig. 8, the electronic device 80 includes: a memory 81, a processor 82, a computer program, and a communication interface 83; wherein the computer program is stored in the memory 81 and is configured to be executed by the processor 82 to perform the distance prediction method as described above.
In addition, the embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the distance prediction method described in the above embodiment.
Furthermore, the embodiments of the present disclosure also provide a computer program product, which includes a computer program or instructions that, when executed by a processor, implement the distance prediction method as described above.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of distance prediction, the method comprising:
acquiring an initial detection frame of a detection target in a first image and the movement speed of the detection target, wherein the first image is an image acquired when the detection target appears in an image acquisition range for the first time;
determining first position information of the detection target based on inverse perspective transformation according to the pixel position of the initial detection frame in the first image;
calculating the time difference between the timestamp of the first image and the current moment;
and calculating the distance between the detection target and the self vehicle at the current moment according to the movement speed of the detection target, the first position information and the time difference.
2. The method of claim 1, wherein the determining the first position information of the detection target based on an inverse perspective transformation according to the pixel position of the initial detection frame in the first image comprises:
determining an inverse perspective transformation matrix of the initial detection frame based on camera calibration parameters;
and according to the inverse perspective transformation matrix, performing inverse perspective transformation on the pixel position of the initial detection frame in the first image to obtain first position information of the detection target in a vehicle coordinate system.
3. The method according to claim 1, wherein calculating the distance between the detection target and the host vehicle at the current moment according to the movement speed of the detection target, the first position information and the time difference comprises:
calculating the movement distance of the detection target in the time difference according to the absolute speed information and/or the acceleration information of the detection target;
determining second position information of the detection target at the current moment according to the first position information and the movement distance in the time difference;
and determining the distance between the detection target and the own vehicle at the current moment based on the position information of the own vehicle at the current moment and the second position information.
4. The method of claim 1, wherein after calculating the distance of the detection target from the host vehicle at the current time, the method further comprises:
updating the information of the initial detection frame according to the height information of the detection target and the distance between the detection target and the vehicle at the current moment to obtain updated detection frame information;
and displaying the updated detection frame in the current frame image based on the updated detection frame information.
5. The method according to claim 4, wherein the updating the information of the initial detection frame according to the height information of the detection target and the distance between the detection target and the vehicle at the current time to obtain updated detection frame information comprises:
determining the pixel height of the detection target in the current frame image by combining a pinhole imaging principle according to the height information of the detection target;
and determining the pixel width of the detection target in the current frame image according to the ratio of the pixel height to the pixel width of the initial detection frame.
6. The method of claim 5, wherein before the pixel height of the detection target in the current frame image is determined according to the height information of the detection target in combination with the pinhole imaging principle, the method further comprises:
acquiring the real height of the detection target;
and if the real height of the detection target is in any one of a plurality of preset height ranges, determining the height information of the detection target to be the preset height corresponding to the preset height range.
7. A distance prediction apparatus, characterized in that the apparatus comprises:
the device comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring an initial detection frame of a detection target in a first image and the movement speed of the detection target, and the first image is an image acquired when the detection target appears in an image acquisition range for the first time;
a determining module, configured to determine first position information of the detection target based on an inverse perspective transformation according to a pixel position of the initial detection frame in the first image;
the first calculation module is used for calculating the time difference between the timestamp of the first image and the current moment;
and the second calculation module is used for calculating the distance between the detection target and the vehicle at the current moment according to the movement speed of the detection target, the first position information and the time difference.
8. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-6.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
10. A vehicle, comprising: a distance prediction apparatus as claimed in claim 7.
CN202211122201.XA 2022-09-15 2022-09-15 Distance prediction method, device, equipment, storage medium and vehicle Pending CN115841660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211122201.XA CN115841660A (en) 2022-09-15 2022-09-15 Distance prediction method, device, equipment, storage medium and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211122201.XA CN115841660A (en) 2022-09-15 2022-09-15 Distance prediction method, device, equipment, storage medium and vehicle

Publications (1)

Publication Number Publication Date
CN115841660A true CN115841660A (en) 2023-03-24

Family

ID=85574933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211122201.XA Pending CN115841660A (en) 2022-09-15 2022-09-15 Distance prediction method, device, equipment, storage medium and vehicle

Country Status (1)

Country Link
CN (1) CN115841660A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824458A (en) * 2023-08-28 2023-09-29 中国民用航空飞行学院 Airport runway intrusion prevention method and system
CN116824458B (en) * 2023-08-28 2023-11-24 中国民用航空飞行学院 Airport runway intrusion prevention method and system

Similar Documents

Publication Publication Date Title
US11967109B2 (en) Vehicle localization using cameras
US11094112B2 (en) Intelligent capturing of a dynamic physical environment
US20170217433A1 (en) Tracking objects within a dynamic environment for improved localization
JP6520740B2 (en) Object detection method, object detection device, and program
US20160055385A1 (en) Systems and methods for detecting traffic signs
CN111179300A (en) Method, apparatus, system, device and storage medium for obstacle detection
US20180260639A1 (en) Systems and methods for detecting traffic signs
CN110967018B (en) Parking lot positioning method and device, electronic equipment and computer readable medium
Cao et al. Amateur: Augmented reality based vehicle navigation system
CN114037972A (en) Target detection method, device, equipment and readable storage medium
CN109883439A (en) A kind of automobile navigation method, device, electronic equipment and storage medium
CN111160132B (en) Method and device for determining lane where obstacle is located, electronic equipment and storage medium
CN114248778A (en) Positioning method and positioning device of mobile equipment
CN111103584A (en) Device and method for determining height information of objects in the surroundings of a vehicle
CN115841660A (en) Distance prediction method, device, equipment, storage medium and vehicle
US12067790B2 (en) Method and system for identifying object
CN116524454A (en) Object tracking device, object tracking method, and storage medium
US20220101025A1 (en) Temporary stop detection device, temporary stop detection system, and recording medium
US20240249493A1 (en) Systems and methods for detecting a driving area in a video
CN115641567B (en) Target object detection method and device for vehicle, vehicle and medium
CN118062016B (en) Vehicle environment sensing method, apparatus and storage medium
US20240144612A1 (en) Vehicle ar display device and ar service platform
CN115661028A (en) Distance detection method, device, equipment, storage medium and vehicle
WO2020073268A1 (en) Snapshot image to train roadmodel
WO2020073270A1 (en) Snapshot image of traffic scenario

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination