CN110836656B - Anti-shake distance measuring method and device for monocular ADAS (Advanced Driver Assistance System) and electronic equipment - Google Patents


Info

Publication number
CN110836656B
CN110836656B (application CN201810929374.XA)
Authority
CN
China
Prior art keywords: vehicle, shake, determining, unit, feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810929374.XA
Other languages
Chinese (zh)
Other versions
CN110836656A (en)
Inventor
潘铭星
储刘火
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Horizon Robotics Science and Technology Co Ltd
Original Assignee
Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Horizon Robotics Science and Technology Co Ltd filed Critical Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority to CN201810929374.XA priority Critical patent/CN110836656B/en
Publication of CN110836656A publication Critical patent/CN110836656A/en
Application granted granted Critical
Publication of CN110836656B publication Critical patent/CN110836656B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00 Measuring distances in line of sight; Optical rangefinders
    • G01C 25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass

Abstract

The application relates to an anti-shake distance measurement method and device for monocular ADAS and electronic equipment. According to an embodiment, there is provided a method for determining an amount of shake of a vehicle, comprising: acquiring a multi-frame image of the surrounding environment of the vehicle by using a vehicle-mounted camera; determining a first shake of the vehicle based on the plurality of frame images; determining a second shake of the vehicle with an Inertial Measurement Unit (IMU); and fusing the first and second shakes to determine a shake of the vehicle. By determining the amount of shake of the vehicle, errors due to shake can be compensated during monocular vision ranging, thereby enabling more accurate ranging.

Description

Anti-shake distance measuring method and device for monocular ADAS (Advanced Driver Assistance System) and electronic equipment
Technical Field
The present application relates generally to the field of Advanced Driver Assistance Systems (ADAS), and more particularly to a monocular ADAS visual-ranging anti-shake method, apparatus, and electronic device based on an Inertial Measurement Unit (IMU).
Background
In recent years, automated driving and Advanced Driver Assistance Systems (ADAS) have received extensive attention and intensive research. An ADAS system uses various on-board sensors to sense the state of the vehicle itself and its surrounding environment, so as to make driving-strategy decisions and ultimately implement automated driving functions. Examples of such vehicle-mounted sensors include cameras, lidar, ultrasonic radar, and the like; cameras, also referred to as vehicle-mounted cameras, are widely used due to their low price and versatility. For example, cameras may be used for visual ranging for collision avoidance, lane tracking, SLAM mapping, etc.
Cameras can generally be classified into monocular and binocular (multi-view) cameras. A binocular camera can directly obtain depth information of the photographed scene through multi-view ranging, and thus can readily be used for ranging and other applications. An image shot by a monocular camera, however, contains no depth information, so the monocular image needs specific processing to realize a ranging function. Typical processing methods currently fall into two types: calculating the distance to the target vehicle from the pixels occupied by the vehicle width in the image, and calculating the distance from the relative position of the target vehicle in the image. Both have drawbacks. Because vehicle width differs greatly across vehicle types, a distance estimated from the width of the vehicle in the image may have poor accuracy; the first method is also easily affected by camera intrinsic parameters, actual distance, and other conditions, so its error is large. The second method can measure distance for different vehicle types by judging relative position; however, the relative position of the leading vehicle in the image is influenced by the pitching fluctuation of the host vehicle, and the relative position of a distant vehicle is affected especially strongly.
Therefore, there is still a need for a method that can achieve accurate ranging using a monocular camera.
Disclosure of Invention
The present application is proposed to solve the above technical problems. Embodiments of the application provide a monocular ADAS visual-ranging anti-shake method, apparatus, and electronic device based on an Inertial Measurement Unit (IMU), which accurately estimate the pitching (bumping) of the vehicle by combining vision with the IMU and compensate the relative position of the leading vehicle in the image, thereby ensuring ranging precision.
According to one aspect of the present application, there is provided a method for determining an amount of shake of a vehicle, comprising: acquiring a multi-frame image of the surrounding environment of the vehicle by using a vehicle-mounted camera; determining a first shake of the vehicle based on the plurality of frame images; determining a second shake of the vehicle with an Inertial Measurement Unit (IMU); and fusing the first and second shakes to determine a shake of the vehicle.
In some embodiments, the onboard camera is a monocular camera.
In some embodiments, determining the first shake of the vehicle based on the plurality of frames of images comprises: extracting a plurality of first feature points (P1) from a first frame image of the plurality of frame images; determining a plurality of second feature points (P2) corresponding to the plurality of first feature points from a second frame image of the plurality of frame images; and calculating the first shake based on the plurality of first feature points and the plurality of second feature points.
In some embodiments, prior to determining the second shake of the vehicle with an Inertial Measurement Unit (IMU), the method further includes calibrating the extrinsic parameters of the IMU.
In some embodiments, determining the second shake of the vehicle with an Inertial Measurement Unit (IMU) includes: determining an included angle between the vehicle and the ground by using the measurement value of the inertia measurement unit; and determining a second shake of the vehicle based on the included angle.
In some embodiments, fusing the first and second shakes to determine the shake of the vehicle comprises: calculating reprojection errors between the plurality of first feature points and the plurality of second feature points; determining a confidence of the first shake based on the reprojection errors; and fusing the first shake and the second shake with the confidence to determine the shake of the vehicle.
According to another aspect of the present application, there is provided a method for determining a distance between a current vehicle and a target vehicle, comprising: determining the shake of the current vehicle using the method described above; obtaining a ranging image with an onboard camera, the ranging image including the target vehicle; determining a first position of the target vehicle in the ranging image; compensating the first position with the shake of the current vehicle to obtain a second position; and determining the distance between the target vehicle and the current vehicle based on the second position.
In some embodiments, the ranging image is one of a plurality of frame images of the environment surrounding the vehicle.
According to another aspect of the present application, there is provided an apparatus for determining a shake amount of a vehicle, including: an image acquisition unit for acquiring multi-frame images of the surrounding environment of the vehicle captured by a vehicle-mounted camera; a first shake calculation unit that calculates a first shake of the vehicle based on the plurality of frame images; a second shake calculation unit that calculates a second shake of the vehicle based on measurement data of an Inertial Measurement Unit (IMU); and a fusion unit configured to fuse the first shake and the second shake to determine the shake of the vehicle.
In some embodiments, the first shake calculation unit includes: a feature point extraction unit for extracting a plurality of first feature points (P1) from a first frame image of the multi-frame images; a tracking unit for tracking the plurality of first feature points to determine a corresponding plurality of second feature points (P2) in a second frame image of the plurality of frame images; and a shake calculation unit configured to calculate the first shake based on the plurality of first feature points and the plurality of second feature points.
In some embodiments, the apparatus further comprises an IMU calibration unit for calibrating the extrinsic parameters of the IMU.
In some embodiments, the second shake calculation unit includes: a ground angle calculation unit for calculating an included angle between the vehicle and the ground based on the measurement data of the inertial measurement unit; and a shake calculation unit for calculating the second shake based on the included angle.
In some embodiments, the fusion unit comprises: a reprojection error calculation unit configured to calculate reprojection errors between the plurality of first feature points and the plurality of second feature points; a confidence calculation unit for determining a confidence of the first shake based on the reprojection errors; and a shake calculation unit that calculates the shake of the vehicle based on the confidence, the first shake, and the second shake.
According to another aspect of the present application, there is provided an apparatus for determining a distance between a current vehicle and a target vehicle, comprising: the above-described apparatus for determining a shake amount of a vehicle, configured to determine a shake amount of a current vehicle; the image acquisition unit is used for acquiring a ranging image shot by a vehicle-mounted camera, and the ranging image comprises the target vehicle; a first position determination unit for determining a first position of the target vehicle in the ranging image; a compensation unit for compensating the first position based on the shake of the current vehicle to determine a second position; and a distance determination unit for determining a distance between the target vehicle and the current vehicle based on the second position.
According to another aspect of the present application, there is provided an electronic device including: a processor; and a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the above method.
According to another aspect of the present application, there is provided a vehicle including the above-described electronic apparatus.
According to another aspect of the application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the above method.
Compared with the prior art, the embodiment of the invention realizes a plurality of beneficial technical effects. For example, the IMU unit can avoid the limitation of visual anti-shake in the aspects of illumination conditions, image blurring and the like, and the visual anti-shake can avoid the precision defect caused by accumulated errors of the IMU unit. Moreover, the invention can realize monocular distance measurement, thereby saving hardware cost.
Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings. The drawings are included to provide a further understanding of the exemplary embodiments and are incorporated in and constitute a part of this specification; they illustrate the principles of the application together with the exemplary embodiments but do not limit the application in any way. In the drawings, the same or similar reference numbers generally represent the same or similar parts or steps.
Fig. 1 is a schematic diagram illustrating an application scenario of vehicle anti-shake according to an embodiment of the present application.
FIG. 2 illustrates a flow chart of a method for determining an amount of shake of a vehicle according to an embodiment of the present application.
Fig. 3 illustrates a schematic diagram of determining an amount of visual jitter in the method of fig. 2.
Fig. 4 shows a flow chart of jitter fusion steps in the method of fig. 2.
Fig. 5 shows a flow chart of a visual ranging method according to an embodiment of the present application.
Fig. 6 shows a functional block diagram of a vehicle shake determination apparatus according to an embodiment of the present application.
FIG. 7 shows a functional block diagram of a visual ranging apparatus according to an embodiment of the present application.
Fig. 8 shows a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are merely some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited in any way by the exemplary embodiments described herein.
Summary of the application
As mentioned above, current monocular visual ranging schemes, which generally estimate the distance of the leading vehicle based on its width or relative position in the image, have certain disadvantages. For a width-based solution, the width differs between vehicles, and the solution is also susceptible to camera parameters, actual distance, and the like, so its error is large. A solution based on relative position is easily affected by pitching fluctuation of the host vehicle, which has an especially large effect on the relative position of a distant vehicle.
In view of the above and other problems, the basic idea of the present application is a method of determining vehicle shake that combines a visual shake amount and an IMU shake amount, thereby compensating ranging errors. The motion state of the vehicle can be determined from visual information alone to obtain an anti-shake effect, but visual anti-shake is easily affected by conditions such as illumination and motion blur and therefore fails in some cases. IMU anti-shake is not influenced by such environmental conditions, but accumulates error over time, making long-term estimation of the motion state inaccurate. By fusing the two, a more stable and accurate result can be obtained, and with the shake amount of the host vehicle determined, the relative vehicle distance can be accurately estimated. The present application also proposes apparatuses, electronic devices, vehicles, computer program products, etc. that implement the above methods.
By combining the advantages of vision and IMU, the change of the relative position of the vehicle in the image caused by vehicle bump can be stably and accurately compensated under various environmental conditions in the driving process of the vehicle, so that the distance measurement precision is ensured. The method can be used for measuring the distance of vehicles of different types, and has high precision and wide practicability.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary scenarios
Fig. 1 is a schematic diagram illustrating an application scenario of vehicle anti-shake according to an embodiment of the present application.
As shown in fig. 1, the vehicle anti-shake scheme of the present application may be applied to a vehicle 10, which may also be referred to as a host vehicle. The host vehicle 10 may be equipped with an Advanced Driver Assistance System (ADAS) that, while traveling, may require visual ranging of a vehicle in front (not shown) in order to make driving decisions based on the distance to that vehicle. As described above, visual ranging may be affected by the shaking of the host vehicle 10, so the anti-shake scheme of the present invention may be implemented on the vehicle 10 to achieve stable and accurate ranging.
Typically, the vehicle 10 includes an onboard camera 12 that captures images of the environment surrounding the vehicle 10, including images in front of the vehicle; these images may be used to implement the visual anti-shake and ranging scheme of the present invention. The ADAS system may use the shake-compensated visual ranging results to assess the driving environment and make corresponding driving maneuvers, such as braking or accelerating. The onboard camera 12 may be a monocular camera whose captured images contain no depth information. Of course, when the onboard camera 12 is a binocular or multi-view camera, it may also be used to implement aspects of the present invention.
The vehicle 10 may also include an Inertial Measurement Unit (IMU) 14, which typically includes an accelerometer and a gyroscope for measuring, respectively, the acceleration and angular velocity of an object in three-dimensional space, from which the attitude of the object, including pitch, yaw, and roll angles, can be resolved. As described below, the IMU unit 14 may be used to determine the attitude of the vehicle 10, and thus its shake, i.e., its pitch angle.
With continued reference to FIG. 1, the vehicle 10 may also include an onboard electronic device 16, such as an onboard computer, the specific structure of which will be described in further detail below. The in-vehicle electronic device 16 may obtain the image taken by the in-vehicle camera 12 and the measurement result of the IMU unit 14, and use these data to perform the anti-shake and ranging method of the present invention described below.
Although embodiments of the present invention are described herein in the context of a vehicle, it should be understood that the principles of the present invention may also be applied to other scenarios requiring visual ranging, such as mobile robots and the like, which should be understood to be included within the scope of the appended claims and their equivalents.
Exemplary method
FIG. 2 illustrates a flow chart of a method 100 for determining an amount of shake of a vehicle according to an embodiment of the present application. As shown in FIG. 2, the method 100 may begin at step S110 by obtaining a plurality of frame images of the environment surrounding the vehicle 10, including images of a leading vehicle (not shown) for which ranging is desired, using the onboard camera 12. Here, the vehicle 10 may be referred to as a host vehicle, a current vehicle, or a following vehicle, and a vehicle requiring distance measurement may be referred to as a preceding vehicle or a target vehicle. As previously mentioned, the onboard camera 12 may be a monocular camera, and although fig. 1 shows the onboard camera 12 mounted on top of the vehicle 10, it may also be mounted in other locations, such as at the vehicle head, etc.
Then, at step S120, a shake of the vehicle 10, referred to herein as the first shake, may be determined based on the obtained multi-frame images. Step S120 may be implemented by visually determining the motion state of the vehicle 10 and thus its amount of shake. As an example, a feature-point tracking algorithm may be employed. For algorithmic efficiency, instead of tracking the whole image, a region of interest (ROI) may be selected and a plurality of feature points, here denoted first feature points P1, extracted from it. For example, feature points may be extracted uniformly within the ROI at a fixed step in the image-width and image-height directions. As an example, 48 feature points arranged in a matrix may be taken in steps of 20 pixels in the width direction and 15 pixels in the height direction. Of course, more or fewer feature points may be extracted, but the number is preferably 4 or more. Fig. 3 shows an example of several feature points P1 on an image frame 101, where only 9 feature points are shown for simplicity and clarity.
The first feature points P1 can then be tracked using a tracking algorithm, such as the KLT algorithm, to find the corresponding feature points P2 in the next frame of image. In the example of fig. 3, the same number of corresponding feature points P2 are found in the next image frame 102. The first shake amount of the vehicle 10 may then be determined from the positional relationship of the corresponding first feature points P1 and second feature points P2. Specifically, the transformation matrix H between the two frame images 101 and 102 can be calculated from the coordinates of the corresponding feature points P1 and P2 using Equation 1:

P2 = H·P1    (Equation 1)

The transformation matrix H is also referred to as a homography matrix. The coordinates of the pixel points P1 and P2 in the image can be represented as three-dimensional homogeneous vectors, and the homography matrix H can be represented as a 3 × 3 matrix with eight degrees of freedom. The pitch angle and heading angle of the camera 12 can be calculated from the homography matrix H, and the shake amount dA1 can be determined from the pitch angle; it may be expressed as an angle or as a pixel offset on the image.
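The grid extraction and homography estimation of Equation 1 can be sketched as follows. This is an illustrative numpy implementation, not the patent's code: the function name `estimate_homography` and the synthetic 5-pixel vertical shake are our own. In practice a tracker such as KLT would supply the P1-to-P2 correspondences, and a calibrated decomposition of H would yield the pitch angle; here the vertical translation term of H is read off as a pixel-offset proxy for dA1.

```python
import numpy as np

def estimate_homography(p1, p2):
    """Estimate the 3x3 homography H with P2 ~ H @ P1 (Equation 1) via the
    direct linear transform (DLT). p1, p2: (N, 2) corresponding pixel
    coordinates, N >= 4 and not all collinear."""
    A = []
    for (x, y), (u, v) in zip(p1, p2):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (flattened) is the null vector of A: the smallest right singular vector.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2, 2] == 1

# Grid of first feature points P1, as in the patent's uniform-step extraction
# (steps of 20 px in width and 15 px in height; 9 points for brevity).
xs, ys = np.meshgrid(np.arange(0, 60, 20), np.arange(0, 45, 15))
p1 = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
# Simulate a pure vertical shake of 5 pixels between frames 101 and 102.
p2 = p1 + np.array([0.0, 5.0])

H = estimate_homography(p1, p2)
dy = H[1, 2]   # vertical translation of H, a pixel-offset proxy for dA1
```

For this synthetic pure-translation case the recovered H is the identity plus a 5-pixel vertical shift, so `dy` recovers the simulated shake.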
Referring back to fig. 2, after the first shake dA1 is determined from the multi-frame images in step S120, a second shake dA2 of the vehicle is determined in step S130 based on the IMU unit 14. The IMU unit 14 can output in real time the current attitude of the vehicle 10, including its pitch angle, which represents the included angle between the vehicle 10 and the ground; the shake of the vehicle 10 can thus be determined and expressed as the pitch angle or the resulting pixel shift. In some embodiments, it may also be necessary to determine whether the IMU unit 14 has been calibrated before using it to determine the shake of the vehicle 10. When the IMU unit 14 is mounted on a vehicle, its orientation may be at an angle to the vehicle, and the IMU unit itself may have an offset, so a newly installed IMU unit 14 requires calibration. Many methods for calibrating an IMU exist in the prior art. As an example, the vehicle 10 may be parked still on horizontal ground; the acceleration measured by the IMU unit 14, converted to the vehicle coordinate system, should then contain only the gravitational acceleration, i.e. G = (0, 0, 9.8), where 9.8 is the value of the gravitational acceleration, and the rotation angle reported by the gyroscope should be 0 degrees. After the vehicle 10 has been parked stationary for a certain period, e.g. 50 seconds or more, several frames of IMU data, preferably 100 frames or more, e.g. 200 frames, are taken, and the average acceleration and gyroscope readings are computed to obtain the gyroscope offset and the rotation angle q, where q is a quaternion representation. Calibration of the IMU unit 14 is completed by resolving the quaternion rotation angle q, which represents the attitude of the IMU unit 14 relative to the vehicle 10, to determine the angle of the IMU unit 14 to the ground.
It will be appreciated that in use, the IMU unit 14 outputs in real time its own attitude, including pitch, yaw and roll angles, etc., which is then translated into the attitude of the vehicle 10 in accordance with the calibration parameters.
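To make the IMU-derived shake dA2 comparable with the pixel-offset form of dA1, the pitch angle can be converted into the vertical image shift it induces. A minimal sketch, assuming a pinhole camera; the focal length of 1000 pixels is a hypothetical value, not one from the patent.

```python
import math

def pitch_to_pixel_shift(pitch_rad, focal_px=1000.0):
    """Vertical pixel shift induced in the image by a camera pitch change
    (pinhole model); focal_px is the focal length in pixels (assumed)."""
    return focal_px * math.tan(pitch_rad)

# A 0.5-degree vehicle pitch, expressed as a pixel offset
shift = pitch_to_pixel_shift(math.radians(0.5))
```

With these assumed values, half a degree of pitch already moves the image by roughly nine pixels, which illustrates why distant targets are so sensitive to host-vehicle pitching.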
After the second shake amount dA2 of the vehicle 10 is determined based on the IMU unit 14, in step S140 the first shake amount dA1 and the second shake amount dA2 are fused to determine the final shake amount dA of the vehicle 10. The fusion is performed by giving dA1 and dA2 appropriate weights, using the concept of confidence introduced in the present invention, which will be described in detail with reference to fig. 4.
As shown in fig. 4, first in step S141, the reprojection error between the mutually corresponding feature points P1 and P2 may be calculated, using the homography matrix H previously obtained from Equation 1:

ε = max( ||P̂1 − P1||, ||P̂2 − P2|| )    (Equation 2)

where P̂1 is an estimate of P1 and P̂2 is an estimate of P2, obtained by transforming the feature points P2 and P1, respectively, with the homography H. By Equation 2, the maximum reprojection error ε over all feature-point pairs is determined. Then, in step S142, the confidence conf of the first shake amount dA1 may be determined using Equation 3:

conf = 0, if ε ≥ T
conf = (T − ε) / T, if ε < T    (Equation 3)

where T is a preset reprojection-error threshold. When the reprojection error ε reaches the threshold, the calculated first shake amount dA1 is considered to have no confidence, and conf is zero. When ε is less than the threshold, dA1 is considered reliable, and conf is computed according to Equation 3.

Finally, in step S143, the first shake dA1 and the second shake dA2 may be fused with the determined confidence to obtain the shake amount dA of the vehicle 10, for example as shown in Equation 4:

dA = conf · dA1 + (1 − conf) · dA2    (Equation 4)
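The fusion steps above can be sketched together in one function. This is an illustrative numpy implementation, not the patent's code: the threshold value T = 3 pixels and the symmetric form of the reprojection error are our own choices, and the example feeds in a perfectly consistent H so that the visual estimate receives full confidence.

```python
import numpy as np

def fuse_shake(p1, p2, H, dA1, dA2, T=3.0):
    """Fuse visual shake dA1 and IMU shake dA2 per Equations 2-4.
    T is the preset reprojection-error threshold (illustrative value)."""
    def project(M, pts):
        # Apply a homography M to (N, 2) pixel points.
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        out = (M @ homo.T).T
        return out[:, :2] / out[:, 2:3]

    # Equation 2: maximum reprojection error over all feature-point pairs,
    # estimating P2 from P1 via H and P1 from P2 via H's inverse.
    err2 = np.linalg.norm(project(H, p1) - p2, axis=1)
    err1 = np.linalg.norm(project(np.linalg.inv(H), p2) - p1, axis=1)
    eps = max(err1.max(), err2.max())

    # Equation 3: confidence of the visual shake estimate
    conf = 0.0 if eps >= T else (T - eps) / T

    # Equation 4: confidence-weighted fusion
    return conf * dA1 + (1.0 - conf) * dA2, conf

# Corresponding points that agree exactly with a pure 5-px vertical shift
p1 = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 15.0], [20.0, 15.0]])
p2 = p1 + np.array([0.0, 5.0])
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 5.0], [0.0, 0.0, 1.0]])
dA, conf = fuse_shake(p1, p2, H, dA1=5.0, dA2=4.2)
```

Because the reprojection error is zero here, conf comes out as 1 and the fused shake equals dA1; as ε grows toward T the result slides continuously toward the IMU estimate dA2.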
In the above-described embodiment, the shake of the vehicle 10 is determined by combining both the visual image and the IMU unit, which avoids the susceptibility of the visual method to factors such as lighting conditions and motion blur, and also alleviates the accuracy loss of the IMU unit due to accumulated error, so that stable and accurate shake estimation can be achieved.
Another embodiment of the present invention provides a method for implementing visual ranging based on the jitter amount determined above, and a flowchart thereof is shown in fig. 5. As shown in fig. 5, the method 200 may begin at step S210 by determining an amount of shake of the vehicle 10, which may be performed using the methods described above with reference to fig. 1-4 and will not be repeated here. Then, in step S220, a ranging image including the target vehicle (or referred to as a leading vehicle) is obtained. It should be understood that the ranging image including the target vehicle obtained here may be the current frame image used previously when the vehicle shake amount is determined in step S210, thereby ensuring that the determined shake corresponds to the shake amount at the time of ranging, so that the shake amount can accurately compensate the ranging result, as described below.
Next, in step S230, a first position of the target vehicle in the ranging image is determined, which may be based on a tail feature of the vehicle. For example, the tail of the target vehicle may be identified in the ranging image by an image recognition algorithm to determine its relative position in the image. In general, this relative position corresponds to the distance of the target vehicle from the host vehicle, but the position of the target vehicle in the image can vary greatly due to the shake of the host vehicle. For example, when the head of the host vehicle pitches up, the position of the target vehicle in the image moves down and the measured distance becomes closer; when the head pitches down, the position of the target vehicle in the image moves up and the measured distance becomes farther. To remove the influence of shake, in step S240 the first position of the target vehicle is compensated based on the determined shake amount of the host vehicle to obtain a second position. Finally, in step S250, the distance of the target vehicle from the host vehicle may be determined based on the second position.
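Steps S230 through S250 can be sketched under a flat-ground pinhole model, which the patent does not itself specify; all camera parameters here (focal length, principal-point row, camera height) and the 40-pixel shake are assumed values for illustration only.

```python
def compensated_distance(v_obs, shake_px, focal_px=1000.0, cy=360.0,
                         cam_height=1.5):
    """Distance to the target vehicle from the image row of its ground-contact
    point, after removing the host vehicle's shake (steps S230-S250).
    Flat-ground pinhole model; all camera parameters are assumed.

    v_obs:    observed image row of the target's tail contact point (pixels)
    shake_px: vertical pixel offset caused by host-vehicle pitch (fused dA)
    """
    v = v_obs - shake_px          # second (shake-compensated) position
    if v <= cy:
        raise ValueError("contact point must lie below the horizon row cy")
    return focal_px * cam_height / (v - cy)

# A nose-up pitch lowers the target in the image by 40 px; compensating
# restores the true row before the distance is computed.
d = compensated_distance(v_obs=600.0, shake_px=40.0)
```

With these assumed numbers the compensated distance is 7.5 m, whereas the uncompensated row 600 would have yielded 6.25 m, i.e. the target would wrongly appear 1.25 m closer: exactly the pitch-induced error the method is designed to remove.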
Exemplary devices
Fig. 6 shows a functional block diagram of a vehicle shake determination apparatus 300 according to an embodiment of the present application. Fig. 7 shows a functional block diagram of a visual ranging apparatus 400 according to an embodiment of the present application. Since the specific functions and operations of the respective units and modules in these apparatuses have been described in detail in the above description with reference to fig. 1 to 5, they are only briefly introduced here to avoid repetitive descriptions.
Referring first to fig. 6, the vehicle shake determination apparatus 300 includes an image acquisition unit 310, a first shake calculation unit 320, a second shake calculation unit 330, and a fusion unit 340.
The image acquisition unit 310 may acquire multi-frame images of the surroundings of the vehicle 10 captured by the onboard camera 12, which may include, for example, images of a preceding vehicle. The first shake calculation unit 320 can calculate the first shake dA1 of the vehicle 10 based on the obtained multi-frame images, as described above. Specifically, the first shake calculation unit 320 may include: a feature point extraction unit 322 for extracting a plurality of first feature points P1 from a first frame image of the multi-frame images; a tracking unit 324 for tracking the plurality of first feature points P1 to determine a corresponding plurality of second feature points P2 in a second frame image of the multi-frame images; and a shake calculation unit 326 that calculates the first shake dA1 based on the plurality of first feature points and the plurality of second feature points, as described previously.
The second shake calculation unit 330 is operable to calculate a second shake dA2 of the vehicle based on measurement data of an Inertial Measurement Unit (IMU). Specifically, the second shake calculation unit 330 may include a ground angle calculation unit 332, which may determine the attitude of the vehicle 10, including its pitch angle with respect to the ground, based on the measurement data of the IMU unit, and a shake calculation unit 334, which may calculate the second shake dA2 of the vehicle 10 from the angle between the vehicle 10 and the ground determined by the IMU unit. The second shake dA2 may be expressed as a pitch angle or as the pixel shift amount it causes.
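The conversion from a pitch-angle change to an image-space shift, as performed when dA2 is expressed as a pixel shift, can be sketched as follows; the focal length is an assumed parameter, as the application does not specify the conversion here:

```python
import math

def pitch_to_pixel_shift(delta_pitch_rad, f_px=1000.0):
    """Convert a pitch-angle change (radians) into a vertical pixel shift.

    Under a pinhole model, rotating the camera by delta_pitch about its
    horizontal axis shifts a distant point vertically by roughly
    f * tan(delta_pitch) pixels; for small angles this is approximately
    f * delta_pitch.
    """
    return f_px * math.tan(delta_pitch_rad)
```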
The fusion unit 340 can be used to fuse the first shake dA1 and the second shake dA2 to determine the shake amount dA of the vehicle. In some embodiments, the fusion unit 340 may include: a reprojection error calculation unit 342 for calculating reprojection errors between the plurality of first feature points P1 and the corresponding plurality of second feature points P2; a confidence calculation unit 344 for determining a confidence conf of the first shake based on the reprojection errors; and a shake calculation unit 346 for fusing the first shake dA1 and the second shake dA2 based on the confidence conf to calculate the shake amount dA of the vehicle 10. The operation of these units has been described in detail above with reference to equations 1-4 and will not be repeated here.
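The error-weighted fusion can be sketched as below. The Gaussian mapping from mean reprojection error to confidence and the sigma value are illustrative assumptions; the application's own equations 1-4 are not reproduced in this text:

```python
import numpy as np

def fuse_shakes(da1, da2, p1, p2, sigma=2.0):
    """Confidence-weighted fusion of visual shake da1 and IMU shake da2.

    The reprojection error is taken here as the mean residual of the tracked
    points after removing their common (median) displacement: small residuals
    mean the visual estimate is trustworthy, so it dominates the fusion.
    """
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    disp = p2 - p1
    residual = disp - np.median(disp, axis=0)          # per-point residual
    err = float(np.linalg.norm(residual, axis=1).mean())
    conf = float(np.exp(-(err / sigma) ** 2))          # err=0 -> conf=1
    return conf * da1 + (1.0 - conf) * da2             # weighted blend
```

When the tracks agree perfectly, the confidence is 1 and the visual estimate is used as-is; as tracking degrades, the result slides toward the IMU estimate.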
In some embodiments, the apparatus 300 may further include an IMU calibration unit 350 for calibrating the external parameters of the IMU unit, so that the pose output by the IMU unit 14 can be converted into the pose of the vehicle 10.
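What the calibrated external parameters enable can be sketched as a rotation composition; the frame conventions and function names below are assumptions for illustration, not the application's own formulation:

```python
import numpy as np

def pitch_rotation(theta):
    """Rotation matrix about the lateral (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def vehicle_attitude(r_world_imu, r_imu_vehicle):
    """Map the IMU's own attitude into the vehicle's attitude.

    r_world_imu   : rotation taking IMU-frame vectors to the world frame
                    (the IMU's pose output)
    r_imu_vehicle : calibrated extrinsic rotation taking vehicle-frame
                    vectors to the IMU frame (the calibration result)
    Returns the rotation taking vehicle-frame vectors to the world frame.
    """
    return r_world_imu @ r_imu_vehicle
```

For pure pitch mounting offsets the composition reduces to adding pitch angles, which is the case relevant to shake estimation.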
Referring to fig. 7, visual ranging device 400 may include a shake amount determination device 410, which may be implemented as device 300 described with reference to fig. 6 to determine an amount of shake of vehicle 10 based on the previously described process. The visual ranging apparatus 400 further includes: an image acquisition unit 420 operable to acquire a range image captured by the onboard camera 12, which may include a target vehicle such as a leading vehicle; a position determination unit 430 operable to determine a first position of the target vehicle in the range image; a compensation unit 440 for compensating the first position based on the amount of shake of the current vehicle to obtain a second position; and a distance determination unit 450 operable to determine a distance between the target vehicle and the current vehicle based on the second position. By compensating for the relative position of the target vehicle in the image using the amount of shake, more accurate monocular visual ranging may be achieved.
Exemplary electronic device
Fig. 8 shows a block diagram of an electronic device 500 according to an embodiment of the present application. The electronic device 500 may be implemented as the in-vehicle electronic device 16 of the vehicle 10 shown in FIG. 1, or as a component in the in-vehicle electronic device 16. Also, the apparatuses 300 and 400 shown in fig. 6 and 7 may be implemented in the electronic device 500 as software or firmware.
As shown in fig. 8, the electronic device 500 may include a processor 510 and a memory 520.
The processor 510 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 500 to perform desired functions.
The memory 520 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 510 to implement the vehicle shake determination method, the visual ranging method, and/or other desired functions of the various embodiments of the present application described above. Related information of the onboard camera 12 and the IMU unit 14, such as drivers, may also be stored in the computer-readable storage medium.
In an example, the electronic device 500 may also include a first interface 530, a second interface 540, and an output unit 550, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The first interface 530 and the second interface 540 may be connected to, for example, the onboard camera 12 and the IMU unit 14, respectively, to receive their data. As previously mentioned, such data may be used by the processor 510 to implement the vehicle shake determination method and the visual ranging method of the various embodiments of the present application. The output unit 550 may output the result data, such as the shake amount of the vehicle 10 and the measured distance, to an external system, such as an ADAS system equipped on the vehicle 10, so that the ADAS system can assess the state and driving environment of the vehicle 10 and make correct driving decisions.
For simplicity, only some components of the electronic device 500 that are relevant to the present application are shown in fig. 8, while some relevant peripheral or auxiliary components are omitted. In addition, the electronic device 500 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the vehicle shake determination method and the visual ranging method according to various embodiments of the present application described in the "exemplary methods" section of this specification, above.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's computing device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the vehicle shake determination method and the visual ranging method according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and may be used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (13)

1. A method for determining an amount of shake of a vehicle, comprising:
acquiring a multi-frame image of the surrounding environment of the vehicle by using a vehicle-mounted camera;
determining a first shake of the vehicle based on the plurality of frame images;
determining a second shake of the vehicle with the inertial measurement unit; and
fusing the first and second shakes to determine a shake of the vehicle,
wherein determining the first shake of the vehicle based on the plurality of frame images comprises:
extracting a plurality of first feature points (P1) from a first frame image of the plurality of frame images;
determining a plurality of second feature points (P2) corresponding to the plurality of first feature points from a second frame image of the plurality of frame images; and
calculating the first shake based on the plurality of first feature points and the plurality of second feature points,
fusing the first shake and the second shake to determine a shake of the vehicle comprises:
calculating reprojection errors between the plurality of first feature points and the plurality of second feature points;
determining a confidence level of the first shake based on the reprojection errors; and
fusing the first shake and the second shake with the confidence level to determine the shake of the vehicle.
2. The method of claim 1, wherein the onboard camera is a monocular camera.
3. The method of claim 1, wherein prior to determining a second shake of the vehicle with the inertial measurement unit, the method further comprises calibrating a parameter of the inertial measurement unit.
4. The method of claim 3, wherein determining the second shake of the vehicle with the inertial measurement unit comprises:
determining an included angle between the vehicle and the ground by using a measurement value of the inertial measurement unit; and
determining a second shake of the vehicle based on the included angle.
5. A method for determining a distance between a current vehicle and a target vehicle, comprising:
determining a shake of a current vehicle using the method of any of claims 1-4;
obtaining a range image with an onboard camera, the range image including the target vehicle;
determining a first position of the target vehicle in the range image;
compensating the first position by using the shake of the current vehicle to obtain a second position; and
determining a distance between the target vehicle and the current vehicle based on the second location.
6. The method of claim 5, wherein the ranging image is one of a plurality of frame images of the vehicle surroundings.
7. An apparatus for determining an amount of shake of a vehicle, comprising:
the vehicle-mounted camera comprises an image acquisition unit, a processing unit and a processing unit, wherein the image acquisition unit is used for acquiring multi-frame images of the surrounding environment of the vehicle, which are shot by the vehicle-mounted camera;
a first shake calculation unit that calculates a first shake of the vehicle based on the plurality of frame images;
a second shake calculation unit for calculating a second shake of the vehicle based on the measurement data of the inertial measurement unit; and
a fusion unit for fusing the first shake and the second shake to determine a shake of the vehicle,
wherein the first shake calculation unit includes:
a feature point extraction unit for extracting a plurality of first feature points (P1) from a first frame image of the plurality of frame images;
a tracking unit for tracking the plurality of first feature points to determine a corresponding plurality of second feature points (P2) in a second frame image of the plurality of frame images; and
a shake calculation unit configured to calculate the first shake based on the plurality of first feature points and the plurality of second feature points,
the fusion unit includes:
a reprojection error calculation unit configured to calculate reprojection errors between the plurality of first feature points and the plurality of second feature points;
a confidence calculation unit for determining a confidence of the first shake based on the reprojection errors; and
a shake calculation unit configured to calculate a shake of the vehicle based on the confidence, the first shake, and the second shake.
8. The apparatus of claim 7, further comprising:
an inertial measurement unit calibration unit for calibrating external parameters of the inertial measurement unit.
9. The apparatus of claim 7, wherein the second shake calculation unit comprises:
a ground angle calculation unit for determining an included angle between the vehicle and the ground using a measurement value of the inertial measurement unit; and
a shake calculation unit for calculating the second shake of the vehicle based on the included angle.
10. An apparatus for determining a distance between a current vehicle and a target vehicle, comprising:
the apparatus for determining an amount of shake of a vehicle of any of claims 7-9, for determining an amount of shake of a current vehicle;
the image acquisition unit is used for acquiring a ranging image shot by a vehicle-mounted camera, and the ranging image comprises the target vehicle;
a first position determination unit for determining a first position of the target vehicle in the ranging image;
a compensation unit for compensating the first position based on the shake of the current vehicle to determine a second position; and
a distance determination unit to determine a distance between the target vehicle and the current vehicle based on the second position.
11. An electronic device, comprising:
a processor; and
a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the method of any of claims 1-6.
12. A vehicle comprising the electronic device of claim 11.
13. A computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-6.
CN201810929374.XA 2018-08-15 2018-08-15 Anti-shake distance measuring method and device for monocular ADAS (Advanced Driver Assistance System) and electronic equipment Active CN110836656B (en)


Publications (2)

Publication Number Publication Date
CN110836656A CN110836656A (en) 2020-02-25
CN110836656B 2022-01-18




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant