CN110751685A - Depth information determination method, determination device, electronic device and vehicle - Google Patents

Depth information determination method, determination device, electronic device and vehicle

Info

Publication number
CN110751685A
Authority
CN
China
Prior art keywords
camera
target object
determining
depth information
motion
Prior art date
Legal status
Granted
Application number
CN201911000153.5A
Other languages
Chinese (zh)
Other versions
CN110751685B (en)
Inventor
史海龙
郭彦东
Current Assignee
Guangzhou Xiaopeng Motors Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Motors Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Motors Technology Co Ltd
Priority to CN201911000153.5A
Publication of CN110751685A
Application granted
Publication of CN110751685B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/20 Analysis of motion
    • G06T 7/292 Multi-camera tracking
    • G06T 7/70 Determining position or orientation of objects or cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a depth information determination method for a binocular camera. The depth information determination method comprises the following steps: determining the motion trajectory of a target object according to images acquired by the binocular camera; determining the time delay between the first camera and the second camera of the binocular camera according to the motion trajectory; and determining the three-dimensional coordinates of the target object at a first moment according to the time delay and the motion trajectory, as the depth information of the target object at the first moment. In the depth information determination method of the embodiments of the application, the target object is detected and tracked by both cameras of the binocular camera to obtain its motion trajectory, which improves the robustness of stereo matching; calculating the depth information from the motion trajectory of the target object eliminates the systematic error caused by the lack of time synchronization. The application also discloses a determination device, an electronic device and a vehicle.

Description

Depth information determination method, determination device, electronic device and vehicle
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a depth information determining method, a depth information determining apparatus, an electronic apparatus, and a vehicle.
Background
Binocular stereo vision is a common computer vision technique used to estimate the position of objects in the real world, in particular their depth. When a binocular camera is used for depth estimation, the two cameras are usually required to use the same or similar parameter settings. In practical engineering, strict time synchronization of the two cameras often cannot be guaranteed, which introduces a systematic error into the estimated depth information.
Disclosure of Invention
In view of this, embodiments of the present application provide a depth information determining method, a determining apparatus, and an electronic apparatus.
The application provides a depth information determination method, which is used for a binocular camera, wherein the binocular camera is used for acquiring an image of a target object, the binocular camera comprises a first camera and a second camera, and the depth information determination method comprises the following steps:
determining the motion track of the target object according to the image;
determining a time delay of the first camera and the second camera according to the motion trajectory;
and determining the three-dimensional coordinates of the target object at a first moment according to the time delay and the motion trail to serve as the depth information of the target object at the first moment.
In some embodiments, the determining the motion trajectory of the target object from the image comprises:
detecting and tracking the target object through the first camera and the second camera respectively to obtain a first motion track point set and a second motion track point set of the target object respectively;
and fitting the first motion track and the second motion track of the target object according to the first motion track point set and the second motion track point set.
In some embodiments, the determining the time delay of the first camera and the second camera from the motion trajectory comprises:
time-aligning the first set of motion trajectory points with the second set of motion trajectory points;
determining epipolar geometric constraint conditions;
determining a time difference of the first camera and the second camera according to the epipolar geometric constraint to determine a time delay of the first camera and the second camera.
In some embodiments, the determining, as the depth information of the target object at the first time, the three-dimensional coordinates of the target object at the first time according to the time delay and the motion trajectory includes:
and determining first position information of the target object in the first camera and second position information of the target object in the second camera at the first moment according to the time difference, the first motion trail and the second motion trail so as to obtain two-dimensional coordinates of the target object at the first moment.
In some embodiments, the determining the three-dimensional coordinate of the target object at the first time as the depth information of the target object at the first time according to the time delay and the motion trail includes:
determining the three-dimensional coordinates of the target object at a first moment by triangulation according to the two-dimensional coordinates of the target object at the first moment;
and taking the three-dimensional coordinates as the depth information of the target object at the first moment.
The application provides a depth information determination device for a binocular camera. The binocular camera is used for acquiring an image of a target object and includes a first camera and a second camera. The determination device includes:
the motion track determining module is used for determining the motion track of the target object according to the image;
a time delay determining module for determining the time delay of the first camera and the second camera according to the motion trail;
and the depth information determining module is used for determining the three-dimensional coordinates of the target object at the first moment according to the time delay and the motion trail to serve as the depth information of the target object at the first moment.
The application provides an electronic device including a binocular camera and a processor. The binocular camera is used for acquiring an image of a target object and includes a first camera and a second camera. The processor is configured to:
determining the motion track of the target object according to the image;
determining a time delay of the first camera and the second camera according to the motion trajectory;
and determining the three-dimensional coordinates of the target object at a first moment according to the time delay and the motion trail to serve as the depth information of the target object at the first moment.
In some embodiments, the processor is further configured to:
detecting and tracking the target object through the first camera and the second camera respectively to obtain a first motion track point set and a second motion track point set of the target object respectively;
and fitting the first motion track and the second motion track of the target object according to the first motion track point set and the second motion track point set.
In some embodiments, the processor is further configured to:
time-aligning the first set of motion trajectory points with the second set of motion trajectory points;
determining epipolar geometric constraint conditions;
determining a time difference of the first camera and the second camera according to the epipolar geometric constraint to determine a time delay of the first camera and the second camera.
In some embodiments, the processor is further configured to:
and determining first position information of the target object in the first camera and second position information of the target object in the second camera at the first moment according to the time difference, the first motion trail and the second motion trail so as to obtain two-dimensional coordinates of the target object at the first moment.
In some embodiments, the processor is further configured to:
determining the three-dimensional coordinates of the target object at a first moment by triangulation according to the two-dimensional coordinates of the target object at the first moment;
and taking the three-dimensional coordinates as the depth information of the target object at the first moment.
A vehicle is provided that includes a binocular camera, one or more processors, a memory; and one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the depth information determination method as described above.
A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the depth information determination method is provided.
In the depth information determining method, the determining device, the electronic device, the vehicle and the computer-readable storage medium of the embodiments of the present application, the target object is detected and tracked by the binocular camera to obtain its motion trajectory, which improves the robustness of stereo matching; the depth information is calculated from the motion trajectory of the target object, which not only effectively eliminates the systematic error caused by the lack of time synchronization but also yields the depth information of the target object, so the application has strong usability.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a depth information determination method according to some embodiments of the present disclosure.
Fig. 2 is a block diagram of a depth information determining apparatus according to some embodiments of the present disclosure.
Fig. 3 is a scene schematic diagram of a depth information determining method according to some embodiments of the present application.
Fig. 4 is a scene schematic diagram of a depth information determining method according to some embodiments of the present application.
Fig. 5 is a flowchart illustrating a depth information determining method according to some embodiments of the present disclosure.
Fig. 6 is a scene schematic diagram of a depth information determining method according to some embodiments of the present application.
Fig. 7 is a flowchart illustrating a depth information determining method according to some embodiments of the present disclosure.
Fig. 8 is a scene schematic diagram of a depth information determining method according to some embodiments of the present application.
Fig. 9 is a flowchart illustrating a depth information determining method according to some embodiments of the present disclosure.
Fig. 10 is a flowchart illustrating a depth information determining method according to some embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
Referring to fig. 1, the present application provides a depth information determining method for a binocular camera. The binocular camera is used for acquiring images of a target object, and comprises a first camera and a second camera. The determination method comprises the following steps:
s10: determining the motion track of the target object according to the image;
s20: determining a time delay of the first camera and the second camera according to the motion trajectory;
s30: and determining the three-dimensional coordinates of the target object at the first moment according to the time delay and the motion trail as the depth information of the target object at the first moment.
Referring to fig. 2 to 4, an electronic apparatus 100 is provided in an embodiment of the present disclosure. The electronic apparatus 100 includes a binocular camera 11 and a processor (not shown). The binocular camera 11 includes a first camera 11a and a second camera 11b, and is used to acquire an image of a target object. The processor is configured to determine a motion trajectory of the target object according to the image, determine the time delay of the first camera 11a and the second camera 11b according to the motion trajectory, and determine the three-dimensional coordinates of the target object at a first time, as the depth information of the target object at the first time, according to the time delay and the motion trajectory. The electronic apparatus 100 may be an industrial computer used to acquire high-precision three-dimensional information in industrial production, or a consumer electronics device such as a game console.
The embodiment of the present application further provides a depth information determining device 110, and the depth information determining method according to the embodiment of the present application can be implemented by the determining device 110.
In particular, the determination device 110 includes a motion trajectory determination module 112, a time delay determination module 114 and a depth information determination module 116. S10 may be implemented by the motion trajectory determination module 112, S20 may be implemented by the time delay determination module 114, and S30 may be implemented by the depth information determination module 116. In other words, the motion trajectory determination module 112 is configured to determine the motion trajectory of the target object according to the image. The time delay determination module 114 is configured to determine the time delay of the first camera 11a and the second camera 11b from the motion trajectory. The depth information determining module 116 is configured to determine, according to the time delay and the motion trajectory, the three-dimensional coordinates of the target object at the first time as the depth information of the target object at the first time.
In the depth information determining method, the determining device and the electronic device of the embodiments of the application, the target object is detected and tracked by the binocular camera 11 to obtain its motion trajectory, which improves the robustness of stereo matching; the depth information is calculated from the motion trajectory of the target object, which effectively eliminates the systematic error caused by the lack of time synchronization while still yielding the depth information of the target object, so the application has strong usability.
In particular, binocular (or multi-ocular) depth estimation is a commonly used visual perception technique in computer vision applications. The same object is imaged by two cameras placed at different positions with a fixed relative pose; the three-dimensional information of the target object is then obtained by mathematical calculation from the difference between the pixel positions at which the object is imaged in the two cameras and from the relative parameters of the binocular camera.
In the related art, for convenience of processing, the two cameras are generally required to adopt the same or similar parameter settings, for example the same field of view, focal length and resolution, and synchronized shutters at the time of imaging. Designing and deploying two cameras that are perfectly time synchronized requires driving clocks from the same source and tight control of the delays in signal and data transmission. In practice, most "synchronous" binocular cameras are realized merely by adding a timestamp at the data end rather than being truly synchronized; the physical delay of image acquisition and transmission cannot be accurately estimated, so a large time error remains.
Therefore, in actual operation, a binocular scene that is not time synchronized is often faced. Assume, as shown in fig. 4, that the shutter time of the second camera 11b lags the first camera 11a by Δt and that the target object moves during this interval; the pixel position imaged in the second camera 11b then moves from x'(t_k) to x'(t_k + Δt). If the coordinates x'(t_k + Δt) and x(t_k) are triangulated directly, the resulting three-dimensional coordinates of the object will be incorrect.
In the related art, pixel points in the time-staggered left and right images acquired by the binocular camera must be matched, a transformation model between pixels is then constructed for the current scene, and the left and right pixel positions that cannot actually be measured because of the time misalignment are estimated by temporal interpolation. However, in a complex scene with many moving target objects, the related art can neither perform pixel matching between the left and right images simply and reliably nor obtain reliable depth estimates of the moving target objects.
In the depth information determining method of the embodiments of the application, based on trajectory tracking of the target object, the motion trajectory of the target object is first obtained; the time difference between the two image streams, that is, the time delay between the first camera 11a and the second camera 11b, is then derived from the motion trajectory; finally, the time-corrected two-dimensional coordinates of the target are obtained by interpolating the motion trajectory with the estimated delay, from which the depth information of the target is obtained.
Referring to fig. 5, in some embodiments, S10 includes:
s11: detecting and tracking the target object through a first camera and a second camera respectively to obtain a first motion track point set and a second motion track point set of the target object respectively;
s12: and fitting the first motion track and the second motion track of the target object according to the first motion track point set and the second motion track point set.
In some embodiments, S11 and S12 may be implemented by the motion trajectory determination module 112. That is, the motion trajectory determination module 112 is configured to detect and track the target object through the first camera 11a and the second camera 11b respectively to obtain a first motion trajectory point set and a second motion trajectory point set of the target object, and to fit the first motion trajectory and the second motion trajectory of the target object according to the first motion trajectory point set and the second motion trajectory point set.
In some embodiments, the processor (not shown) is configured to detect and track the target object through the first camera 11a and the second camera 11b respectively to obtain a first motion trajectory point set and a second motion trajectory point set of the target object, and to fit the first motion trajectory and the second motion trajectory of the target object according to the first motion trajectory point set and the second motion trajectory point set.
Referring to fig. 6, the parameters and shutter times of the first camera 11a and the second camera 11b of the binocular camera 11 in the present application may differ. In actual operation, a deep-learning target detection and tracking algorithm can be adopted to detect and track the target object in real time, and detection methods of different complexity can be selected according to the overall configuration, resource allocation and cost of the electronic equipment; for example, in consumer electronics, lower-complexity deep neural network detectors such as the YOLO or SSD algorithms can be selected. It should be noted that detection and tracking of the target must be performed independently for the two cameras, as in the sketch below. For details of these algorithms, reference may be made to the principles and explanations of the related art, which are not repeated in this application.
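As an illustration of this step only, the following is a minimal sketch. `detect_target` is a hypothetical detector callback (not an API from this application) that returns a bounding box for the target in one frame; each camera runs its own `Track` instance so the two point sets stay independent:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    times: list = field(default_factory=list)    # frame timestamps t_k
    points: list = field(default_factory=list)   # (u, v) pixel centers x(t_k)

def update_track(track, frame, timestamp, detect_target):
    """Run the detector on one frame and append the target's pixel center."""
    bbox = detect_target(frame)   # hypothetical: returns (u0, v0, u1, v1) or None
    if bbox is not None:
        track.times.append(timestamp)
        track.points.append((0.5 * (bbox[0] + bbox[2]),
                             0.5 * (bbox[1] + bbox[3])))
    return track
```

Running this loop over each camera's frames yields the two point sets {x(t_k)} and {x'(t'_k)} used in the following steps.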
Through the deep-learning target detection and tracking described above, the motion trajectory point sets of the target object in the images formed by the first camera 11a and the second camera 11b, namely {…, x(t_{k-1}), x(t_k), x(t_{k+1}), …} and {…, x'(t'_{k-1}), x'(t'_k), x'(t'_{k+1}), …}, can be obtained. From these point sets, the two-dimensional motion trajectories in the first and second images can be obtained by curve fitting. It can be understood that the motion trajectory of an object is generally smooth, so a continuous basis expansion can be used for the fit, that is:

f(t) = c_1·φ_1(t) + c_2·φ_2(t) + … + c_m·φ_m(t),

where {φ_i(t)} is a chosen set of basis functions, for example trigonometric functions {sin(i·t)} or polynomials {t^(i-1)}.
Through curve fitting, a first motion trajectory x(t) of the target object in the first image and a second motion trajectory x'(t') in the second image can be obtained. At time t_k the coordinate of the target object in the first image is x(t_k), and at time t'_k its coordinate in the second image is x'(t'_k); these are points falling on the first motion trajectory x(t) and the second motion trajectory x'(t'), respectively.
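A minimal sketch of this fitting step, assuming the polynomial basis φ_i(t) = t^(i-1) and an ordinary least-squares fit via numpy's polynomial module; the basis choice and degree are illustrative assumptions, and the returned derivative is used by the delay estimation further below:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def fit_trajectory(times, points, degree=3):
    """Fit u(t), v(t) by least squares; return x(t) and its derivative."""
    t = np.asarray(times, dtype=float)
    p = np.asarray(points, dtype=float)              # shape (N, 2)
    cu = P.polyfit(t, p[:, 0], degree)               # coefficients for u(t)
    cv_ = P.polyfit(t, p[:, 1], degree)              # coefficients for v(t)
    dcu, dcv = P.polyder(cu), P.polyder(cv_)

    def x(tq):                                       # fitted trajectory x(t)
        return np.array([P.polyval(tq, cu), P.polyval(tq, cv_)])

    def xdot(tq):                                    # time derivative of x(t)
        return np.array([P.polyval(tq, dcu), P.polyval(tq, dcv)])

    return x, xdot
```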
Therefore, in the embodiments of the application, the target object in the image is taken as the unit of processing: each camera of the binocular camera tracks the target object to obtain a motion trajectory, and the detection and tracking results are identifiable. Compared with matching pixel points between the time-staggered left and right images acquired by the binocular camera as in the related art, this improves the robustness of stereo matching.
Referring to fig. 7, in some embodiments, S20 includes:
s21: time alignment is carried out on the first motion track point set and the second motion track point set;
s22: determining epipolar geometric constraint conditions;
s23: the time difference of the first camera and the second camera is determined according to epipolar geometric constraints to determine the time delay of the first camera and the second camera.
In some embodiments, S21-S23 may be implemented by the time delay determination module 114. That is, the time delay determination module 114 is configured to time align the first motion trajectory point set with the second motion trajectory point set, to determine the epipolar geometric constraint, and to determine the time difference of the first camera 11a and the second camera 11b according to the epipolar geometric constraint so as to determine the time delay of the first camera 11a and the second camera 11b.
In some embodiments, a processor (not shown) is configured to time align the first motion trajectory point set with the second motion trajectory point set, to determine the epipolar geometric constraint, and to determine the time difference of the first camera 11a and the second camera 11b based on the epipolar geometric constraint so as to determine the time delay of the first camera 11a and the second camera 11b.
Referring to fig. 8, specifically, a binocular vision constraint equation may be further constructed according to the motion trajectory point set of the target object obtained by the binocular camera in the previous step.
First, for a target object i, the motion trajectory point set acquired by the first camera 11a, {…, x_i(t_{k-1}), x_i(t_k), x_i(t_{k+1}), …}, and the point set acquired by the second camera 11b, {…, x'_i(t'_{k-1}), x'_i(t'_k), x'_i(t'_{k+1}), …}, are known. To establish the binocular vision constraint equation, the two sets of trajectory points must first be time aligned. Assume the frame rate of the first camera is f, the frame rate of the second camera is f', and the time delay between the first camera 11a and the second camera 11b is t_0; then t_k and t'_k satisfy the correspondence

t_k = t_0 + (f'/f)·t'_k.

Letting ρ = f'/f, we obtain t_k = t_0 + ρ·t'_k. In many cases the frame rates of the first camera 11a and the second camera 11b are the same, in which case ρ = 1.
From this correspondence between the moments of the first camera 11a and the second camera 11b, the epipolar geometric constraint equation of the binocular camera 11 can be constructed:

x'_i(t_k)^T · F · x_i(t_k) = 0,

where F is the fundamental matrix of the binocular camera 11, which can be constructed from the extrinsic and intrinsic parameter matrices of the binocular camera 11. For example, if the intrinsic matrix of the first camera 11a is K, the intrinsic matrix of the second camera is K', and the extrinsics of the first camera 11a and the second camera 11b, i.e. their relative rotation matrix R and translation vector t, are known, then F = K'^(-T)·R·[t]_×·K^(-1). Substituting t_k = t_0 + ρ·t'_k yields:

x'_i(t_0 + ρ·t'_k)^T · F · x_i(t_k) = 0.
taking ρ ═ 1 as an example, taylor expansion can be performed on the above equation to obtain:
wherein
Figure BDA0002241055040000082
Is x'i(t'k) At t'kThe time derivative of (a), in combination with the second motion trajectory x '(t') obtained in the previous step, can be obtained by:
Figure BDA0002241055040000083
to calculate x'i(t'k) At t'kThe time derivative of (c).
Finally, a reference t can be constructed0Simple solution of the equation (a) of (b) to (b):
Figure BDA0002241055040000084
Similarly, the motion trajectory points of the target object at other times, x'_i(t_{k-1}), x_i(t_{k-1}), x'_i(t_{k+1}), x_i(t_{k+1}), x'_i(t_{k+2}), x_i(t_{k+2}), …, can be used to solve for t_0, and the values computed at multiple times can be statistically averaged to reduce the error.
For the case where ρ ≠ 1, the construction and calculation are similar to the above and are not repeated here.
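Putting the fundamental matrix and the closed-form solution together, a sketch for the ρ = 1 case might look as follows. The convention F = K'^(-T)·R·[t]_×·K^(-1) follows the text above; all function and parameter names are illustrative assumptions, and `cam2_traj`/`cam2_vel` are the callables returned by the fitting sketch earlier:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x of a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_matrix(K1, K2, R, t):
    """F = K'^(-T) · R · [t]_x · K^(-1), the convention used in the text."""
    return np.linalg.inv(K2).T @ R @ skew(t) @ np.linalg.inv(K1)

def estimate_delay(F, cam1_points, cam2_traj, cam2_vel, cam2_times):
    """Per-frame closed-form t_0 estimates, statistically averaged (ρ = 1).

    cam1_points: (u, v) observations x_i(t_k) from the first camera
    cam2_traj, cam2_vel: fitted trajectory x'(t') and its time derivative
    cam2_times: the matching frame times t'_k of the second camera
    """
    estimates = []
    for p1, t2 in zip(cam1_points, cam2_times):
        x1 = np.array([p1[0], p1[1], 1.0])        # homogeneous x_i(t_k)
        x2 = np.append(cam2_traj(t2), 1.0)        # homogeneous x'_i(t'_k)
        dx2 = np.append(cam2_vel(t2), 0.0)        # derivative of homog. point
        denom = dx2 @ F @ x1
        if abs(denom) > 1e-12:                    # skip degenerate frames
            estimates.append(-(x2 @ F @ x1) / denom)
    return float(np.mean(estimates))
```

Averaging over many frames, as the text suggests, is what keeps single-frame noise in the fitted trajectories from dominating the estimate.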
Therefore, in the depth information calculation stage, taking the motion trajectory of the target object as the unit of processing effectively eliminates the systematic error caused by the lack of time synchronization.
Referring to fig. 9, in some embodiments, S30 includes:
s31: and determining first position information of the target object in the first camera and second position information of the target object in the second camera at the first moment according to the time difference, the first motion trail and the second motion trail so as to obtain two-dimensional coordinates of the target object at the first moment.
In some embodiments, S31 may be implemented by depth information determination module 116. In other words, the depth information determining module 116 is configured to determine, according to the time difference, the first motion trajectory and the second motion trajectory, first position information of the target object in the first camera 11a and second position information of the target object in the second camera 11b at the first time point, respectively, so as to obtain two-dimensional coordinates of the target object at the first time point.
In some embodiments, the processor (not shown) is configured to determine, according to the time difference, the first motion trajectory and the second motion trajectory, first position information of the target object in the first camera 11a and second position information of the target object in the second camera 11b at the first time point, so as to obtain the two-dimensional coordinates of the target object at the first time point.
Specifically, referring to fig. 8 again, after the time difference t_0 between the first camera 11a and the second camera 11b is obtained, the motion trajectories x_i(t) and x'_i(t') obtained above can be further used to obtain the real pixel position x_i(t_k) of the target object in the first camera 11a at time t_k and its real pixel position x'_i(t_k) in the second camera 11b, thereby obtaining the two-dimensional coordinates of the target object.
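Under the ρ = 1 assumption (so t'_k = t_k − t_0), this time-correction step reduces to evaluating both fitted trajectories at the same instant; a small sketch with illustrative names:

```python
def synchronized_pixels(t_k, t0, cam1_traj, cam2_traj):
    """Evaluate both fitted trajectories at the same instant t_k (ρ = 1)."""
    p1 = cam1_traj(t_k)         # x_i(t_k): camera 1 on its own clock
    p2 = cam2_traj(t_k - t0)    # x'_i(t_k): camera 2 shifted by the delay t_0
    return p1, p2
```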
Referring to fig. 10, in some embodiments, S30 further includes:
s32: determining the three-dimensional coordinate of the target object at the first moment by a triangulation method according to the two-dimensional coordinate of the target object at the first moment;
s33: and taking the three-dimensional coordinates as the depth information of the target object at the first moment.
In some embodiments, S32 and S33 may be implemented by the depth information determining module 116. That is, the depth information determining module 116 is configured to determine the three-dimensional coordinates of the target object at the first time by triangulation based on the two-dimensional coordinates of the target object at the first time, and to use the three-dimensional coordinates as the depth information of the target object at the first time.
In some embodiments, the processor (not shown) is configured to determine the three-dimensional coordinates of the target object at the first time by triangulation based on the two-dimensional coordinates of the target object at the first time, and to use the three-dimensional coordinates as depth information of the target object at the first time.
Specifically, referring to fig. 8 again, the two-dimensional coordinates at time t_k are the real pixel positions x_i(t_k) of the target object in the first camera 11a and x'_i(t_k) in the second camera 11b. The three-dimensional coordinate X_i(t_k) of the target object at time t_k can then be determined by triangulation in stereo vision, and the depth information of the target object can be obtained from the three-dimensional coordinate X_i(t_k).
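A sketch of this final step using OpenCV's standard cv2.triangulatePoints, assuming projection matrices P1 = K·[I|0] and P2 = K'·[R|t] built from the same calibration used for F above; function names are illustrative:

```python
import cv2
import numpy as np

def triangulate(P1, P2, p1, p2):
    """Triangulate one synchronized pixel pair into the 3D point X_i(t_k)."""
    pts1 = np.asarray(p1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(p2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1
    X = (X_h[:3] / X_h[3]).ravel()
    return X   # X[2] is the depth along camera 1's optical axis

# Assumed calibration (illustrative):
# P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
# P2 = K2 @ np.hstack([R, t.reshape(3, 1)])
```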
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the method of determining depth information of any of the embodiments described above.
The embodiment of the application also provides a vehicle. The vehicle includes a binocular camera, a memory, and one or more processors; one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for executing the depth information determination method of any of the above embodiments.
The processor may be used to provide computation and control capabilities to support the operation of the entire vehicle. The memory in the vehicle provides an environment in which the computer-readable instructions run.
The depth information determining method is applied to the vehicle and can be used for detecting the depth information of other vehicles or people around the vehicle in the driving environment, so that a corresponding information basis is provided for automatic driving, obstacle avoidance and the like of the vehicle.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The above examples express only several embodiments of the present application, and although their description is comparatively specific and detailed, they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (13)

1. A depth information determination method for a binocular camera for acquiring an image of a target object, the binocular camera including a first camera and a second camera, the depth information determination method comprising:
determining the motion track of the target object according to the image;
determining a time delay of the first camera and the second camera according to the motion trajectory;
and determining the three-dimensional coordinates of the target object at a first moment according to the time delay and the motion trail to serve as the depth information of the target object at the first moment.
2. The depth information determination method according to claim 1, wherein the determining the motion trajectory of the target object from the image includes:
detecting and tracking the target object through the first camera and the second camera respectively to obtain a first motion track point set and a second motion track point set of the target object respectively;
and fitting the first motion track and the second motion track of the target object according to the first motion track point set and the second motion track point set.
3. The depth information determination method of claim 2, wherein the determining the time delay of the first camera and the second camera from the motion trajectory comprises:
time-aligning the first set of motion trajectory points with the second set of motion trajectory points;
determining epipolar geometric constraint conditions;
determining a time difference of the first camera and the second camera according to the epipolar geometric constraint to determine a time delay of the first camera and the second camera.
4. The depth information determination method according to claim 3, wherein the determining, as the depth information of the target object at the first time, three-dimensional coordinates of the target object at the first time from the time delay and the motion trajectory includes:
and determining first position information of the target object in the first camera and second position information of the target object in the second camera at the first moment according to the time difference, the first motion trail and the second motion trail so as to obtain two-dimensional coordinates of the target object at the first moment.
5. The depth information determination method according to claim 4, wherein the determining, as the depth information of the target object at the first time, the three-dimensional coordinates of the target object at the first time from the time delay and the motion trajectory includes:
determining the three-dimensional coordinates of the target object at a first moment by triangulation according to the two-dimensional coordinates of the target object at the first moment;
and taking the three-dimensional coordinates as the depth information of the target object at the first moment.
6. A depth information determination apparatus for a binocular camera for acquiring an image of a target object, the binocular camera including a first camera and a second camera, the determination apparatus comprising:
the motion track determining module is used for determining the motion track of the target object according to the image;
a time delay determining module for determining the time delay of the first camera and the second camera according to the motion trail;
and the depth information determining module is used for determining the three-dimensional coordinates of the target object at the first moment according to the time delay and the motion trail to serve as the depth information of the target object at the first moment.
7. An electronic device comprising a binocular camera for acquiring an image of a target object, the binocular camera comprising a first camera and a second camera, and a processor for:
determining the motion track of the target object according to the image;
determining a time delay of the first camera and the second camera according to the motion trajectory;
and determining the three-dimensional coordinates of the target object at a first moment according to the time delay and the motion trail to serve as the depth information of the target object at the first moment.
8. The electronic device of claim 7, wherein the processor is further configured to:
detecting and tracking the target object through the first camera and the second camera respectively to obtain a first motion track point set and a second motion track point set of the target object respectively;
and fitting the first motion track and the second motion track of the target object according to the first motion track point set and the second motion track point set.
9. The electronic device of claim 8, wherein the processor is further configured to:
time-aligning the first set of motion trajectory points with the second set of motion trajectory points;
determining epipolar geometric constraint conditions;
determining a time difference of the first camera and the second camera according to the epipolar geometric constraint to determine a time delay of the first camera and the second camera.
10. The electronic device of claim 9, wherein the processor is further configured to:
and determining first position information of the target object in the first camera and second position information of the target object in the second camera at the first moment according to the time difference, the first motion trail and the second motion trail so as to obtain two-dimensional coordinates of the target object at the first moment.
11. The electronic device of claim 10, wherein the processor is further configured to:
determining the three-dimensional coordinates of the target object at a first moment by triangulation according to the two-dimensional coordinates of the target object at the first moment;
and taking the three-dimensional coordinates as the depth information of the target object at the first moment.
12. A vehicle, characterized by comprising:
a binocular camera;
one or more processors, memory; and
one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the depth information determination method of any of claims 1-5.
13. A non-transitory computer-readable storage medium containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the depth information determination method of any of claims 1-5.
CN201911000153.5A 2019-10-21 2019-10-21 Depth information determination method, determination device, electronic device and vehicle Active CN110751685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911000153.5A CN110751685B (en) 2019-10-21 2019-10-21 Depth information determination method, determination device, electronic device and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911000153.5A CN110751685B (en) 2019-10-21 2019-10-21 Depth information determination method, determination device, electronic device and vehicle

Publications (2)

Publication Number Publication Date
CN110751685A true CN110751685A (en) 2020-02-04
CN110751685B CN110751685B (en) 2022-10-14

Family

ID=69279121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911000153.5A Active CN110751685B (en) 2019-10-21 2019-10-21 Depth information determination method, determination device, electronic device and vehicle

Country Status (1)

Country Link
CN (1) CN110751685B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177440A (en) * 2021-04-09 2021-07-27 深圳市商汤科技有限公司 Image synchronization method and device, electronic equipment and computer storage medium
US11232315B2 (en) 2020-04-28 2022-01-25 NextVPU (Shanghai) Co., Ltd. Image depth determining method and living body identification method, circuit, device, and medium
WO2023272524A1 (en) * 2021-06-29 2023-01-05 深圳市大疆创新科技有限公司 Binocular capture apparatus, and method and apparatus for determining observation depth thereof, and movable platform
CN117491003A (en) * 2023-12-26 2024-02-02 国网天津市电力公司城南供电分公司 Circuit breaker motion characteristic detection method and device, electronic equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6445815B1 (en) * 1998-05-08 2002-09-03 Canon Kabushiki Kaisha Measurement of depth image considering time delay
EP2309451A1 (en) * 2009-09-25 2011-04-13 Deutsche Telekom AG Method and system for self-calibration of asynchronized camera networks
CN106780620A (en) * 2016-11-28 2017-05-31 长安大学 A kind of table tennis track identification positioning and tracking system and method
CN108234819A (en) * 2018-01-30 2018-06-29 西安电子科技大学 Video synchronization method based on homograph

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6445815B1 (en) * 1998-05-08 2002-09-03 Canon Kabushiki Kaisha Measurement of depth image considering time delay
EP2309451A1 (en) * 2009-09-25 2011-04-13 Deutsche Telekom AG Method and system for self-calibration of asynchronized camera networks
CN106780620A (en) * 2016-11-28 2017-05-31 长安大学 A kind of table tennis track identification positioning and tracking system and method
CN108234819A (en) * 2018-01-30 2018-06-29 西安电子科技大学 Video synchronization method based on homograph

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邵蓓 (Shao Bei): "Video Synchronization Algorithm Based on Local Homography Matrices" (基于局部单应矩阵的视频同步算法), China Excellent Master's Theses Full-text Database (Master) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11232315B2 (en) 2020-04-28 2022-01-25 NextVPU (Shanghai) Co., Ltd. Image depth determining method and living body identification method, circuit, device, and medium
CN113177440A (en) * 2021-04-09 2021-07-27 深圳市商汤科技有限公司 Image synchronization method and device, electronic equipment and computer storage medium
WO2023272524A1 (en) * 2021-06-29 2023-01-05 深圳市大疆创新科技有限公司 Binocular capture apparatus, and method and apparatus for determining observation depth thereof, and movable platform
CN117491003A (en) * 2023-12-26 2024-02-02 国网天津市电力公司城南供电分公司 Circuit breaker motion characteristic detection method and device, electronic equipment and medium

Also Published As

Publication number Publication date
CN110751685B (en) 2022-10-14

Similar Documents

Publication Publication Date Title
CN110751685B (en) Depth information determination method, determination device, electronic device and vehicle
US10750150B2 (en) Methods for automatic registration of 3D image data
EP3420530B1 (en) A device and method for determining a pose of a camera
US9774837B2 (en) System for performing distortion correction and calibration using pattern projection, and method using the same
US10260862B2 (en) Pose estimation using sensors
US20170019657A1 (en) Stereo auto-calibration from structure-from-motion
US10438412B2 (en) Techniques to facilitate accurate real and virtual object positioning in displayed scenes
US8531505B2 (en) Imaging parameter acquisition apparatus, imaging parameter acquisition method and storage medium
US9454226B2 (en) Apparatus and method for tracking gaze of glasses wearer
JP6662382B2 (en) Information processing apparatus and method, and program
CN113474819A (en) Information processing apparatus, information processing method, and program
JP2008309595A (en) Object recognizing device and program used for it
CN110800020B (en) Image information acquisition method, image processing equipment and computer storage medium
JP2018201146A (en) Image correction apparatus, image correction method, attention point recognition apparatus, attention point recognition method, and abnormality detection system
US10148929B2 (en) Method of prompting proper rotation angle for image depth establishing
JP6734994B2 (en) Stereo measuring device and system
CN112802112B (en) Visual positioning method, device, server and storage medium
KR100961616B1 (en) Method and system for calibrating of omnidirectional camera based on contour matching
CN109328459B (en) Intelligent terminal, 3D imaging method thereof and 3D imaging system
KR101142279B1 (en) An apparatus for aligning images in stereo vision system and the method thereof
KR20170122508A (en) Coordination guide method and system based on multiple marker
KR20160111151A (en) image processing method and apparatus, and interface method and apparatus of gesture recognition using the same
JP2016042639A (en) Camera posture controller, method and program
EP2953096B1 (en) Information processing device, information processing method, system and carrier means
WO2022210005A1 (en) Attitude estimation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant