CN112129317B - Information acquisition time difference determining method and device, electronic equipment and storage medium - Google Patents

Information acquisition time difference determining method and device, electronic equipment and storage medium

Info

Publication number
CN112129317B
CN112129317B
Authority
CN
China
Prior art keywords
information
angle increment
image
time difference
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910550267.0A
Other languages
Chinese (zh)
Other versions
CN112129317A (en)
Inventor
储刘火
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Horizon Robotics Technology Co Ltd
Original Assignee
Nanjing Horizon Robotics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Horizon Robotics Technology Co Ltd
Priority to CN201910550267.0A
Publication of CN112129317A
Application granted
Publication of CN112129317B
Legal status: Active (current)
Anticipated expiration legal-status

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G01C25/005 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass; initial alignment, calibration or starting-up of inertial devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

Embodiments of the disclosure provide a method and an apparatus for determining an information acquisition time difference, an electronic device, and a storage medium, relating to the field of image technology. The method includes: receiving image information acquired by an image acquisition device, and obtaining first angle increment information between image frames based on the image information; receiving inertial measurement information acquired by an inertial measurement device, and obtaining second angle increment information based on the inertial measurement information; and determining information acquisition time difference information between the image acquisition device and the inertial measurement device based on the first angle increment information and the second angle increment information. The method, apparatus, electronic device, and storage medium require no hardware modification, do not depend on hardware logic, and need no driver cooperation to achieve synchronization, so the scheme can be applied to sensors that do not support hardware synchronization. The amount of image shake of a target caused by vehicle vibration and the like can thereby be eliminated, making image-based target tracking and distance measurement more accurate.

Description

Information acquisition time difference determining method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image technologies, and in particular, to a method and an apparatus for determining an information acquisition time difference, an electronic device, and a storage medium.
Background
Calibration is a fundamental problem in fields such as computer vision. An autonomous vehicle is equipped with inertial measurement devices such as gyroscopes and image acquisition devices such as cameras: the gyroscope acquires information such as the angular velocity of motion and transmits it to a processor system, and the camera transmits the acquired image information to the processor system. At present, calibrating an image acquisition device against an inertial measurement device requires a hardware connection between the two, which limits the diversity of system design; the dependence on hardware logic also restricts the application of the calibration scheme in different scenarios.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. The embodiment of the disclosure provides a method and a device for determining information acquisition time difference, electronic equipment and a storage medium.
According to an aspect of the embodiments of the present disclosure, there is provided an information acquisition time difference determining method, including: receiving image information acquired by an image acquisition device, and acquiring first angle increment information between image frames based on the image information; receiving inertia measurement information acquired by an inertia measurement device, and acquiring second angle increment information based on the inertia measurement information; determining information acquisition time difference information between the image acquisition device and the inertial measurement device based on the first angle increment information and the second angle increment information.
According to another aspect of the embodiments of the present disclosure, there is provided an information acquisition time difference determining apparatus including: the first increment obtaining module is used for receiving image information collected by the image collecting device and obtaining first angle increment information between image frames based on the image information; the second increment obtaining module is used for receiving the inertia measurement information collected by the inertia measurement device and obtaining second angle increment information based on the inertia measurement information; and the time difference calibration module is used for determining information acquisition time difference information between the image acquisition device and the inertial measurement unit based on the first angle increment information and the second angle increment information.
According to yet another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above-mentioned method.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; the processor is used for executing the method.
Based on the method and device for determining the information acquisition time difference, the electronic device and the storage medium provided by the embodiments of the present disclosure, the information acquisition time difference between the image acquisition device and the inertial measurement device is determined based on the angle increment information, hardware modification is not required, hardware logic is not required, and driving coordination is not required to implement synchronization.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally indicate like parts or steps.
FIG. 1 is a schematic diagram of a scenario to which the present disclosure is applicable;
FIG. 2 is a flow chart of one embodiment of an information acquisition time difference determination method of the present disclosure;
FIG. 3 is a flow diagram for one embodiment of the present disclosure to obtain first angular delta information between image frames;
FIG. 4 is a flow chart of one embodiment of the present disclosure for determining information acquisition time difference;
FIG. 5 is a schematic illustration of a first angle increment and a second angle increment of the present disclosure without using information acquisition time differences for a fusion process;
FIG. 6 is a schematic diagram of a first angle increment and a second angle increment for a fusion process using information acquisition time differences according to the present disclosure;
fig. 7 is a schematic structural diagram of an embodiment of an information acquisition time difference determining apparatus according to the present disclosure;
FIG. 8 is a block diagram of one embodiment of an electronic device of the present disclosure.
Detailed Description
Example embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those skilled in the art that terms such as "first" and "second" in the embodiments of the present disclosure are used merely to distinguish one element from another, and imply neither a particular technical meaning nor a necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more than two and "at least one" may refer to one, two or more than two.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association between objects, indicating that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" in the present disclosure generally indicates an "or" relationship between the objects before and after it.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Embodiments of the present disclosure may be implemented in electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with an electronic device such as a terminal device, computer system, or server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment. In a distributed cloud computing environment, tasks may be performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
In the course of implementing the present disclosure, the inventor found that a fixed time delay arises in the sampling, processing, and transmission of information by the inertial measurement device (which may be a gyroscope or the like). In scenarios such as high-speed driving, the inertial measurement device can measure vehicle-body shake in real time, so that the amount of image shake of a target caused by vehicle vibration and the like can be compensated and eliminated. Accurately obtaining the difference between the data sampling time of the inertial measurement unit and the image data sampling time therefore helps improve the accuracy of this compensation.
The inertia measurement device performs processing such as sampling measurement and filter calculation on a motion angular acceleration or an angular velocity of a vehicle or the like, and transmits a processed data sample to an Application Processor (AP). The application processor stamps the data samples with the system time stamp when the data samples transmitted by the inertial measurement unit are received, but the application processor cannot accurately determine the actual sampling time of the inertial measurement unit.
The image acquisition device transmits acquired image data to the application processor, and the application processor stamps a system time stamp on each frame of image when receiving the image data, but the application processor cannot accurately determine the sampling time of each frame of image. Therefore, the application processor cannot accurately determine information acquisition time difference information between the image acquisition device and the inertial measurement unit.
At present, a common approach is to connect the image acquisition device and the inertial measurement unit by hardware so that the information acquisition time difference between them can be determined. For example, the GPIO (General-Purpose Input/Output) pin of the CMOS sensor in the image acquisition device that receives the exposure signal is connected to the Sync pin of the gyroscope sensor; the gyroscope's MCU records the sample position corresponding to the exposure time point, and an application processor connected to the gyroscope's MCU through a bus can read the gyroscope data sampling time and the image data sampling time. However, because the image capturing device and the inertial measurement unit are connected by hardware, the inertial measurement unit usually has to be designed inside the image capturing device, which limits the diversity of system design; the dependence on hardware logic also restricts the application of the calibration scheme in different scenarios.
The information acquisition time calibration method can determine the acquisition time difference between the inertia measurement device and the image acquisition device, and can solve the problem of time synchronization of sampling of the inertia measurement device and the image acquisition device, thereby ensuring the accuracy of fusion and compensation of data acquired by the inertia measurement device and the image acquisition device.
Exemplary System
In one embodiment, as shown in fig. 1, the image capturing device may take various forms, such as a camera module 101. The camera module 101 includes a vision sensor based on CMOS or CMOS + ISP (Image Signal Processing) for visual imaging; the imaging result may be transmitted to the application processor 103 over DVP (Digital Video Port), MIPI (Mobile Industry Processor Interface), LVDS (Low Voltage Differential Signaling), AHD (Analog High Definition), or the like.
The inertial measurement unit may also take various forms, such as a gyroscope 102. The gyroscope sensor of the gyroscope 102 is a MEMS (Micro-Electro-Mechanical System) device; a microprocessor unit built into the gyroscope 102 may perform sampling measurement, filtering calculation, and the like on the angular velocity of a moving object, may measure x/y/z three-axis angular velocity, and transmits the measurement result to the application processor 103 over I2C (Inter-Integrated Circuit), SPI (Serial Peripheral Interface), UART (Universal Asynchronous Receiver/Transmitter), or the like.
The application processor 103 may also take various forms, such as a CPU; a calibration program 106, a first driver 104, and a second driver 105 run on the application processor 103. The first driver 104 is the driver of the camera module 101; it initializes and controls the operation of the camera module 101 and receives image frame data transmitted by the camera module 101 over a DVP/MIPI or similar bus. The second driver 105 is the driver of the gyroscope 102; it initializes and controls the operation of the gyroscope 102 and reads/receives angular velocity sample data collected by the gyroscope 102 over an SPI/UART/I2C or similar bus.
The calibration program 106 obtains image frame data and angular velocity sample data by calling the interfaces of the first driver 104 and the second driver 105, calculates an acquisition time difference between image frame data acquired by the camera module 101 and angular velocity sample data acquired by the gyroscope 102 according to a preset information acquisition time difference determination method, and performs fusion processing on the image frame data and the angular velocity sample data based on the acquisition time difference.
Exemplary method
Fig. 2 is a flowchart of an embodiment of the information acquisition time difference determining method of the present disclosure, where the method shown in fig. 2 includes the steps of: s201, S202 and S203. The following describes each step.
S201, receiving image information collected by an image collecting device, and obtaining first angle increment information between image frames based on the image information.
The image acquisition device may be a camera mounted on a vehicle, and the image information it acquires may be images captured in a driving scene. The first angle increment information may be angle increment information such as the Pitch (pitch) angle or Yaw (yaw) angle.
S202, receiving inertia measurement information collected by the inertia measurement device, and obtaining second angle increment information based on the inertia measurement information.
The inertial measurement device may be a gyroscope mounted on the vehicle, the inertial measurement information may be angular velocity sample data collected by the gyroscope, and the second angle increment information may be angle increment information such as the Pitch angle or Yaw angle.
And S203, determining information acquisition time difference information between the image acquisition device and the inertial measurement unit based on the first angle increment information and the second angle increment information.
Fig. 3 is a flowchart of one embodiment of the present disclosure for obtaining first angle delta information between image frames, the method shown in fig. 3 comprising the steps of: s301, S302, and S303. The following describes each step.
S301, determining optical flow information of target pixel points in adjacent image frames acquired by the image acquisition device.
Optical flow is the instantaneous velocity, on the imaging plane, of the pixel motion of a moving object in space. Optical-flow methods use the temporal change of pixels in an image sequence, together with the correlation between adjacent frames, to find the correspondence between the previous frame and the current frame and thereby compute the motion of objects between adjacent frames. Optical flow arises from the movement of foreground objects in the scene, movement of the camera, or both. Target pixel points may be designated in the image frame; for example, a target area is set in the frame, and the target pixel points are the pixels inside that area.
S302, determining a first displacement of a target pixel point in an adjacent image frame based on optical flow information.
The first displacement of a target pixel point is obtained from its optical flow between adjacent image frames, and the first angle increment information between the frames can then be obtained from this displacement.
And S303, determining first angle increment information according to the first displacement based on the internal reference of the image acquisition device.
The camera's intrinsic parameters determine the projection from three-dimensional space to the two-dimensional image; they include the intrinsic matrix, the distortion parameters, and so on. The intrinsic parameters can be calibrated with an existing method, such as Zhang Zhengyou's calibration method.
Various methods may be used to obtain the first angle increment information between image frames from the first displacement. For example, in a driving scene, an image frame sequence acquired by the image acquisition device is obtained; a horizontal center line is set in each image frame, dividing the frame into an upper half and a lower half; a target area is selected on each side of the center line; and a sparse optical flow method or the like is used to compute the optical flow value p_k (in pixels) of each target pixel point in the target area between adjacent image frames i and i+1. Sparse optical flow is a method for image registration of sparse points: given several points (usually corner points) on a reference image, their corresponding points are found in the current image.

The direction in which the vehicle body shakes most during driving is selected, for example the up-down direction (Pitch angle) of the image frames acquired by the image acquisition device, and the up-down displacement of the target pixel points is computed. Various methods can be used for this. For example, the optical flow information in the target area is clustered with the k-means method, the optical flow values p'_k of the N target pixel points in the largest cluster are selected from the M target pixel points, and the up-down displacement of the target pixel points is their mean:

    d_i = (1/N) Σ_{k=1}^{N} p'_k

Combining the intrinsic parameters of the camera module, the optical flow of the target pixel points between adjacent frames is converted into a pitch-angle increment (the first angle increment) between adjacent frames:

    d_img[i] = arctan(d_i / f_y)    (1-1)

where f_y is the y-direction focal length (in pixels) of the camera intrinsics.
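As a sketch of this conversion (assuming a simple pinhole model; the function name and the direct use of the cluster mean are illustrative, not taken from the patent), the mean vertical optical flow of the selected points can be turned into a pitch increment like this:

```python
import math

def pixel_flow_to_pitch_increment(flow_values, f_y):
    """Convert vertical optical-flow values (pixels) of the selected target
    pixel points between two frames into a pitch-angle increment (radians).

    flow_values: clustered optical-flow values p'_k of the N chosen points.
    f_y: y-direction focal length of the camera intrinsics, in pixels.
    """
    if not flow_values:
        raise ValueError("need at least one optical-flow value")
    mean_shift = sum(flow_values) / len(flow_values)  # mean up-down displacement d_i
    return math.atan2(mean_shift, f_y)                # pitch increment d_img[i]
```

With f_y = 1000 pixels, a 10-pixel average shift corresponds to roughly a 0.01 rad pitch increment.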
In one embodiment, the inertial measurement information includes angular velocity information and the like. Angular velocity information acquired by the inertial measurement device is obtained, and the second angle increment information is obtained by integrating it: a plurality of angular velocity samples are acquired, and digital integration over their acquisition period yields the second angle increment information, which may be angle increment information such as the Pitch angle or Yaw angle.
Fig. 4 is a flowchart of an embodiment of determining an information collection time difference according to the present disclosure, and the method shown in fig. 4 includes the steps of: s401, S402, S403, and S404. The following will explain each step.
S401, determining a first time interval according to two timestamps corresponding to two image frames, and obtaining first angle increment information in the first time interval.
In one embodiment, the image acquisition device is a camera and the inertial measurement device is a gyroscope, and a data acquisition period (e.g., 10 or 15 seconds) can be preset. The image frame sequence acquired by the camera during the data acquisition period is obtained by calling the first driver interface, and each image frame is stamped with a system timestamp whose precision is generally better than 0.2 ms. The angular velocity sample sequence acquired by the gyroscope during the data acquisition period is obtained by calling the second driver interface, and each angular velocity sample is stamped with a system timestamp whose precision is generally better than 1 ms.
The frame rate at which the camera acquires images is typically 30 to 60 frames per second; for example, 30 fps. An image frame sequence acquired by the camera is obtained, each frame stamped with a system timestamp. The time interval between the timestamps of two image frames is taken as a first time interval, and the first angle increment information between those frames within that interval is computed; the two frames may be adjacent, or one or more frames apart. For example, a plurality of first time intervals [T_i, T_{i+1}] are set from the timestamps of all pairs of adjacent image frames in the sequence, and for all adjacent frames F_i and F_{i+1} the angle increment d_img[i] in the corresponding first time interval is computed; d_img[i] is increment information for the Pitch angle or the like.
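A minimal sketch of this step (the helper name is illustrative, not from the patent): the first time intervals are simply consecutive timestamp pairs, one per adjacent frame pair.

```python
def first_intervals(frame_timestamps):
    """Build the first time intervals [T_i, T_{i+1}] from the system
    timestamps of consecutive image frames; one first-angle increment
    d_img[i] is computed per interval."""
    return list(zip(frame_timestamps[:-1], frame_timestamps[1:]))
```

At 30 fps the timestamps are spaced about 33 ms apart, so each interval spans roughly one frame period.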
S402, constructing a second time interval corresponding to the first time interval based on the time difference variable, and obtaining second angle increment information in the second time interval.
In one embodiment, the acquisition frequency of the gyroscope is 100 to 1000 Hz, i.e., the gyroscope acquires 100 to 1000 angular velocity samples per second. A time difference variable δ is introduced to construct a second time interval [T_i - δ, T_{i+1} - δ] corresponding to each first time interval; the second angle increment information over [T_i - δ, T_{i+1} - δ], which may be increment information such as the Pitch angle, is computed from the angular velocity samples collected by the gyroscope by digital integration.
For example, with an image frame rate of 30 fps and a gyroscope acquisition frequency of 1000 Hz, about 33 angular velocity samples fall within each second time interval [T_i - δ, T_{i+1} - δ], and the second angle increment information is:

    d_gyro[i] = Σ_k g[k] · Δt    (1-2)

where g[k] is the k-th angular velocity sample and Δt is the gyroscope sampling period (1 ms at 1000 Hz). When the time difference variable δ is 0, the initially computed first angle increment information and second angle increment information are as shown in fig. 5.
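A sketch of the digital integration above (rectangular rule; the function name is an illustrative assumption):

```python
def gyro_angle_increment(samples, rate_hz):
    """Second angle increment d_gyro[i] over one interval: rectangular
    integration of angular-velocity samples g[k] (rad/s) acquired at
    rate_hz, i.e. sum_k g[k] * (1 / rate_hz)."""
    dt = 1.0 / rate_hz  # sampling period, e.g. 1 ms at 1000 Hz
    return sum(samples) * dt
```

For instance, 33 samples of a constant 1 rad/s at 1000 Hz integrate to an increment of about 0.033 rad over one 30-fps frame interval.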
S403, establishing an objective function based on the first angle increment information in the first time interval and the second angle increment information in the second time interval, solving an optimal value of the time difference variable based on the objective function, and setting the optimal value as information acquisition time difference information.
A variety of objective functions may be established. For example, the objective function is established as the sum of the squares of the differences between the first angle increment information and the corresponding second angle increment information, that is, the objective function is established as follows:
    F(δ) = Σ_i (d_img[i] - d_gyro[i])²    (1-3)
By solving min_δ Σ_i (d_img[i] - d_gyro[i])², the value of the time difference variable that minimizes the objective function is obtained as the optimal value. The optimal time difference variable δ is then set as the information acquisition time difference information. The objective function may be solved with various methods, such as standard Newton iteration or gradient descent.
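A hedged sketch of the search for δ: the patent mentions Newton iteration and gradient descent, while this illustration uses a simple grid search instead, and the callback `gyro_increment_at` is an assumed interface returning d_gyro[i] for the shifted interval [T_i - δ, T_{i+1} - δ].

```python
def best_time_offset(d_img, gyro_increment_at, candidates):
    """Return the delta from `candidates` that minimizes the objective
    F(delta) = sum_i (d_img[i] - d_gyro[i](delta))**2, i.e. equation (1-3)."""
    def objective(delta):
        return sum((d_img[i] - gyro_increment_at(i, delta)) ** 2
                   for i in range(len(d_img)))
    return min(candidates, key=objective)
```

With a sufficiently fine candidate grid this recovers the same optimum that an iterative solver would find, at the cost of more objective evaluations.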
And S404, fusing the image information and the inertia measurement information based on the information acquisition time difference information.
For example, if the optimal time difference variable δ is found to be 0.039 seconds, the first angle increment information and the second angle increment information are fused using δ = 0.039 s; the fused data are shown in fig. 6.
In one embodiment, the computed information acquisition time difference information between the image acquisition device and the inertial measurement device is stored, and the image frame data acquired by the image acquisition device are synchronously aligned with the angular velocity sample data acquired by the inertial measurement device according to this time difference. Synchronously aligning the image frame samples and the inertial measurement samples using the information acquisition time difference improves the precision of downstream data fusion, eliminates the amount of image shake of a target caused by vehicle vibration and the like, makes image-based target tracking and distance measurement more accurate, and can improve the AR fusion experience of an AR-HUD.
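The alignment step can be sketched as follows (a minimal illustration: adding δ to the gyroscope timestamps follows the shifted interval [T_i - δ, T_{i+1} - δ] above, and nearest-neighbour pairing is an assumed strategy, not stated in the patent):

```python
def align_samples(frame_timestamps, gyro_timestamps, delta):
    """Shift each gyroscope system timestamp by the calibrated offset delta
    so both sensors share one timeline, then pair every image frame with
    the index of the nearest corrected gyroscope sample."""
    corrected = [t + delta for t in gyro_timestamps]
    pairs = []
    for ts in frame_timestamps:
        nearest = min(range(len(corrected)),
                      key=lambda k: abs(corrected[k] - ts))
        pairs.append((ts, nearest))
    return pairs
```

In practice one would interpolate between the two bracketing gyroscope samples rather than snap to the nearest, but the timestamp correction is the essential step.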
Exemplary devices
In one embodiment, as shown in fig. 7, the present disclosure provides an information acquisition time difference determination apparatus, including: a first increment obtaining module 701, a second increment obtaining module 702 and a time difference calibration module 703.
The first increment obtaining module 701 receives image information collected by an image acquisition device, and obtains first angle increment information between image frames based on the image information. The second increment obtaining module 702 receives inertia measurement information collected by an inertia measurement device, and obtains second angle increment information based on the inertia measurement information. The time difference calibration module 703 determines information acquisition time difference information between the image acquisition device and the inertia measurement device based on the first angle increment information and the second angle increment information, and performs fusion processing on the image information and the inertia measurement information based on the information acquisition time difference information.
In one embodiment, the first increment obtaining module 701 determines optical flow information of a target pixel point in adjacent image frames acquired by the image acquisition device, determines a first displacement of the target pixel point in the adjacent image frames based on the optical flow information, and obtains the first angle increment information according to the first displacement. The first increment obtaining module 701 can determine the first angle increment information according to the first displacement based on the intrinsic parameters of the image acquisition device. The inertia measurement information includes angular velocity information and the like. The second increment obtaining module 702 obtains angular velocity information acquired by the inertia measurement device, and performs an integral operation on the angular velocity information to obtain the second angle increment information.
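For the conversion from pixel displacement to an angle increment using the camera intrinsics, a common small-rotation approximation (offered here as an assumed model, since the document does not spell out the formula) treats a pure rotation about an axis through the camera center as shifting a pixel near the principal point by roughly f·tan(Δθ) pixels:

```python
import math

def displacement_to_angle_increment(dx_px, dy_px, fx, fy):
    """Approximate rotation increments (radians) about the camera's
    y and x axes from an optical-flow displacement in pixels, given
    the focal lengths fx, fy in pixels (camera intrinsic parameters)."""
    yaw_increment = math.atan(dx_px / fx)
    pitch_increment = math.atan(dy_px / fy)
    return yaw_increment, pitch_increment
```

For example, a 14-pixel horizontal shift with an 800-pixel focal length corresponds to a yaw increment of about 0.0175 rad (roughly one degree). The first displacement itself would come from sparse optical flow (e.g. Lucas-Kanade tracking of the target pixel between adjacent frames).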
In one embodiment, the first increment obtaining module 701 determines a first time interval according to two timestamps corresponding to two image frames, and obtains first angle increment information within the first time interval. The second increment obtaining module 702 constructs a second time interval corresponding to the first time interval based on the time difference variable, and obtains second angle increment information within the second time interval. The time difference calibration module 703 establishes an objective function based on the first angle increment information in the first time interval and the second angle increment information in the second time interval, solves for the optimal value of the time difference variable based on the objective function, and sets the optimal value as the information acquisition time difference information.
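The second-increment step, integrating angular velocity over the second time interval [t0+δ, t1+δ], can be sketched as follows. The sampling density and the interpolation scheme are assumptions; the document only specifies an integral operation on the angular velocity information:

```python
import numpy as np

def second_angle_increment(t0, t1, delta, gyro_times, gyro_rates, n=100):
    """Integrate sampled angular velocity over the shifted window
    [t0 + delta, t1 + delta] (trapezoid rule on interpolated samples)
    to obtain the gyro-side angle increment for one frame interval."""
    t = np.linspace(t0 + delta, t1 + delta, n)
    w = np.interp(t, gyro_times, gyro_rates)
    return float(np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(t)))
```

For a constant angular velocity of 0.5 rad/s and a 33 ms frame interval, the increment is 0.5 × 0.033 = 0.0165 rad regardless of δ; with a time-varying rate, shifting the window by δ changes the result, which is what makes the objective function sensitive to the time offset.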
The time difference calibration module 703 establishes the objective function as the sum of squares of the differences between the first angle increment information and the corresponding second angle increment information, obtains the value of the time difference variable that minimizes the objective function, and takes that value of the time difference variable as the optimal value.
Fig. 8 is a block diagram of one embodiment of an electronic device of the present disclosure, as shown in fig. 8, the electronic device 81 includes one or more processors 811 and memory 812.
The processor 811 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 81 to perform desired functions.
Memory 812 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory, for example, may include: random Access Memory (RAM) and/or cache memory (cache), etc. The nonvolatile memory, for example, may include: read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 811 to implement the information collection time difference determination methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 81 may further include an input device 813, an output device 814, and the like, which are interconnected by a bus system and/or another form of connection mechanism (not shown). The input device 813 may include, for example, a keyboard, a mouse, and the like. The output device 814 may output various information to the outside, and may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto.
Of course, for simplicity, only some of the components of the electronic device 81 relevant to the present disclosure are shown in fig. 8, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 81 may include any other suitable components, depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the information collection time difference determination methods according to various embodiments of the present disclosure described in the "exemplary methods" section of this specification, above.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the information collection time difference determination method according to various embodiments of the present disclosure described in the "exemplary method" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium may include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the information acquisition time difference determining method and device, the electronic device, and the storage medium of the above embodiments, the information acquisition time difference information between the image acquisition device and the inertia measurement device is determined based on angle increment information. No hardware modification is required, no hardware logic is relied upon, and no driver cooperation is needed to achieve synchronization, so the calibration scheme of the present disclosure also applies to sensors that do not support a hardware synchronization function. The image frame data collected by the image acquisition device can be fused with the angular velocity sample data collected by the inertia measurement device according to the information acquisition time difference information, which eliminates the shaking of the target on the image caused by vehicle shaking and the like, makes image-based target tracking and distance measurement more accurate, and can improve the AR fusion experience of an AR-HUD.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given as illustrative examples only and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects, and the like, will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (9)

1. An information acquisition time difference determination method includes:
receiving image information acquired by an image acquisition device, and acquiring first angle increment information between image frames based on the image information;
receiving inertia measurement information acquired by an inertia measurement device, and acquiring second angle increment information based on the inertia measurement information;
determining information acquisition time difference information between the image acquisition device and the inertial measurement device based on the first angle increment information and the second angle increment information, including:
determining a first time interval according to two timestamps corresponding to two image frames, and obtaining first angle increment information in the first time interval;
constructing a second time interval corresponding to the first time interval based on the time difference variable, and obtaining second angle increment information in the second time interval;
establishing an objective function based on first angle increment information in the first time interval and second angle increment information in the second time interval, solving an optimal value of the time difference variable based on the objective function, and setting the optimal value as the information acquisition time difference information.
2. The method of claim 1, the obtaining first angular delta information between image frames based on the image information comprising:
determining optical flow information of target pixel points in adjacent image frames acquired by the image acquisition device;
determining a first displacement of a target pixel point in the adjacent image frame based on the optical flow information;
and obtaining the first angle increment information according to the first displacement.
3. The method of claim 2, the obtaining the first angle increment information according to the first displacement comprising:
and determining the first angle increment information according to the first displacement based on the intrinsic parameters of the image acquisition device.
4. The method of claim 1, the inertial measurement information comprising: angular velocity information; the obtaining second angle increment information based on the inertial measurement information comprises:
and obtaining angular velocity information acquired by the inertia measurement device, and performing integral operation on the angular velocity information to obtain second angle increment information.
5. The method of claim 1, the establishing an objective function based on the first angle increment information in the first time interval and the second angle increment information in the second time interval, and the solving the optimal value of the time difference variable based on the objective function comprising:
establishing the objective function as the sum of squares of differences between the first angle increment information and the corresponding second angle increment information;
obtaining a value of the time difference variable that minimizes the objective function, and taking the value of the time difference variable as the optimal value.
6. The method of claim 1, further comprising:
and fusing the image information and the inertia measurement information based on the information acquisition time difference information.
7. An information acquisition time difference determination apparatus comprising:
the first increment obtaining module is used for receiving image information collected by the image collecting device and obtaining first angle increment information between image frames based on the image information;
the second increment obtaining module is used for receiving the inertia measurement information collected by the inertia measurement device and obtaining second angle increment information based on the inertia measurement information;
the time difference calibration module is used for determining information acquisition time difference information between the image acquisition device and the inertial measurement unit based on the first angle increment information and the second angle increment information;
the first increment obtaining module is used for determining a first time interval according to two timestamps corresponding to two image frames and obtaining first angle increment information in the first time interval;
the second increment obtaining module is used for constructing a second time interval corresponding to the first time interval based on the time difference variable and obtaining second angle increment information in the second time interval;
the time difference calibration module is used for establishing an objective function based on first angle increment information in a first time interval and second angle increment information in a second time interval, solving an optimal value of a time difference variable based on the objective function, and setting the optimal value as information acquisition time difference information.
8. A computer-readable storage medium, the storage medium storing a computer program for executing the method of any of the preceding claims 1-6.
9. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method of any one of claims 1-6.
CN201910550267.0A 2019-06-24 2019-06-24 Information acquisition time difference determining method and device, electronic equipment and storage medium Active CN112129317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910550267.0A CN112129317B (en) 2019-06-24 2019-06-24 Information acquisition time difference determining method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910550267.0A CN112129317B (en) 2019-06-24 2019-06-24 Information acquisition time difference determining method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112129317A CN112129317A (en) 2020-12-25
CN112129317B true CN112129317B (en) 2022-09-02

Family

ID=73849056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910550267.0A Active CN112129317B (en) 2019-06-24 2019-06-24 Information acquisition time difference determining method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112129317B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102124598B1 (en) * 2013-09-30 2020-06-19 삼성전자주식회사 Image acquisition method and apparatus
JP6702323B2 (en) * 2015-07-22 2020-06-03 ソニー株式会社 Camera module, solid-state imaging device, electronic device, and imaging method
WO2017020150A1 (en) * 2015-07-31 2017-02-09 深圳市大疆创新科技有限公司 Image processing method, device and camera
CN106023192B (en) * 2016-05-17 2019-04-09 成都通甲优博科技有限责任公司 A kind of time reference real-time calibration method and system of Image-capturing platform
CN106251305B (en) * 2016-07-29 2019-04-30 长春理工大学 A kind of realtime electronic image stabilizing method based on Inertial Measurement Unit IMU
CN106534692A (en) * 2016-11-24 2017-03-22 腾讯科技(深圳)有限公司 Video image stabilization method and device
US10097757B1 (en) * 2017-03-24 2018-10-09 Fotonation Limited Method for determining bias in an inertial measurement unit of an image acquisition device
TWI640931B (en) * 2017-11-23 2018-11-11 財團法人資訊工業策進會 Image object tracking method and apparatus
CN108765498B (en) * 2018-05-30 2019-08-23 百度在线网络技术(北京)有限公司 Monocular vision tracking, device and storage medium
CN109166149B (en) * 2018-08-13 2021-04-02 武汉大学 Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN109194877B (en) * 2018-10-31 2021-03-02 Oppo广东移动通信有限公司 Image compensation method and apparatus, computer-readable storage medium, and electronic device

Also Published As

Publication number Publication date
CN112129317A (en) 2020-12-25

Similar Documents

Publication Publication Date Title
JP6743191B2 (en) Multi-sensor image stabilization technology
CN105933594B (en) Control equipment, picture pick-up device and control method
US8159541B2 (en) Image stabilization method and apparatus
EP2963491B1 (en) Shake amount detection device and imaging device
CN110035228B (en) Camera anti-shake system, camera anti-shake method, electronic device, and computer-readable storage medium
US20200264011A1 (en) Drift calibration method and device for inertial measurement unit, and unmanned aerial vehicle
US10419675B2 (en) Image pickup apparatus for detecting a moving amount of one of a main subject and a background, and related method and storage medium
US11042984B2 (en) Systems and methods for providing image depth information
JP6098874B2 (en) Imaging apparatus and image processing apparatus
CN110300263B (en) Gyroscope processing method and device, electronic equipment and computer readable storage medium
CN109922264B (en) Camera anti-shake system and method, electronic device, and computer-readable storage medium
JP7182020B2 (en) Information processing method, device, electronic device, storage medium and program
CN113112413A (en) Image generation method, image generation device and vehicle-mounted head-up display system
JP6098873B2 (en) Imaging apparatus and image processing apparatus
KR20220079978A (en) Calibration method and apparatus, processor, electronic device, storage medium
JP2019129410A (en) Monitoring camera, control method thereof, and program
CN114449173A (en) Optical anti-shake control method, device, storage medium and electronic equipment
EP3275173B1 (en) Image capture system with motion compensation
CN109040525B (en) Image processing method, image processing device, computer readable medium and electronic equipment
CN108260360B (en) Scene depth calculation method and device and terminal
CN110177213B (en) Gyroscope parameter adjusting method and device, terminal and computer readable storage medium
CN110720113A (en) Parameter processing method and device, camera equipment and aircraft
CN112129317B (en) Information acquisition time difference determining method and device, electronic equipment and storage medium
CN110113542B (en) Anti-shake method and apparatus, electronic device, computer-readable storage medium
CN108603752B (en) Deflection angle detection method and device and jitter compensation method and device for camera module of terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant