WO2022179555A1 - Anti-shake processing method and apparatus for video data, computer device, and storage medium - Google Patents

Anti-shake processing method and apparatus for video data, computer device, and storage medium

Info

Publication number
WO2022179555A1
Authority
WO
WIPO (PCT)
Prior art keywords
angular velocity
camera
rotation vector
coordinate system
data
Prior art date
Application number
PCT/CN2022/077636
Other languages
English (en)
French (fr)
Inventor
董鹏飞
陈聪
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司
Publication of WO2022179555A1 publication Critical patent/WO2022179555A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6812Motion detection based on additional sensors, e.g. acceleration sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction

Definitions

  • the present application relates to the field of computer technology, and in particular, to a method, device, computer equipment and storage medium for anti-shake processing of video data.
  • the traditional method requires fabricating a calibration target, and the delay it computes is a fixed value.
  • in practice, however, the delay between the camera and the inertial measurement unit may change over time.
  • when the delay changes, the fixed value computed by the traditional method becomes inaccurate, and clear video data cannot be obtained. How to compute the delay between the camera and the inertial measurement unit online, so as to improve the definition of the video data, is therefore a technical problem that currently needs to be solved.
  • a method for anti-shake processing of video data, comprising:
  • acquiring video data collected by a camera, and acquiring measurement data of an inertial measurement unit, where the measurement data includes an angular velocity measurement value;
  • calculating the rotation vector of the camera in the world coordinate system according to the video data;
  • performing fitting processing on the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera;
  • calculating the rotation vector of the inertial measurement unit in the world coordinate system according to the measurement data;
  • performing fitting processing on the rotation vector of the inertial measurement unit in the world coordinate system to obtain the calculated value of the angular velocity corresponding to the inertial measurement unit;
  • performing anti-shake processing on the video data according to the angular velocity measurement value, the angular velocity of the camera, and the calculated value of the angular velocity.
  • the video data includes multiple frames of images
  • the fitting process on the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera includes:
  • the continuous rotation vector is fitted and calculated to obtain the angular velocity of the camera.
  • calculating the target rotation vector corresponding to each frame of images according to the rotation vector of the camera in the world coordinate system, and obtaining the continuous rotation vector corresponding to the camera includes:
  • performing anti-shake processing on the video data according to the measured angular velocity, the angular velocity of the camera, and the calculated angular velocity includes:
  • Anti-shake processing is performed on the video data according to the target delay data.
  • the calculating the first delay data according to the angular velocity measurement value and the angular velocity of the camera includes:
  • calculating a first translation amount on the time axis between the angular velocity measurement value and the angular velocity of the camera;
  • performing optimization processing on the first translation amount to obtain the first delay data.
  • the calculating the second delay data according to the angular velocity measurement value and the angular velocity calculation value includes:
  • the calculating the target delay data corresponding to the video data according to the first delay data and the second delay data includes:
  • Difference processing is performed on the first delay data and the second delay data to calculate the target delay data corresponding to the video data.
  • the method further includes:
  • the rotation vector of the camera in the world coordinate system is calculated according to the target video segment.
  • An anti-shake processing device for video data comprising:
  • an acquisition module used for acquiring video data collected by the camera, and acquiring measurement data of the inertial measurement unit, where the measurement data includes an angular velocity measurement value;
  • a first calculation module configured to calculate the rotation vector of the camera in the world coordinate system according to the video data
  • a first fitting module configured to perform fitting processing on the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera
  • a second calculation module configured to calculate the rotation vector of the inertial measurement unit in the world coordinate system according to the measurement data
  • a second fitting module configured to perform fitting processing on the rotation vector of the inertial measurement unit in the world coordinate system to obtain the calculated value of the angular velocity corresponding to the inertial measurement unit;
  • An anti-shake module configured to perform anti-shake processing on the video data according to the measured angular velocity value, the angular velocity of the camera, and the calculated value of the angular velocity.
  • the video data includes multiple frames of images
  • the first fitting module is further configured to calculate the target rotation vector corresponding to each frame of image according to the rotation vector of the camera in the world coordinate system to obtain the continuous rotation vector corresponding to the camera, and to perform fitting calculation on the continuous rotation vector to obtain the angular velocity of the camera.
  • a computer device includes a memory and a processor, wherein the memory stores a computer program that can be executed on the processor, and when the processor executes the computer program, the steps in each of the foregoing method embodiments are implemented.
  • the above anti-shake processing method, apparatus, computer device, and storage medium for video data acquire the video data collected by the camera and the measurement data of the inertial measurement unit, where the measurement data includes the angular velocity measurement value, and calculate the rotation vector of the camera in the world coordinate system according to the video data.
  • the rotation vector of the camera in the world coordinate system is fitted to obtain the angular velocity of the camera, so there is no need to fabricate a calibration target.
  • the rotation vector of the camera in the world coordinate system, and in turn the angular velocity of the camera, can be calculated directly from the video data.
  • Fig. 1 is the application environment diagram of the anti-shake processing method of video data in one embodiment
  • FIG. 2 is a schematic flowchart of an anti-shake processing method for video data in one embodiment
  • FIG. 3 is a schematic flowchart of a step of fitting the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera in one embodiment
  • FIG. 4 is a schematic diagram of a rotation vector of a camera in a world coordinate system in one embodiment
  • Fig. 5 is the schematic diagram of the continuous rotation vector obtained after conversion in one embodiment
  • FIG. 6 is a schematic diagram of an image region in one embodiment
  • FIG. 7 is a structural block diagram of an apparatus for anti-shake processing of video data in one embodiment
  • FIG. 8 is a diagram of the internal structure of a computer device in one embodiment.
  • first, second, etc. may be used herein to describe various elements and parameters, but these elements and parameters are not limited by these terms. These terms are only used to distinguish a first element from another element or to distinguish one parameter from another parameter.
  • for example, the first delay data may be referred to as the second delay data
  • and, similarly, the second delay data may be referred to as the first delay data, without departing from the scope of this application. Both the first delay data and the second delay data are delay data, but they are not the same delay data.
  • the anti-shake processing method for video data provided by the present application can be applied to the application environment shown in FIG. 1 .
  • the camera 104 and the inertial measurement unit 106 are installed in the terminal 102 .
  • the terminal 102 acquires the video data collected by the camera 104 and the measurement data of the inertial measurement unit 106, where the measurement data includes the angular velocity measurement value, and calculates the rotation vector of the camera 104 in the world coordinate system according to the video data.
  • the terminal performs fitting processing on the rotation vector of the camera 104 in the world coordinate system to obtain the angular velocity of the camera 104, calculates the rotation vector of the inertial measurement unit 106 in the world coordinate system according to the measurement data, and performs fitting processing on that rotation vector to obtain the angular velocity calculation value corresponding to the inertial measurement unit 106; the terminal then performs anti-shake processing on the video data according to the angular velocity measurement value, the angular velocity of the camera 104, and the angular velocity calculation value.
  • the terminal 102 can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices.
  • a method for anti-shake processing of video data is provided, and the method is applied to the terminal in FIG. 1 as an example for description, including the following steps:
  • Step 202 acquiring video data collected by the camera, and acquiring measurement data of the inertial measurement unit, where the measurement data includes an angular velocity measurement value.
  • a camera and an inertial measurement unit are pre-installed in the terminal.
  • the camera is used for taking pictures or videos, and the camera may be one or more of a black-and-white camera, a color camera, a wide-angle camera, or a telephoto camera.
  • the camera can be a camera built into the electronic device or an external camera.
  • the inertial measurement unit is used to record the motion state of the terminal.
  • the inertial measurement unit may include a gyroscope and an accelerometer.
  • the gyroscope may also be called an angular velocity sensor, which is used to measure the angular velocity of the terminal when it is deflected and tilted.
  • the accelerometer is used to measure the acceleration of the terminal, so that the motion state of the terminal can be obtained by integrating the angular velocity and the acceleration.
  • the camera can send the collected video data to the terminal, and the inertial measurement unit will also send the measured measurement data to the terminal.
  • the video data refers to a continuous image sequence, and the video data may include images in a time sequence of multiple consecutive frames. A frame is the smallest visual unit in video data, and each frame in video data can correspond to an image.
  • the video data can refer to the video stream captured by the camera in real time, or it can be the captured video segment that contains rich texture scenes and does not contain sharp turns.
  • the measurement data may include angular velocity measurements as well as acceleration measurements.
  • the angular velocity measurements are obtained with gyroscopes in the inertial measurement unit.
  • the angular velocity measurement value refers to the angle through which the terminal turns per unit time and the direction of that rotation; the larger the angular velocity measurement value, the larger the rotation angle of the terminal and the stronger the jitter of the terminal.
  • Step 204 Calculate the rotation vector of the camera in the world coordinate system according to the video data.
  • the video data may include multiple frames of images.
  • the terminal parses the video data to obtain multiple frames of images in the video data.
  • the terminal calculates the rotation vector of the camera corresponding to each frame of image in the world coordinate system.
  • the terminal can use an existing SLAM (Simultaneous Localization and Mapping, simultaneous positioning and mapping) algorithm to extract the feature points of each frame of images, and obtain the feature points of each frame of images and the pixel coordinates of the feature points.
  • the feature points can be ORB (Oriented FAST and Rotated BRIEF) feature points.
  • the pixel coordinates of the feature points of each frame image are converted into world coordinates, and the SLAM algorithm then calculates the rotation vector of the camera in the world coordinate system according to the world coordinates of the feature points of each frame image.
  • existing visual SLAM algorithms may include at least one of the ORB-SLAM2 algorithm, ORB-SLAM algorithm, RTAB-Map algorithm, LSD-SLAM algorithm, DVO-SLAM algorithm, SVO algorithm, and the like.
  • the terminal may determine the camera coordinate system corresponding to the first frame of image as the world coordinate system.
  • the rotation vector of the camera in the world coordinate system corresponding to each frame of image refers to the rotation vector of the camera in the world coordinate system when collecting each frame of image.
  • the world coordinate system refers to the three-dimensional world coordinate system, which is used to describe the absolute coordinates of objects in three-dimensional space.
  • the camera coordinate system takes the optical center of the camera as its origin; the Zc axis coincides with the optical axis of the camera, is perpendicular to the imaging plane, and takes the shooting direction as its positive direction; the Xc axis and the Yc axis are parallel to the corresponding axes of the image physical coordinate system.
  • the rotation vector here refers to the rotation that converts the camera coordinate system of the camera to the world coordinate system.
  • the rotation vector is the vector whose direction is the rotation axis and whose magnitude is the rotation angle; it represents the rotation angle of the camera in the world coordinate system.
  • the rotation vector can be a 1x3 vector.
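As an illustrative sketch only (not part of the patent text): an axis-angle rotation vector of this kind, whose direction is the rotation axis and whose norm is the rotation angle, can be converted to a rotation matrix with Rodrigues' formula. All names below are hypothetical.

```python
import numpy as np

def rotvec_to_matrix(v):
    """Convert a 1x3 rotation vector (axis * angle) to a 3x3 rotation
    matrix via Rodrigues' formula."""
    theta = np.linalg.norm(v)          # rotation angle = vector norm
    if theta < 1e-12:                  # near-zero rotation -> identity
        return np.eye(3)
    n = v / theta                      # unit rotation axis
    K = np.array([[0, -n[2], n[1]],    # skew-symmetric cross-product matrix
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# A 90-degree rotation about the z-axis maps the x-axis onto the y-axis.
R = rotvec_to_matrix(np.array([0.0, 0.0, np.pi / 2]))
```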
  • the calculated rotation vector of the camera in the world coordinate system may not be accurate enough.
  • the terminal can select a video segment in the video data that contains richly textured scenes and does not contain sharp turns, and calculate the rotation vector of the camera in the world coordinate system from the selected segment, thereby improving the accuracy of the rotation vector of the camera in the world coordinate system.
  • Step 206 performing fitting processing on the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera.
  • the rotation vector of the camera in the world coordinate system calculated by the terminal from the video data may be discontinuous on the time axis; that is, the same timestamp may correspond to multiple rotation vectors.
  • the terminal can convert the rotation vector of the camera in the world coordinate system into a continuous rotation vector, perform fitting processing on the converted continuous rotation vector to obtain a fitted curve, and then calculate from the fitted curve the angular velocity of the camera at each timestamp.
  • the terminal may use any one of multiple fitting methods, such as the B-spline curve method, the polynomial fitting method, or the Bezier curve fitting method, to fit the continuous rotation vector, and then calculate the angular velocity of the camera at each timestamp from the curve obtained after fitting.
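A minimal sketch (not from the patent) of the polynomial-fitting option mentioned above: fit one component of the continuous rotation vector over time, then differentiate the fitted curve to read off the angular velocity at any timestamp. The sample data are invented for illustration.

```python
import numpy as np

# Hypothetical per-frame timestamps (s) and one component of the
# continuous rotation vector (rad); the sine curve is a stand-in.
t = np.linspace(0.0, 1.0, 30)
rot = 0.5 * np.sin(2 * np.pi * t)

coeffs = np.polyfit(t, rot, deg=7)      # fit the rotation-vector component
omega_coeffs = np.polyder(coeffs)       # derivative = angular velocity curve

# Angular velocity of this component at timestamp t = 0.5 s.
# Exact derivative is pi * cos(2 * pi * 0.5) = -pi.
w_mid = np.polyval(omega_coeffs, 0.5)
```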
  • Step 208 Calculate the rotation vector of the inertial measurement unit in the world coordinate system according to the measurement data.
  • Step 210 Perform fitting processing on the rotation vector of the inertial measurement unit in the world coordinate system to obtain a calculated value of the angular velocity corresponding to the inertial measurement unit.
  • the measurement data may include angular velocity measurements and acceleration measurements from inertial measurement units.
  • the terminal performs integral operation on the angular velocity measurement value and the acceleration measurement value to obtain the rotation vector of the inertial measurement unit in the world coordinate system.
  • the rotation vector of the inertial measurement unit in the world coordinate system can be used to represent the rotation of the inertial measurement unit in the world coordinate system.
  • the terminal may use the existing IMU pre-integration method to calculate the rotation vector of the inertial measurement unit in the world coordinate system.
  • existing IMU pre-integration methods may include at least one of the extended Kalman filter algorithm, the OKVIS algorithm, the VINS-Mono algorithm, and the like.
  • after calculating the rotation vector of the inertial measurement unit in the world coordinate system, the terminal can fit it following the same steps used above to fit the rotation vector of the camera in the world coordinate system, obtaining the calculated value of the angular velocity corresponding to the inertial measurement unit.
  • the terminal may calculate the rotation vector of the camera in the world coordinate system and the rotation vector of the inertial measurement unit in the world coordinate system simultaneously, and then fit the two rotation vectors respectively to obtain the angular velocity of the camera and the calculated value of the angular velocity corresponding to the inertial measurement unit.
  • alternatively, the angular velocity of the camera and the calculated value of the angular velocity corresponding to the inertial measurement unit may be computed in sequence; the order of computation is not limited here.
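The integration of gyroscope measurements into a world-frame rotation, as in Step 208, can be sketched as follows. This assumes a simple first-order scheme rather than the full pre-integration algorithms named above, and all names are hypothetical.

```python
import numpy as np

def rotvec_to_matrix(v):
    """Axis-angle rotation vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    n = v / theta
    K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def integrate_gyro(omega_samples, dt):
    """First-order integration of gyroscope angular-velocity samples
    (rad/s) into a world-frame rotation matrix."""
    R = np.eye(3)
    for w in omega_samples:
        R = R @ rotvec_to_matrix(np.asarray(w) * dt)  # compose per-step rotation
    return R

# Constant 1 rad/s about z for 1 s (100 samples) -> 1 rad total about z.
samples = [[0.0, 0.0, 1.0]] * 100
R_total = integrate_gyro(samples, dt=0.01)
```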
  • Step 212 Perform anti-shake processing on the video data according to the measured angular velocity value, the angular velocity of the camera, and the calculated angular velocity value.
  • ideally, the magnitude of the angular velocity corresponding to the camera and that corresponding to the inertial measurement unit should be the same, but for system reasons there is a delay on the time axis between the data of the camera and the data of the inertial measurement unit.
  • the terminal can calculate the delay data between the camera and the inertial measurement unit on the time axis according to the angular velocity measurement value, the angular velocity of the camera, and the calculated angular velocity value, and align the video data with the measurement data of the inertial measurement unit according to the delay data, thereby performing anti-shake processing on the video data.
  • performing anti-shake processing on the video data according to the measured angular velocity value, the angular velocity of the camera, and the calculated angular velocity value includes: calculating first delay data according to the measured angular velocity value and the angular velocity of the camera; according to the measured angular velocity value and the calculated angular velocity value Calculate the second delay data; calculate target delay data corresponding to the video data according to the first delay data and the second delay data; perform anti-shake processing on the video data according to the target delay data.
  • specifically, the terminal can calculate the translation amount on the time axis between the angular velocity measurement value of the inertial measurement unit and the angular velocity of the camera, and determine this translation amount as the first delay data.
  • because the angular velocity measurement value of the inertial measurement unit is integrated, the calculated first delay data may contain a delay error.
  • the terminal can therefore calculate the translation amount on the time axis between the calculated angular velocity value of the inertial measurement unit and the angular velocity measurement value of the inertial measurement unit, and determine this translation amount as the second delay data; the second delay data represents the delay error.
  • the calculation process for the second delay data can be the same as that for the first delay data; it can be regarded as repeating the first-delay calculation with the angular velocity of the camera replaced by the calculated angular velocity value.
  • the terminal performs a calculation based on the first delay data and the second delay data to obtain the target delay data corresponding to the video data, realizing online calculation of the delay between the camera and the inertial measurement unit; the video data is then aligned with the measurement data of the inertial measurement unit according to the target delay data, completing the anti-shake processing and improving the clarity of the video data.
  • calculating the target delay data corresponding to the video data according to the first delay data and the second delay data includes: performing difference processing on the first delay data and the second delay data to calculate the target delay data corresponding to the video data.
  • the first delay data is the translation amount of the angular velocity measurement value of the inertial measurement unit and the angular velocity of the camera on the time axis
  • the second delay data is the delay error.
  • the terminal subtracts the second delay data from the first delay data to obtain the target delay data, with which the video data can be accurately aligned with the measurement data of the inertial measurement unit.
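The translation-on-the-time-axis idea can be sketched with a discrete cross-correlation. This is an assumption for illustration; the patent does not specify the optimization used, and all signals and the 30 ms delay below are invented.

```python
import numpy as np

def estimate_shift(a, b, dt):
    """Time by which signal b lags signal a, estimated from the peak of
    their discrete cross-correlation."""
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)   # lag in samples (b relative to a)
    return -lag * dt

dt = 0.005                                  # hypothetical 200 Hz sampling
t = np.arange(0.0, 2.0, dt)
pulse = lambda c: np.exp(-((t - c) ** 2) / 0.01)

imu_measured = pulse(1.00)                  # |angular velocity| from the gyro
camera = pulse(1.03)                        # same motion seen 30 ms later

first_delay = estimate_shift(imu_measured, camera, dt)
second_delay = 0.0   # shift of the IMU's fitted angular velocity vs. its own
target_delay = first_delay - second_delay   # difference processing
```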
  • in the above method, the video data collected by the camera and the measurement data of the inertial measurement unit are acquired, the measurement data including the angular velocity measurement value; the rotation vector of the camera in the world coordinate system is calculated according to the video data, and the angular velocity of the camera is obtained by fitting that rotation vector, so there is no need to fabricate a calibration target.
  • the rotation vector of the camera in the world coordinate system, and the angular velocity of the camera, can thus be calculated directly from the video data.
  • the steps of fitting the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera include:
  • Step 302 Calculate the target rotation vector corresponding to each frame of image according to the rotation vector of the camera in the world coordinate system, and obtain the continuous rotation vector corresponding to the camera.
  • Step 304 Perform fitting calculation on the continuous rotation vector to obtain the angular velocity of the camera.
  • Video data includes multiple frames of images.
  • the rotation vector of the camera in the world coordinate system calculated by the terminal includes the rotation vector corresponding to each frame of image.
  • the terminal converts the calculated rotation vector of the camera in the world coordinate system into a continuous rotation vector.
  • a continuous rotation vector means that the rotation vectors on the time axis are all continuous.
  • each frame of image has a corresponding timestamp.
  • for each timestamp, the terminal can determine the corresponding target rotation vector from the rotation vector of the camera in the world coordinate system; the target rotation vector for a timestamp represents the target rotation vector of the frame image corresponding to that timestamp.
  • the continuous rotation vector is obtained from the target rotation vectors of all timestamps.
  • FIG. 4 is a schematic diagram of the rotation vector of the camera in the world coordinate system, where the time series refers to the time axis, and the timestamp corresponding to each frame of image can be determined on the time axis.
  • at the timestamp of each frame image, the rotation vector is represented by a three-dimensional vector composed of its x-axis, y-axis, and z-axis components.
  • FIG. 5 shows the continuous rotation vector obtained after conversion in one embodiment.
  • the terminal can use any of the B-spline curve method, the polynomial fitting method, the Bezier curve fitting method, or other fitting methods to fit the continuous rotation vector, and then calculate the angular velocity of the camera at each timestamp from the curve obtained after fitting. The following takes using the B-spline curve method to fit the continuous rotation vector as an example.
  • the terminal fits the continuous rotation vector according to the continuous rotation vector, the timestamps corresponding to the continuous rotation vector, and a preset function to obtain a fitted curve, and then calculates the angular velocity of the camera from the fitted curve.
  • the preset function may be a k-order B-spline basis function.
  • the fitted curve is a smooth k-order B-spline curve.
  • the fitted k-order B-spline curve can be expressed in the following form: V(t) = Σ_i v_i · V_{i,k}(t)
  • V(t) represents the fitted k-order B-spline curve
  • v_i represents a control point, that is, one of the rotation vectors above
  • V_{i,k}(t) represents the k-order B-spline basis function
  • V_{i,k}(t) is calculated using the following recursive formula (the Cox-de Boor recursion): V_{i,0}(t) = 1 if t_i <= t < t_{i+1}, and 0 otherwise; V_{i,k}(t) = ((t - t_i) / (t_{i+k} - t_i)) · V_{i,k-1}(t) + ((t_{i+k+1} - t) / (t_{i+k+1} - t_{i+1})) · V_{i+1,k-1}(t)
  • t_i represents a node, that is, one of the timestamps above
  • i represents the index (the i-th)
  • k represents the order of the fitted B-spline curve
  • differentiating the fitted curve gives the angular velocity of the camera; the control points of the derivative curve are w_i = k · (v_{i+1} - v_i) / (t_{i+k+1} - t_{i+1}), where w_i represents the angular velocity of the camera and v_{i+1}, v_i are adjacent rotation vectors (control points)
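An illustrative implementation (not from the patent) of the standard Cox-de Boor recursion for the B-spline basis functions mentioned above, with hypothetical names; terms with zero denominators are treated as zero, as is conventional:

```python
import numpy as np

def basis(i, k, t, knots):
    """k-order B-spline basis V_{i,k}(t) via the Cox-de Boor recursion.
    k = 0 is the piecewise-constant base case; 0/0 terms are taken as 0."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * basis(i + 1, k - 1, t, knots))
    return left + right

# With uniform knots t_0..t_9, the cubic (k = 3) basis functions form a
# partition of unity on [t_3, t_6]: they sum to 1 at any t in that span.
knots = np.arange(10.0)
s = sum(basis(i, 3, 4.5, knots) for i in range(len(knots) - 4))
```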
  • the target rotation vector corresponding to each frame of image is calculated according to the rotation vector of the camera in the world coordinate system, and the continuous rotation vector corresponding to the camera is obtained, and the continuous rotation vector is fitted and calculated to obtain the angular velocity of the camera.
  • the discontinuous rotation vector is converted into a continuous rotation vector, which is conducive to fitting the rotation vector.
  • calculating the target rotation vector corresponding to each frame of image according to the rotation vector of the camera in the world coordinate system to obtain the continuous rotation vector corresponding to the camera includes: acquiring the current frame image; obtaining, from the rotation vector of the camera in the world coordinate system, the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image; calculating the target rotation vector corresponding to the current frame image according to the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image; updating the next frame image to be the current frame image and returning to the step of obtaining the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image, until the target rotation vector corresponding to the last frame image is calculated, thereby obtaining the continuous rotation vector corresponding to the camera.
  • the original rotation vector refers to the rotation vector calculated from the video data without conversion processing.
  • the target rotation vector refers to the rotation vector that has been converted to ensure that the obtained rotation vector is continuous.
  • the terminal may sequentially calculate the target rotation vector corresponding to each frame of images according to the sequence of time stamps corresponding to the multiple frames of images. Specifically, the terminal acquires the current frame image, and acquires the original rotation vector corresponding to the current frame image according to the timestamp corresponding to the current frame image. Before acquiring the current frame image, the original rotation vector corresponding to the previous frame image has been converted, and the rotation vector corresponding to the previous frame image is the target rotation vector. The terminal obtains the target rotation vector corresponding to the image of the previous frame.
  • the terminal calculates the rotation angle corresponding to the original rotation vector according to the first relational expression, and calculates the rotation axis corresponding to the original rotation vector according to the original rotation vector, its rotation angle, and the second relational expression.
  • the terminal then calculates the target integer value corresponding to the original rotation vector according to the rotation axis corresponding to the original rotation vector, the target rotation vector corresponding to the previous frame image, and the third relational expression; the target integer value is used to calculate the target rotation vector corresponding to the current frame image.
  • the terminal further calculates the target rotation vector corresponding to the current frame image according to the target integer value, the rotation angle corresponding to the original rotation vector, the rotation axis corresponding to the original rotation vector, and the fourth relational expression.
  • the first relational expression is the calculation formula for the rotation angle corresponding to the original rotation vector, as shown below:

θ_i = norm(v_i)  (5)

where θ_i represents the rotation angle corresponding to the original rotation vector, norm() represents the norm function, v_i represents the original rotation vector, and the subscript i denotes the i-th frame.
  • the second relational expression is the calculation formula for the rotation axis corresponding to the original rotation vector, as shown below:

N_i = v_i / θ_i  (6)

where N_i represents the rotation axis corresponding to the original rotation vector.
  • the third relational expression is the calculation formula for the target integer value, as follows:

n_i = argmin_n ‖(θ_i + 2nπ)·N_i − v′_{i−1}‖  (7)

where n_i represents the target integer value corresponding to the original rotation vector, v′_{i−1} represents the target rotation vector corresponding to the previous frame image, and argmin denotes the value of the variable at which the expression that follows reaches its minimum; formula (7) gives the value of n at which ‖(θ_i + 2nπ)·N_i − v′_{i−1}‖ is minimized.
  • the fourth relational expression is the calculation formula for the target rotation vector corresponding to the current frame image, as shown below:

v′_i = (θ_i + 2·n_i·π)·N_i  (8)

where v′_i represents the target rotation vector corresponding to the current frame image.
  • the terminal then obtains the next frame image, updates it to be the current frame image, and returns to the step of obtaining the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image from the rotation vectors of the camera in the world coordinate system; that is, according to formulas (5)-(8) above, the target rotation vector corresponding to the next frame image is calculated, until the target rotation vector corresponding to the last frame image is obtained, at which point the terminal has the continuous rotation vector corresponding to the camera.
  • in this embodiment, the current frame image is acquired, the target rotation vector corresponding to the current frame image is calculated from the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image, the next frame image is updated to be the current frame image, and the above calculation of the target rotation vector is repeated until the target rotation vector corresponding to the last frame image is obtained, yielding the continuous rotation vector corresponding to the camera.
  • with the target rotation vector at the timestamp corresponding to each frame of image, each rotation of the camera can be represented by a unique rotation vector, thereby ensuring the continuity of the rotation vectors on the time axis.
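The per-frame conversion described above can be sketched in a few lines. The helper below is an illustrative implementation (function and variable names are our own, not from the patent): it computes the rotation angle as the norm of each raw vector, normalizes to get the axis, and searches a small integer range for the multiple of 2π that keeps the sequence closest to the previous frame's target vector, as in formulas (5)-(8).

```python
import numpy as np

def make_continuous(rotation_vectors):
    """Convert per-frame rotation vectors into a sequence that is
    continuous on the time axis: each raw vector v_i is re-expressed as
    (theta_i + 2*n_i*pi) * N_i, with the integer n_i chosen so the result
    stays closest to the previous frame's target vector."""
    out = [np.asarray(rotation_vectors[0], dtype=float)]
    for v in rotation_vectors[1:]:
        v = np.asarray(v, dtype=float)
        theta = np.linalg.norm(v)          # rotation angle (formula (5))
        if theta < 1e-12:
            # Degenerate (near-identity) rotation: keep a zero vector.
            out.append(np.zeros(3))
            continue
        axis = v / theta                   # rotation axis (formula (6))
        prev = out[-1]
        # Integer minimising the distance to the previous target vector
        # (formula (7)); a small search range suffices in practice.
        n = min(range(-3, 4),
                key=lambda k: np.linalg.norm((theta + 2 * k * np.pi) * axis - prev))
        out.append((theta + 2 * n * np.pi) * axis)   # formula (8)
    return out
```

For example, a raw output that flips representation between frames (a 3.2 rad rotation about +x reported as a (2π − 3.2) rad rotation about −x) is mapped back onto the continuous branch near the previous frame's vector.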
  • calculating the first delay data according to the angular velocity measurement value and the angular velocity of the camera includes: resampling the angular velocity measurement values to obtain resampled angular velocity measurement values; performing a cross-correlation operation between the resampled angular velocity measurement values and the angular velocity of the camera to obtain a first translation amount; and optimizing the first translation amount to obtain the first delay data.
  • the inertial measurement unit does not sample at exactly equal intervals during measurement; therefore, the angular velocity measurement values need to be resampled at equal intervals.
  • there may be various ways of resampling, for example, polynomial interpolation, piecewise interpolation, cubic B-spline interpolation, and the like.
  • the terminal calculates the first delay data between the resampled angular velocity measurement value and the angular velocity of the camera. Specifically, the terminal may perform a cross-correlation operation on the resampled angular velocity measurement value and the angular velocity of the camera, and determine the first translation amount of the two groups of angular velocities on the time axis by maximizing the cross-correlation.
  • the calculation formula of the cross-correlation operation can be as follows:

Δt_0 = argmax_τ Σ_t ω_c(t)·ω_i(t + τ)  (9)

where Δt_0 represents the first translation amount, ω_c(t) represents the angular velocity of the camera at timestamp t, ω_i(t + τ) represents the angular velocity measurement value of the inertial measurement unit at timestamp t + τ, τ represents the time-delay variable between the resampled angular velocity measurement values and the angular velocity of the camera, the subscript i refers to the inertial measurement unit, and argmax denotes the value of the variable at which the expression that follows reaches its maximum; formula (9) gives the value of τ at which the cross-correlation is maximized.
  • the accuracy of the first translation amount depends on the sampling rate of the angular velocity measurement values. For example, at a sampling rate of 500 Hz the sampling interval is 2 ms, so the first translation amount obtained can only be accurate to 2 ms at best; this accuracy is insufficient for anti-shake processing of video data, and the first translation amount therefore needs to be optimized. The calculation formula used in the optimization is:

τ* = argmin_τ Σ_t ‖ω_c(t) − ω_i(t + τ)‖²  (10)
  • the terminal can substitute Δt_0 into the above formula as the initial value of τ, and use at least one of the existing nonlinear optimization methods, such as the Gauss-Newton method, gradient descent, the conjugate gradient method, or the LM (Levenberg-Marquardt) method, to solve formula (10) above, obtaining the final optimized translation amount as the first delay data.
  • resampling the angular velocity measurement values to obtain resampled angular velocity measurement values ensures that the subsequent cross-correlation operation is valid.
  • a cross-correlation operation is performed between the resampled angular velocity measurement value and the angular velocity of the camera to obtain the first translation amount, and the delay between the camera and the inertial measurement unit can be quickly calculated.
  • optimizing the first translation amount to obtain the first delay data improves the accuracy of the calculated delay between the camera and the inertial measurement unit, enabling effective anti-shake processing of the video data and thereby effectively improving its clarity.
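A minimal sketch may make the two-stage delay search concrete. Everything below is illustrative, not the patent's implementation: linear interpolation stands in for the resampling step, and a fine grid search over the shift stands in for the nonlinear optimizer (Gauss-Newton, LM, ...) that refines the cross-correlation peak of formulas (9)-(10).

```python
import numpy as np

def estimate_delay(t_cam, w_cam, t_imu, w_imu, search=0.05, step=0.0005):
    """Find the time shift tau that best aligns the camera angular-velocity
    signal (t_cam, w_cam) with the IMU signal (t_imu, w_imu): for each
    candidate tau, resample the IMU signal at the shifted camera timestamps
    and keep the tau that maximises the cross-correlation (dot product)."""
    taus = np.arange(-search, search + step, step)
    best_tau, best_score = 0.0, -np.inf
    for tau in taus:
        w_shifted = np.interp(t_cam + tau, t_imu, w_imu)  # resampled IMU signal
        score = float(np.dot(w_cam, w_shifted))           # cross-correlation
        if score > best_score:
            best_tau, best_score = tau, score
    return best_tau
```

With a synthetic 3 Hz sinusoid delayed by 10 ms on the IMU side, the search recovers a shift close to 10 ms even though the two signals are sampled on different grids.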
  • calculating the second delay data according to the angular velocity measurement value and the calculated angular velocity value includes: resampling the angular velocity measurement values to obtain resampled angular velocity measurement values; performing a cross-correlation operation between the resampled angular velocity measurement values and the calculated angular velocity values to obtain a second translation amount; and optimizing the second translation amount to obtain the second delay data.
  • the integral operation of the angular velocity measurement value may result in a delay error.
  • the terminal may perform resampling processing on the angular velocity measurement value to obtain the resampled angular velocity measurement value.
  • the method of resampling the angular velocity measurement values may be the same as in the above process of calculating the first delay data, for example, polynomial interpolation, piecewise interpolation, cubic B-spline interpolation, and the like.
  • the terminal may perform a cross-correlation operation between the resampled angular velocity measurement value and the angular velocity calculation value corresponding to the inertial measurement unit according to the above formula (9) to obtain the second translation amount.
  • ⁇ t 0 represents the second translation amount
  • ⁇ c (t) represents the calculated value of the angular velocity of the inertial measurement unit when the timestamp is t
  • ⁇ i (t+ ⁇ ) represents the angular velocity of the inertial measurement unit when the timestamp is t Measurements.
  • the terminal then optimizes the second translation amount according to formula (10) above to obtain the second delay data.
  • the terminal can substitute Δt_0 into the above formula as the initial value of τ, and use at least one of the existing nonlinear optimization methods, such as the Gauss-Newton method, gradient descent, the conjugate gradient method, or the LM (Levenberg-Marquardt) method, to solve formula (10) above, obtaining the final translation amount in the delay-error calculation as the second delay data.
  • in this embodiment, the terminal resamples the angular velocity measurement values, performs a cross-correlation operation between the resampled angular velocity measurement values and the calculated angular velocity values to obtain a second translation amount, and optimizes the second translation amount to obtain the second delay data. The second delay data captures the delay error introduced by the integral operation on the angular velocity measurement values during the calculation of the first delay data, which helps the target delay data accurately align the video data with the measurement data of the inertial measurement unit, so that the anti-shake processing further improves the clarity of the video data.
  • the above method further includes: determining non-rapidly-rotating video segments in the video data according to the measurement data; selecting a target video segment from the non-rapidly-rotating video segments; and calculating the rotation vector of the camera in the world coordinate system according to the target video segment.
  • the terminal can determine the non-rapid rotation video segment in the video data according to the measurement data.
  • the non-rapidly-rotating video segment may be a video segment during which the camera rotates by an angle less than or equal to 45 degrees.
  • the terminal may first identify the rapidly rotating video segment in the video data according to the measurement data, so as to determine the video segment other than the rapidly rotating video segment as the non-rapidly rotating video segment.
  • the terminal can perform an integral operation on the angular velocity measurement values in the measurement data within a 1 s time window to obtain the angle the camera has turned, and determine whether that angle is greater than 45 degrees; if it is, the camera is determined to be rotating rapidly.
  • the terminal can identify the rapidly rotating video segments in the video data through the above method, and determine the video segments other than the rapidly rotating video segments as the non-rapidly-rotating video segments.
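The 1 s window test described above can be sketched as follows; the function name, the sliding-window bookkeeping, and the boolean-mask output are our own illustrative choices, not details from the patent.

```python
import numpy as np

def find_non_rapid_mask(timestamps, gyro, window=1.0, max_deg=45.0):
    """Integrate the gyro angular-speed magnitude over a sliding window of
    `window` seconds; when the accumulated angle in a window exceeds
    `max_deg` degrees, mark every sample in that window as rapid rotation.
    Returns a boolean mask where True means 'non-rapidly rotating'."""
    timestamps = np.asarray(timestamps, dtype=float)
    speed = np.linalg.norm(np.asarray(gyro, dtype=float), axis=1)  # rad/s
    dt = np.diff(timestamps, prepend=timestamps[0])
    angle = np.cumsum(speed * dt)          # integrated rotation angle, rad
    non_rapid = np.ones(len(timestamps), dtype=bool)
    for i, t in enumerate(timestamps):
        j = np.searchsorted(timestamps, t - window)
        if np.degrees(angle[i] - angle[j]) > max_deg:
            non_rapid[j:i + 1] = False     # the whole window is rapid
    return non_rapid
```

A steady rotation of 1 rad/s turns about 57 degrees per second and is flagged as rapid, while 0.1 rad/s (about 5.7 degrees per second) is kept.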
  • the target video segment refers to the video segment that contains rich texture scenes and does not contain sharp turns.
  • the terminal performs feature point detection on each frame of image in the non-rapidly rotating video segment, and obtains each frame of image after detection.
  • existing detection methods can be used to detect feature points in images, for example, the SURF (Speeded-Up Robust Features) algorithm, the ORB (Oriented FAST and Rotated BRIEF) algorithm, the SIFT (Scale-Invariant Feature Transform) algorithm, and the like.
  • the terminal may divide each frame of image after detection into regions to obtain multiple image regions. Therefore, the terminal counts the number of feature points distributed in each image area, and calculates the standard deviation corresponding to each frame of image according to the counted number of feature points.
  • the terminal compares the standard deviation with a threshold; if the standard deviation is less than the threshold, the shooting scene corresponding to the current frame image is a rich texture scene. If the standard deviations within a preset time period are all less than the threshold, the shooting scenes within that period are all rich texture scenes; at this point the selection of the target video segment stops, the video data within the preset time period is determined to be the target video segment, and the start timestamp and end timestamp corresponding to the preset time period are output.
  • the preset time period can be 15 s. If the standard deviation of every frame image within 15 s is less than the threshold, the selection of the target video segment stops, the video data within the last 15 s is determined to be the target video segment, and the start timestamp and end timestamp corresponding to the target video segment are output.
  • alternatively, the terminal may search for the 15 s of video data containing the largest number of image frames whose standard deviation is less than the threshold, take the found 15 s of video data as the target video segment, and output the start timestamp and end timestamp corresponding to the target video segment.
  • the terminal then calculates the rotation vector of the camera in the world coordinate system according to the target video segment.
  • a frame of images may be extracted according to a preset time interval for feature point detection.
  • the preset time interval may be 1s.
  • the terminal may divide each detected image into 8 different image regions; a schematic diagram of the image regions may be as shown in FIG. 6.
  • in this embodiment, the target video segment is selected from the non-rapidly-rotating video segments, and the rotation vector of the camera in the world coordinate system is calculated according to the target video segment. This avoids the influence of rapidly rotating video segments, or of video data with insufficiently textured scenes, on the rotation vector corresponding to the camera, improves the accuracy of the camera's rotation vector in the world coordinate system, and enables accurate calculation of the delay between the camera and the inertial measurement unit.
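As a rough sketch of the texture-richness test, the helper below grids the frame, counts feature points per cell, and thresholds the standard deviation of the counts. The grid shape and the threshold value are illustrative assumptions; the patent describes 8 regions but does not fix a threshold here.

```python
import numpy as np

def is_rich_texture(points, width, height, grid=(2, 4), threshold=10.0):
    """Return True when detected feature points are spread evenly enough
    over the frame: split the image into grid cells (2x4 = 8 regions, as in
    the patent's example), count points per cell, and compare the standard
    deviation of the counts against a threshold."""
    rows, cols = grid
    counts = np.zeros((rows, cols))
    for x, y in points:
        r = min(int(y / height * rows), rows - 1)   # row index of the cell
        c = min(int(x / width * cols), cols - 1)    # column index of the cell
        counts[r, c] += 1
    return float(np.std(counts)) < threshold
```

Evenly spread points give a near-zero standard deviation and pass, while points clustered in one corner produce a large deviation and fail.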
  • although the steps in the flowcharts of FIGS. 2 to 3 are shown in sequence according to the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2 to 3 may include multiple sub-steps or stages, which are not necessarily executed and completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential: they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
  • an apparatus for anti-shake processing of video data is provided, including: an acquisition module 702, a first calculation module 704, a first fitting module 706, a second calculation module 708, a second fitting module 710, and an anti-shake module 712, wherein:
  • the acquiring module 702 is configured to acquire video data collected by the camera and acquire measurement data of the inertial measurement unit, where the measurement data includes an angular velocity measurement value.
  • the first calculation module 704 is configured to calculate the rotation vector of the camera in the world coordinate system according to the video data.
  • the first fitting module 706 is configured to perform fitting processing on the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera.
  • the second calculation module 708 is configured to calculate the rotation vector of the inertial measurement unit in the world coordinate system according to the measurement data.
  • the second fitting module 710 is configured to perform fitting processing on the rotation vector of the inertial measurement unit in the world coordinate system to obtain the calculated value of the angular velocity corresponding to the inertial measurement unit.
  • the anti-shake module 712 is configured to perform anti-shake processing on the video data according to the measured value of the angular velocity, the angular velocity of the camera, and the calculated value of the angular velocity.
  • the video data includes multiple frames of images
  • the first fitting module 706 is further configured to calculate the target rotation vector corresponding to each frame of image according to the rotation vector of the camera in the world coordinate system, to obtain the continuous rotation vector corresponding to the camera; and to fit the continuous rotation vector to obtain the angular velocity of the camera.
  • the first fitting module 706 is further configured to acquire the current frame image, and obtain, from the rotation vectors of the camera in the world coordinate system, the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image; calculate the target rotation vector corresponding to the current frame image according to the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image; and update the next frame image to be the current frame image and return to the step of obtaining the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image from the rotation vectors of the camera in the world coordinate system, until the target rotation vector corresponding to the last frame image is calculated, obtaining the continuous rotation vector corresponding to the camera.
  • the anti-shake module 712 is further configured to calculate the first delay data according to the angular velocity measurement value and the angular velocity of the camera; calculate the second delay data according to the angular velocity measurement value and the calculated angular velocity value; calculate the target delay data corresponding to the video data according to the first delay data and the second delay data; and perform anti-shake processing on the video data according to the target delay data.
  • the anti-shake module 712 is further configured to resample the angular velocity measurement values to obtain resampled angular velocity measurement values; perform a cross-correlation operation between the resampled angular velocity measurement values and the angular velocity of the camera to obtain a first translation amount; and optimize the first translation amount to obtain the first delay data.
  • the anti-shake module 712 is further configured to resample the angular velocity measurement values to obtain resampled angular velocity measurement values; perform a cross-correlation operation between the resampled angular velocity measurement values and the calculated angular velocity values to obtain a second translation amount; and optimize the second translation amount to obtain the second delay data.
  • the anti-shake module 712 is further configured to subtract the second delay data from the first delay data to calculate the target delay data corresponding to the video data.
  • the above apparatus further includes a video segment selection module configured to determine non-rapidly-rotating video segments in the video data according to the measurement data; select a target video segment from the non-rapidly-rotating video segments; and calculate the rotation vector of the camera in the world coordinate system according to the target video segment.
  • each module in the above-mentioned video data anti-shake processing apparatus can be implemented in whole or in part by software, hardware and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or can be stored in the memory in the computer device in the form of software, so that the processor can call and execute the corresponding operations of the above modules.
  • a computer device is provided, and the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 8 .
  • the computer equipment includes a processor, memory, a network interface, a display screen, and an input device connected by a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the nonvolatile storage medium stores an operating system and a computer program.
  • the internal memory provides an environment for the execution of the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the display screen of the computer device may be a liquid crystal display or an electronic-ink display, and the input device may be a touch layer covering the display screen, a button, trackball or touchpad on the housing of the computer device, or an external keyboard, trackpad or mouse.
  • FIG. 8 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a computer device which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps in each of the foregoing embodiments when the processor executes the computer program.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, implements the steps in each of the foregoing embodiments.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The present application relates to a method, apparatus, computer device, and storage medium for anti-shake processing of video data. The method includes: acquiring video data collected by a camera and acquiring measurement data of an inertial measurement unit, the measurement data including angular velocity measurement values; calculating the rotation vector of the camera in the world coordinate system according to the video data; fitting the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera; calculating the rotation vector of the inertial measurement unit in the world coordinate system according to the measurement data; fitting the rotation vector of the inertial measurement unit in the world coordinate system to obtain the calculated angular velocity value corresponding to the inertial measurement unit; and performing anti-shake processing on the video data according to the angular velocity measurement value, the angular velocity of the camera, and the calculated angular velocity value. This method can accurately compute the delay between the camera and the inertial measurement unit online, effectively improving the clarity of the video data.

Description

Anti-shake processing method and apparatus for video data, computer device, and storage medium — Technical field
The present application relates to the field of computer technology, and in particular to an anti-shake processing method and apparatus for video data, a computer device, and a storage medium.
Background art
With the continuous development of computer technology, the emergence of intelligent terminals with various functions has made people's lives increasingly rich, diverse, and convenient. In particular, intelligent terminals equipped with cameras allow people to shoot images or videos anytime and anywhere to record their lives. However, during video shooting, due to causes internal to the intelligent terminal system, there is a delay between the moment the camera starts shooting and the moment the inertial measurement unit (IMU) starts recording the motion state, which causes shake in the video data. To prevent this shake, the delay between the camera and the inertial measurement unit must be calculated. The conventional approach uses an offline temporal calibration algorithm, such as the kalibr algorithm, to calculate this delay.
Technical problem
However, the conventional approach requires preparing a calibration target, and the delay it computes is fixed. In practical applications, the delay between the camera and the inertial measurement unit may change when different videos are shot, so computing the delay in the conventional way results in low accuracy and fails to produce clear video data. How to compute the delay between the camera and the inertial measurement unit online, so as to improve the clarity of the video data, has therefore become a technical problem that needs to be solved.
Technical solution
On this basis, in view of the above technical problem, it is necessary to provide an anti-shake method and apparatus for video data, a computer device, and a storage medium that can compute the delay between the camera and the inertial measurement unit online so as to improve the clarity of the video data.
An anti-shake processing method for video data, the method comprising:
acquiring video data collected by a camera, and acquiring measurement data of an inertial measurement unit, the measurement data including angular velocity measurement values;
calculating the rotation vector of the camera in the world coordinate system according to the video data;
fitting the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera;
calculating the rotation vector of the inertial measurement unit in the world coordinate system according to the measurement data;
fitting the rotation vector of the inertial measurement unit in the world coordinate system to obtain the calculated angular velocity value corresponding to the inertial measurement unit;
performing anti-shake processing on the video data according to the angular velocity measurement value, the angular velocity of the camera, and the calculated angular velocity value.
In one embodiment, the video data includes multiple frames of images, and fitting the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera includes:
calculating the target rotation vector corresponding to each frame of image according to the rotation vector of the camera in the world coordinate system, to obtain the continuous rotation vector corresponding to the camera;
fitting the continuous rotation vector to obtain the angular velocity of the camera.
In one embodiment, calculating the target rotation vector corresponding to each frame of image according to the rotation vector of the camera in the world coordinate system, to obtain the continuous rotation vector corresponding to the camera, includes:
acquiring the current frame image, and obtaining, from the rotation vectors of the camera in the world coordinate system, the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image;
calculating the target rotation vector corresponding to the current frame image according to the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image;
updating the next frame image to be the current frame image, and returning to the step of obtaining, from the rotation vectors of the camera in the world coordinate system, the original rotation vector corresponding to the current frame image and the target rotation vector corresponding to the previous frame image, until the target rotation vector corresponding to the last frame image is calculated, obtaining the continuous rotation vector corresponding to the camera.
In one embodiment, performing anti-shake processing on the video data according to the angular velocity measurement value, the angular velocity of the camera, and the calculated angular velocity value includes:
calculating first delay data according to the angular velocity measurement value and the angular velocity of the camera;
calculating second delay data according to the angular velocity measurement value and the calculated angular velocity value;
calculating target delay data corresponding to the video data according to the first delay data and the second delay data;
performing anti-shake processing on the video data according to the target delay data.
In one embodiment, calculating the first delay data according to the angular velocity measurement value and the angular velocity of the camera includes:
resampling the angular velocity measurement values to obtain resampled angular velocity measurement values;
performing a cross-correlation operation between the resampled angular velocity measurement values and the angular velocity of the camera to obtain a first translation amount;
optimizing the first translation amount to obtain the first delay data.
In one embodiment, calculating the second delay data according to the angular velocity measurement value and the calculated angular velocity value includes:
resampling the angular velocity measurement values to obtain resampled angular velocity measurement values;
performing a cross-correlation operation between the resampled angular velocity measurement values and the calculated angular velocity values to obtain a second translation amount;
optimizing the second translation amount to obtain the second delay data.
In one embodiment, calculating the target delay data corresponding to the video data according to the first delay data and the second delay data includes:
subtracting the second delay data from the first delay data to calculate the target delay data corresponding to the video data.
In one embodiment, the method further includes:
determining non-rapidly-rotating video segments in the video data according to the measurement data;
selecting a target video segment from the non-rapidly-rotating video segments;
calculating the rotation vector of the camera in the world coordinate system according to the target video segment.
An anti-shake processing apparatus for video data, the apparatus comprising:
an acquisition module, configured to acquire video data collected by a camera and acquire measurement data of an inertial measurement unit, the measurement data including angular velocity measurement values;
a first calculation module, configured to calculate the rotation vector of the camera in the world coordinate system according to the video data;
a first fitting module, configured to fit the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera;
a second calculation module, configured to calculate the rotation vector of the inertial measurement unit in the world coordinate system according to the measurement data;
a second fitting module, configured to fit the rotation vector of the inertial measurement unit in the world coordinate system to obtain the calculated angular velocity value corresponding to the inertial measurement unit;
an anti-shake module, configured to perform anti-shake processing on the video data according to the angular velocity measurement value, the angular velocity of the camera, and the calculated angular velocity value.
In one embodiment, the video data includes multiple frames of images, and the first fitting module is further configured to calculate the target rotation vector corresponding to each frame of image according to the rotation vector of the camera in the world coordinate system, to obtain the continuous rotation vector corresponding to the camera; and to fit the continuous rotation vector to obtain the angular velocity of the camera.
A computer device, including a memory and a processor, the memory storing a computer program executable on the processor, and the processor implementing the steps in each of the above method embodiments when executing the computer program.
A computer-readable storage medium, on which a computer program is stored, the computer program implementing the steps in each of the above method embodiments when executed by a processor.
Technical effects
With the above anti-shake processing method and apparatus for video data, computer device, and storage medium, video data collected by the camera and measurement data of the inertial measurement unit (including angular velocity measurement values) are acquired; the rotation vector of the camera in the world coordinate system is calculated from the video data and fitted to obtain the angular velocity of the camera. No calibration target needs to be made: when different videos are shot, the rotation vector of the camera in the world coordinate system and the camera's angular velocity can be calculated from the video data. The rotation vector of the inertial measurement unit in the world coordinate system is calculated from the measurement data and fitted to obtain the calculated angular velocity value corresponding to the inertial measurement unit, which facilitates the subsequent calculation of the delay error. Since the camera's angular velocity is calculated during video shooting and the calculated angular velocity value of the inertial measurement unit is used to compute the delay error, performing anti-shake processing on the video data according to the angular velocity measurement value, the angular velocity of the camera, and the calculated angular velocity value makes it possible to calculate the delay between the camera and the inertial measurement unit separately for each video shot while eliminating the delay error, thereby accurately computing the delay between the camera and the inertial measurement unit online and effectively improving the clarity of the video data.
Brief description of the drawings
FIG. 1 is a diagram of the application environment of the anti-shake processing method for video data in one embodiment;
FIG. 2 is a schematic flowchart of the anti-shake processing method for video data in one embodiment;
FIG. 3 is a schematic flowchart of the step of fitting the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera in one embodiment;
FIG. 4 is a schematic diagram of the rotation vectors of the camera in the world coordinate system in one embodiment;
FIG. 5 is a schematic diagram of the continuous rotation vectors obtained after conversion in one embodiment;
FIG. 6 is a schematic diagram of the image regions in one embodiment;
FIG. 7 is a structural block diagram of the anti-shake processing apparatus for video data in one embodiment;
FIG. 8 is a diagram of the internal structure of a computer device in one embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application, not to limit it.
It will be understood that the terms "first", "second", etc. used in this application may be used herein to describe various elements and parameters, but these elements and parameters are not limited by these terms. These terms are used only to distinguish one element from another, or one parameter from another. For example, without departing from the scope of this application, the first delay data could be called the second delay data, and similarly the second delay data could be called the first delay data; the first delay data and the second delay data are both delay data, but they are not the same delay data.
The anti-shake processing method for video data provided in this application can be applied in the application environment shown in FIG. 1, in which a camera 104 and an inertial measurement unit 106 are installed in a terminal 102. When the user starts the camera 104 to shoot, the terminal 102 acquires the video data collected by the camera 104 and the measurement data of the inertial measurement unit 106, the measurement data including angular velocity measurement values. The terminal 102 calculates the rotation vector of the camera 104 in the world coordinate system from the video data and fits that rotation vector to obtain the angular velocity of the camera 104; it calculates the rotation vector of the inertial measurement unit 106 in the world coordinate system from the measurement data and fits that rotation vector to obtain the calculated angular velocity value corresponding to the inertial measurement unit 106; it then performs anti-shake processing on the video data according to the angular velocity measurement value, the angular velocity of the camera 104, and the calculated angular velocity value. The terminal 102 may be, but is not limited to, a personal computer, a laptop, a smartphone, a tablet, or a portable wearable device.
In one embodiment, as shown in FIG. 2, an anti-shake processing method for video data is provided. Taking the method applied to the terminal in FIG. 1 as an example, it includes the following steps:
Step 202: acquire video data collected by the camera, and acquire measurement data of the inertial measurement unit, the measurement data including angular velocity measurement values.
A camera and an inertial measurement unit (IMU) are pre-installed in the terminal. The camera is used to shoot pictures or videos and may be one or more of a black-and-white camera, a color camera, a wide-angle camera, a telephoto camera, and the like; it may be built into the electronic device or be an external camera. The inertial measurement unit is used to record the motion state of the terminal and may include a gyroscope and an accelerometer. The gyroscope, also called an angular velocity sensor, measures the angular velocity of the terminal when it deflects or tilts; the accelerometer measures the terminal's acceleration, so that the motion state of the terminal is obtained by integrating the angular velocity and the acceleration.
While the user operates the camera in the terminal to shoot video, the camera can send the collected video data to the terminal, and the inertial measurement unit also sends its measurement data to the terminal. Video data refers to a continuous image sequence and may include multiple consecutive frames of images ordered in time. A frame is the smallest visual unit in video data, and each frame may correspond to one image. The video data may be a video stream collected by the camera in real time, or a collected video segment that contains rich texture scenes and no sharp turns. The measurement data may include angular velocity measurement values and acceleration measurement values. The angular velocity measurement values are measured by the gyroscope in the inertial measurement unit and indicate the angle through which the terminal turns per unit time and the direction of rotation. The larger the angular velocity measurement value, the larger the angle and direction of the terminal's rotation, and hence the greater the terminal's shake.
Step 204: calculate the rotation vector of the camera in the world coordinate system according to the video data.
The video data may include multiple frames of images. After receiving the video data sent by the camera, the terminal parses it to obtain the multiple frames of images, and calculates, for each frame of image, the rotation vector of the camera in the world coordinate system. Specifically, the terminal may use an existing SLAM (Simultaneous Localization and Mapping) algorithm to extract the feature points of each frame of image and their pixel coordinates; the feature points may be ORB (Oriented FAST and Rotated BRIEF) feature points. The pixel coordinates of the feature points of each frame of image are then converted into world coordinates according to the transformation between pixel coordinates and world coordinates, and the SLAM algorithm computes the rotation vector of the camera in world coordinates from the world coordinates of the feature points of each frame. For example, existing visual SLAM algorithms may include at least one of the ORB-SLAM2 algorithm, the ORB-SLAM algorithm, the RTAB-SLAM algorithm, the ISD-SLAM algorithm, the DVO-SLAM algorithm, the SVO algorithm, and the like.
The terminal may take the camera coordinate system of the first frame image as the world coordinate system. The rotation vector of the camera in the world coordinate system corresponding to each frame of image is the camera's rotation vector in world coordinates at the moment that frame was captured. The world coordinate system is a three-dimensional coordinate system describing the absolute coordinates of objects in three-dimensional space. The camera coordinate system takes the optical center of the camera as the origin; its Zc axis coincides with the camera's optical axis, is perpendicular to the imaging plane, and takes the shooting direction as positive, while the Xc and Yc axes are parallel to the x and y axes of the image physical coordinate system. The Xc, Yc, and Zc axes may be called the camera axes. The rotation vector is the rotation vector that transforms the camera coordinate system into the world coordinate system; its direction is the rotation axis and its magnitude is the rotation angle, representing the camera's rotation angle in the world coordinate system, and it may be a 1×3 vector. By calculating the rotation vector of the camera in the world coordinate system for each frame of image, the timestamp t_i corresponding to each frame of image and the rotation vector v_i corresponding to each frame of image are obtained.
In one embodiment, because the video data collected by the camera may contain rapidly turning segments or segments with insufficiently rich texture scenes, the computed rotation vector of the camera in the world coordinate system may be inaccurate. Before calculating this rotation vector, the terminal may select from the video data a segment that contains rich texture scenes and no sharp turns, and calculate the rotation vector of the camera in the world coordinate system from the selected segment, thereby improving its accuracy.
Step 206: fit the rotation vector of the camera in the world coordinate system to obtain the angular velocity of the camera.
The same rotation of the camera may correspond to multiple rotation vectors; for example, the rotation vectors (2nπ+θ)·(n_x, n_y, n_z), n = 0, ±1, ... all correspond to the same rotation, where n is the integer value used to compute the rotation vector, θ is its rotation angle, and (n_x, n_y, n_z) is the rotation-axis vector, i.e. the rotation is performed about this axis. As a result, the rotation vectors of the camera in the world coordinate system computed from the video data can be discontinuous on the time axis, i.e. the same timestamp may correspond to multiple rotation vectors. The terminal can convert the camera's rotation vectors in the world coordinate system into continuous rotation vectors, fit the continuous rotation vectors to obtain a fitted curve, and then compute from the fitted curve the camera's angular velocity at each timestamp.
In one embodiment, the terminal may fit the continuous rotation vectors using any of a variety of fitting methods, such as the B-spline curve method, polynomial fitting, or Bezier curve fitting, and compute the camera's angular velocity at each timestamp from the fitted curve.
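The fitting step can be sketched with polynomial fitting, one of the methods the text lists alongside B-spline and Bezier curve fitting. The helper below is an illustrative approximation, not the patent's B-spline implementation: it fits each rotation-vector component over time and differentiates the fit; for small, smooth rotations the derivative of the rotation vector approximates the angular velocity.

```python
import numpy as np

def angular_velocity_from_fit(timestamps, rot_vecs, degree=5):
    """Fit each component of the continuous rotation-vector sequence with a
    polynomial in time and evaluate the derivative of the fit at every
    timestamp, giving an approximate per-timestamp angular velocity."""
    timestamps = np.asarray(timestamps, dtype=float)
    rot_vecs = np.asarray(rot_vecs, dtype=float)
    omega = np.empty_like(rot_vecs)
    for axis in range(3):
        coeffs = np.polyfit(timestamps, rot_vecs[:, axis], degree)
        # Derivative of the fitted polynomial, evaluated at the timestamps.
        omega[:, axis] = np.polyval(np.polyder(coeffs), timestamps)
    return omega
```

For a rotation vector growing linearly at 0.5 rad/s about one axis, the fitted derivative recovers a constant 0.5 rad/s angular velocity on that axis.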
Step 208: calculate the rotation vector of the inertial measurement unit in the world coordinate system according to the measurement data.
Step 210: fit the rotation vector of the inertial measurement unit in the world coordinate system to obtain the calculated angular velocity value corresponding to the inertial measurement unit.
The measurement data may include the angular velocity measurement values and acceleration measurement values measured by the inertial measurement unit. The terminal integrates the angular velocity and acceleration measurement values to obtain the rotation vector of the inertial measurement unit in the world coordinate system, which can represent the rotation of the inertial measurement unit in world coordinates. In one embodiment, the terminal may use an existing IMU pre-integration method to compute this rotation vector; for example, the existing IMU method may be at least one of the extended Kalman filter algorithm, the okvis algorithm, the vins-mono algorithm, and the like.
After the rotation vector of the inertial measurement unit in world coordinates has been computed, the terminal may fit it following the same steps used above to fit the camera's rotation vector in the world coordinate system, obtaining the calculated angular velocity value corresponding to the inertial measurement unit.
In one embodiment, after acquiring the video data collected by the camera and the measurement data of the inertial measurement unit, the terminal may compute the rotation vectors of the camera and of the inertial measurement unit in the world coordinate system simultaneously, and fit each of them to obtain the camera's angular velocity and the calculated angular velocity value corresponding to the inertial measurement unit; it may also compute them one after the other. The order in which the camera's angular velocity and the inertial measurement unit's calculated angular velocity value are computed is not limited here.
步骤212,根据角速度测量值、摄像头的角速度以及角速度计算值对视频数据进行防抖处理。
由于摄像头和惯性测量单元连接在同一刚体上,摄像头和惯性测量单元对应的角速度的模长或大小应该是一致的,但由于系统原因,摄像头与惯性测量单元相对应数据之间在时间轴上存在一个延迟,可以根据角速度测量值、摄像头的角速度以及角速度计算值计算摄像头与惯性测量单元在时间轴上的延迟数据,从而根据延迟数据将视频数据与惯性测量单元的测量数据进行对齐,实现对视频数据进行防抖处理。
在其中一个实施例中,根据角速度测量值、摄像头的角速度以及角速度计算值对视频数据进行防抖处理包括:根据角速度测量值以及摄像头的角速度计算第一延迟数据;根据角速度测量值以及角速度计算值计算第二延迟数据;根据第一延迟数据以及第二延迟数据计算视频数据对应的目标延迟数据;根据目标延迟数据对视频数据进行防抖处理。
终端中安装的摄像头和惯性测量单元对应的角速度的模长或大小应该是一致的,但由于系统原因,摄像头与惯性测量单元相对应数据之间在时间轴上存在一个延迟,终端可以计算惯性测量单元的角速度测量值与摄像头的角速度在时间轴上的平移量,将该平移量确定为第一延迟数据。由于在计算第一延迟数据的过程中会对惯性测量单元的角速度测量值进行积分处理,可能会导致计算得到的第一延迟数据存在延迟误差,为了减少延迟误差,终端可以计算惯性测量单元的角速度计算值与惯性测量单元的角速度测量值在时间轴上的平移量,并将该平移量确定为第二延迟数据,第二延迟数据用于表示延迟误差。第一延迟数据与第二延迟数据的计算过程可以是相同的,因此第二延迟数据的计算过程可以看作是将第一延迟数据计算过程中摄像头的角速度替换为角速度计算值进行再一次处理的过程。终端从而根据第一延迟数据以及第二延迟数据进行计算,得到视频数据对应的目标延迟数据,实现以在线的方式计算摄像头与惯性测量单元之间的延迟,进而根据目标延迟数据将视频数据与惯性测量单元的测量数据进行对齐,完成防抖处理,提高了视频数据的清晰度。
在其中一个实施例中,根据第一延迟数据以及第二延迟数据计算视频数据对应的目标延迟数据包括:将第一延迟数据与第二延迟数据进行作差处理,计算得到视频数据对应的目标延迟数据。第一延迟数据为惯性测量单元的角速度测量值与摄像头的角速度在时间轴上的平移量,第二延迟数据为延迟误差,终端将第一延迟数据减去第二延迟数据,从而得到目标延迟数据,能够准确对齐视频数据与惯性测量单元的测量数据。
在本实施例中,获取摄像头采集的视频数据,以及获取惯性测量单元的测量数据,测量数据包括角速度测量值,根据视频数据计算摄像头在世界坐标系下的旋转向量,对摄像头在世界坐标系下的旋转向量进行拟合处理,得到摄像头的角速度,无需制作标定物,可以在拍摄不同的视频时,根据视频数据计算摄像头在世界坐标系下的旋转向量,并计算摄像头的角速度。根据测量数据计算惯性测量单元在世界坐标系下的旋转向量,对惯性测量单元在世界坐标系下的旋转向量进行拟合处理,得到惯性测量单元对应的角速度计算值,有利于后续计算延迟误差。由于摄像头的角速度是在视频拍摄过程中计算得到的,惯性测量单元对应的角速度计算值用于计算延迟误差,根据角速度测量值、摄像头的角速度以及角速度计算值对视频数据进行防抖处理,能够在拍摄不同的视频时,分别计算摄像头与惯性测量单元之间的延时,同时能够消除延迟误差,实现以在线的方式准确计算摄像头与惯性测量单元之间的延时,有效提高了视频数据的清晰度。
在一个实施例中,如图3所示,对摄像头在世界坐标系下的旋转向量进行拟合处理,得到摄像头的角速度的步骤包括:
步骤302,根据摄像头在世界坐标系下的旋转向量计算每帧图像对应的目标旋转向量,得到摄像头对应的连续旋转向量。
步骤304,将连续旋转向量进行拟合计算,得到摄像头的角速度。
视频数据包括多帧图像。终端计算得到的摄像头在世界坐标系下的旋转向量中包括每帧图像对应的旋转向量。终端将计算得到的摄像头在世界坐标系下的旋转向量转换为连续的旋转向量。连续的旋转向量是指在时间轴上的旋转向量均为连续的。具体的,每帧图像存在一个对应的时间戳,针对每个时间戳,终端可以在摄像头在世界坐标系下的旋转向量中确定该时间戳对应的目标旋转向量,时间戳对应的目标旋转向量可以表示该时间戳的相应帧图像对应的目标旋转向量。终端在计算得到多帧图像对应的目标旋转向量后,即可得到连续的旋转向量。如图4所示,为摄像头在世界坐标系下的旋转向量的示意图,其中,时间序列是指时间轴,在时间轴上可以确定每帧图像对应的时间戳。旋转向量由相应帧图像的时间戳处的x轴坐标、y轴坐标和z轴坐标构成的三维向量表示。如图5所示,为一个实施例中转换后得到的连续的旋转向量。
终端计算得到的连续旋转向量中每个旋转向量存在对应的时间戳,连续旋转向量可以用V={v_0,v_1,v_2,…,v_i}来表示,连续旋转向量对应的时间戳可以用T={t_0,t_1,t_2,…,t_i}来表示。终端可以采用B样条曲线方法、多项式拟合方法、贝塞尔曲线拟合方法等多种拟合方法中的任意一种对连续的旋转向量进行拟合,从而根据拟合后得到的曲线计算摄像头在每个时间戳对应的角速度。具体以采用B样条曲线算法对连续旋转向量进行拟合为例进行说明,终端根据连续旋转向量、连续旋转向量对应的时间戳以及预设函数对连续旋转向量进行拟合,得到拟合后的曲线,从而根据拟合后的曲线计算摄像头的角速度。其中,预设函数可以是k阶B样条基函数。拟合后的曲线为一条光滑的k阶B样条曲线。拟合后的k阶B样条曲线,可以表示为如下形式:
V(t)=∑_i v_i·V_{i,k}(t)    (1)
其中,V(t)表示拟合后的k阶B样条曲线,v_i表示控制点,即上述的旋转向量,V_{i,k}(t)表示k阶B样条基函数。
V_{i,k}(t)采用如下递归公式进行计算:
V_{i,0}(t)=1(当t_i≤t&lt;t_{i+1}时),否则V_{i,0}(t)=0    (2)
V_{i,k}(t)=((t−t_i)/(t_{i+k}−t_i))·V_{i,k−1}(t)+((t_{i+k+1}−t)/(t_{i+k+1}−t_{i+1}))·V_{i+1,k−1}(t)    (3)
其中,t_i表示节点,即上述的时间戳,i表示序号,即第i个节点。
基于拟合后的B样条曲线,对旋转向量求一阶导数,最终得到的摄像头的角速度如下公式所示:
w_i=k·(v_{i+1}−v_i)/(t_{i+k+1}−t_{i+1})    (4)
其中,w_i表示摄像头的角速度,k表示拟合后的B样条曲线的阶数,t_{i+k+1}、t_{i+1}表示时间戳,v_{i+1}、v_i表示旋转向量。
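作为示意,"对连续旋转向量拟合B样条并求一阶导数得到角速度"的思路可以用如下Python草图表示(此处借助SciPy的B样条插值实现,库与函数的选择均为本示例的假设,并非本申请的正式实现):

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def fit_angular_velocity(timestamps, rotvecs, k=3):
    """用k阶B样条插值连续旋转向量(三个分量按列拟合),
    再对样条求一阶导数,得到各时间戳处的角速度(与公式(4)的思路一致)。"""
    spline = make_interp_spline(timestamps, rotvecs, k=k)
    return spline.derivative(1)(timestamps)
```

例如,当旋转向量随时间线性变化时,拟合后求导得到的角速度为对应的常数。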
在本实施例中,根据摄像头在世界坐标系下的旋转向量计算每帧图像对应的目标旋转向量,得到摄像头对应的连续旋转向量,将连续旋转向量进行拟合计算,得到摄像头的角速度,通过将不连续的旋转向量转换为连续的旋转向量,有利于对旋转向量进行拟合,通过对连续旋转向量进行拟合计算,能够准确计算得到摄像头在每个时间戳的角速度。
在一个实施例中,根据摄像头在世界坐标系下的旋转向量计算每帧图像对应的目标旋转向量,得到摄像头对应的连续旋转向量包括:获取当前帧图像,在摄像头在世界坐标系下的旋转向量中获取当前帧图像对应的原始旋转向量以及上一帧图像对应的目标旋转向量;根据当前帧图像对应的原始旋转向量以及上一帧图像对应的目标旋转向量计算当前帧图像对应的目标旋转向量;将下一帧图像更新为当前帧图像,返回在摄像头在世界坐标系下的旋转向量中获取当前帧图像对应的原始旋转向量以及上一帧图像对应的目标旋转向量的步骤,直至计算得到最后一帧图像对应的目标旋转向量,得到摄像头对应的连续旋转向量。
原始旋转向量是指根据视频数据计算得到、未经转换处理的旋转向量。目标旋转向量是指经过转换处理后得到的旋转向量,以保证得到的旋转向量具有连续性。
终端可以根据多帧图像对应的时间戳的先后顺序依次计算每帧图像对应的目标旋转向量。具体的,终端获取当前帧图像,根据当前帧图像对应的时间戳获取当前帧图像对应的原始旋转向量。在获取当前帧图像之前,上一帧图像对应的原始旋转向量已完成转换处理,上一帧图像对应的旋转向量为目标旋转向量。终端获取上一帧图像对应的目标旋转向量。终端根据第一关系式计算原始旋转向量对应的旋转角,根据原始旋转向量、原始旋转向量对应的旋转角以及第二关系式计算原始旋转向量对应的旋转轴,从而根据原始旋转向量对应的旋转角、原始旋转向量对应的旋转轴、上一帧图像对应的目标旋转向量以及第三关系式计算原始旋转向量对应的目标整数值,目标整数值用于计算当前帧图像对应的目标旋转向量。终端进而根据目标整数值、原始旋转向量对应的旋转角、原始旋转向量对应的旋转轴和第四关系式计算当前帧图像对应的目标旋转向量。
第一关系式是指原始旋转向量对应的旋转角的计算公式,如下所示:
θ_i=norm(v_i)    (5)
其中,θ_i表示原始旋转向量对应的旋转角,norm()表示范数函数,v_i表示原始旋转向量,i表示序号,即第i个。
第二关系式是指原始旋转向量对应的旋转轴的计算公式,如下所示:
N_i=v_i/θ_i    (6)
其中,N_i表示原始旋转向量对应的旋转轴。
第三关系式是指目标整数值的计算公式,如下所示:
n_i=argmin_n‖(2nπ+θ_i)·N_i−v′_{i−1}‖    (7)
其中,n_i表示原始旋转向量对应的目标整数值,v′_{i−1}表示上一帧图像对应的目标旋转向量,argmin表示使‖(2nπ+θ_i)·N_i−v′_{i−1}‖达到最小值时n的取值。
第四关系式是指当前帧图像对应的旋转向量的计算公式,如下所示:
v′_i=(2n_iπ+θ_i)·N_i    (8)
其中,v′_i表示当前帧图像对应的目标旋转向量。
终端继续获取下一帧图像,将下一帧图像更新为当前帧图像,返回在摄像头在世界坐标系下的旋转向量中获取当前帧图像对应的原始旋转向量以及上一帧图像对应的目标旋转向量的步骤,即根据上述公式(5)-(8)计算下一帧图像对应的目标旋转向量,直至计算得到最后一帧图像对应的目标旋转向量,此时终端得到摄像头对应的连续旋转向量。
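上述根据公式(5)-(8)逐帧计算目标旋转向量、得到连续旋转向量的过程,可以用如下Python代码草图示意(候选整数n的搜索范围、零旋转的处理方式等均为本示例的假设,并非本申请的正式实现):

```python
import numpy as np

def unwrap_rotvecs(rotvecs):
    """将原始旋转向量序列转换为时间轴上连续的目标旋转向量序列。"""
    out = [np.asarray(rotvecs[0], dtype=float)]
    for v in rotvecs[1:]:
        v = np.asarray(v, dtype=float)
        theta = np.linalg.norm(v)          # 公式(5):旋转角
        if theta < 1e-12:                  # 零旋转:沿用上一帧结果(本示例的处理)
            out.append(out[-1].copy())
            continue
        axis = v / theta                   # 公式(6):旋转轴
        prev = out[-1]
        ns = np.arange(-3, 4)              # 候选整数n的搜索范围(本示例假设)
        cands = (2.0 * ns[:, None] * np.pi + theta) * axis[None, :]
        n_best = ns[np.argmin(np.linalg.norm(cands - prev, axis=1))]  # 公式(7)
        out.append((2.0 * n_best * np.pi + theta) * axis)             # 公式(8)
    return np.array(out)
```

例如,当SLAM输出的轴角表示在π附近发生轴翻转时,该草图会将其换算回与上一帧连续的表示。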
在本实施例中,获取当前帧图像,根据当前帧图像对应的原始旋转向量以及上一帧图像对应的目标旋转向量计算当前帧图像对应的目标旋转向量,将下一帧图像更新为当前帧图像,重复上述计算目标旋转向量的步骤,直至计算得到最后一帧图像对应的目标旋转向量,得到摄像头对应的连续旋转向量。通过计算每帧图像对应的时间戳的目标旋转向量,使得摄像头的每一个旋转均可用唯一的一个旋转向量来表示,从而确保旋转向量在时间轴上的连续性。
在一个实施例中,根据角速度测量值以及摄像头的角速度计算第一延迟数据包括:对角速度测量值进行重采样处理,得到重采样后的角速度测量值;将重采样后的角速度测量值与摄像头的角速度进行互相关运算,得到第一平移量;对第一平移量进行优化处理,得到第一延迟数据。
由于惯性测量单元在测量过程中不会完全按照等间隔的方式进行采样,因此需要对角速度测量值进行等间隔重采样。在其中一个实施例中,重采样的方式可以有多种,例如,多项式插值、分段插值、三次B样条插值等。
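作为示意,对角速度测量值做等间隔重采样的一种最简单的方式是线性插值,可用如下Python草图表示(采样率、函数名均为本示例的假设):

```python
import numpy as np

def resample_uniform(timestamps, values, rate_hz):
    """将非等间隔采样的角速度测量值按固定采样率做线性插值重采样。"""
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / rate_hz)
    cols = [np.interp(t_uniform, timestamps, values[:, j])
            for j in range(values.shape[1])]
    return t_uniform, np.stack(cols, axis=1)
```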
终端在得到重采样后的角速度测量值后,计算重采样后的角速度测量值与摄像头的角速度之间的第一延迟数据。具体的,终端可以将重采样后的角速度测量值与摄像头的角速度进行互相关运算,通过使互相关达到最大,来确定该两组角速度在时间轴上的第一平移量。互相关运算的计算公式可以如下所示:
δt_0=argmax_δ ∑_t ω_c(t)·ω_i(t+δ)    (9)
其中,δt_0表示第一平移量,ω_c(t)表示摄像头在时间戳为t时的角速度,ω_i(t+δ)表示惯性测量单元在时间戳为t+δ时的角速度测量值,δ表示重采样后的角速度测量值与摄像头的角速度之间的时间延迟变量,argmax表示使∑_t ω_c(t)·ω_i(t+δ)达到最大值时δ的取值。
第一平移量的精度依赖于角速度测量值的采样率。例如,当采样率为500Hz时,采样间隔为2ms,得到的第一平移量最多只能精确到2ms,这个精度是无法用于对视频数据进行防抖处理的,因此需要对第一平移量进行优化,在优化过程中采用的计算公式为:
δt=argmin_δ ∑_t‖ω_c(t)−ω_i(t+δ)‖²    (10)
终端可以将δt_0作为δ的初始值代入至上述公式中,并采用现有的非线性优化方法,如高斯牛顿法、梯度下降法、共轭梯度法、LM法(Levenberg-Marquardt,列文伯格-马夸尔特法)中的至少一种对上述公式(10)进行求解,得到最终优化后的平移量,作为第一延迟数据。
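上述"先互相关求粗平移量、再在粗值附近做连续优化"的流程,可以用如下Python代码草图示意(假设两组角速度已等间隔重采样到同一时间轴;此处以角速度模长序列做互相关,并以平方差之和作为优化目标,这些具体选择均为本示例的假设,并非本申请的正式实现):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_delay(t, w_cam, w_imu, search=0.1):
    """先互相关求粗平移量,再在粗值附近最小化平方差,
    返回延迟delta,使得 w_imu(t+delta) 与 w_cam(t) 对齐(示意性约定)。"""
    a = np.linalg.norm(w_cam, axis=1)          # 摄像头角速度模长序列
    b = np.linalg.norm(w_imu, axis=1)          # IMU角速度模长序列
    a0, b0 = a - a.mean(), b - b.mean()
    corr = np.correlate(a0, b0, mode="full")
    dt = t[1] - t[0]                           # 假设已等间隔重采样
    lag = np.argmax(corr) - (len(b0) - 1)
    coarse = -lag * dt                         # 粗平移量,精度受采样间隔限制
    def cost(delta):                           # 细化目标:sum((a(t) - b(t+delta))^2)
        b_shift = np.interp(t + delta, t, b)
        return float(np.sum((a - b_shift) ** 2))
    res = minimize_scalar(cost, bounds=(coarse - search, coarse + search),
                          method="bounded")
    return res.x
```

该草图中先用离散互相关得到不高于采样间隔精度的粗值,再用有界一维优化得到亚采样间隔精度的平移量,与正文描述的两步流程对应。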
在本实施例中,对角速度测量值进行重采样处理,得到重采样后的角速度测量值,能够确保后续进行的互相关运算是有效的。将重采样后的角速度测量值与摄像头的角速度进行互相关运算,得到第一平移量,能够快速计算得到摄像头与惯性测量单元之间的延迟。对第一平移量进行优化处理,得到第一延迟数据,能够提高摄像头与惯性测量单元之间的延迟的计算准确性,有效地对视频数据进行防抖处理,从而有效提高了视频数据的清晰度。
在一个实施例中,根据角速度测量值以及角速度计算值计算第二延迟数据包括:对角速度测量值进行重采样处理,得到重采样后的角速度测量值;将重采样后的角速度测量值与角速度计算值进行互相关运算,得到第二平移量;对第二平移量进行优化处理,得到第二延迟数据。
由于在计算第一延迟数据的过程中,对角速度测量值进行积分运算可能会导致存在延迟误差。为了消除延迟误差,终端可以对角速度测量值进行重采样处理,得到重采样后的角速度测量值。在其中一个实施例中,对角速度测量值进行重采样的方式与上述计算第一延迟数据的过程中对角速度测量值进行重采样的方式可以是相同的,例如,可以是多项式插值、分段插值、三次B样条插值法等中的任意一种。
终端在得到重采样后的角速度测量值后,可以根据上述公式(9)将重采样后的角速度测量值与惯性测量单元对应的角速度计算值进行互相关运算,得到第二平移量。此时,δt_0表示第二平移量,ω_c(t)表示惯性测量单元在时间戳为t时的角速度计算值,ω_i(t+δ)表示惯性测量单元在时间戳为t+δ时的角速度测量值。
终端从而根据上述公式(10)对第二平移量进行优化处理,得到第二延迟数据。同样的,终端可以将δt_0作为δ的初始值代入至上述公式中,并采用现有的非线性优化方法,如高斯牛顿法、梯度下降法、共轭梯度法、LM法(Levenberg-Marquardt,列文伯格-马夸尔特法)中的至少一种对上述公式(10)进行求解,得到计算延迟误差过程中的最终平移量,作为第二延迟数据。
在本实施例中,终端通过对角速度测量值进行重采样处理,将重采样后的角速度测量值与角速度计算值进行互相关运算,得到第二平移量,对第二平移量进行优化处理,得到第二延迟数据,能够得到在计算第一延迟数据的过程中对角速度测量值进行积分运算导致的延迟误差,有利于根据目标延迟数据准确地将视频数据与惯性测量单元的测量数据进行对齐,完成视频数据的防抖处理,进一步提高了视频数据的清晰度。
在一个实施例中,上述方法还包括:根据测量数据在视频数据中确定非急速旋转视频段;在非急速旋转视频段中选取目标视频段;根据目标视频段计算摄像头在世界坐标系下的旋转向量。
当视频数据中存在急速旋转视频段或者纹理场景不够丰富时,会影响后续计算得到的摄像头在世界坐标系下的旋转向量的准确性。为了提高摄像头在世界坐标系下的旋转向量的准确性,终端可以根据测量数据在视频数据中确定非急速旋转视频段。例如,非急速旋转视频段可以是摄像头转过的角度小于或者等于45度的视频段。具体的,终端可以先根据测量数据在视频数据中识别急速旋转视频段,从而将急速旋转视频段以外的视频段确定为非急速旋转视频段。在其中一个实施例中,终端可以在1s的时间窗口内,对测量数据中的角速度测量值进行积分运算,得到摄像头转过的角度,判断该角度是否大于45度,若大于45度,则确定摄像头此时处于急速旋转状态。终端可以通过上述方法识别视频数据中的急速旋转视频段,并将急速旋转视频段以外的视频段确定为非急速旋转视频段。
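上述在时间窗口内对角速度测量值积分、并与45度阈值比较以识别急速旋转的过程,可以用如下Python草图示意(窗口的步进方式、采样假设等均为本示例自拟):

```python
import numpy as np

def find_rapid_segments(timestamps, gyro, window=1.0, max_deg=45.0):
    """在滑动时间窗口内对角速度模长积分,转角超过阈值即判为急速旋转窗口。"""
    rapid = []
    for start_t in np.arange(timestamps[0], timestamps[-1] - window, window):
        mask = (timestamps >= start_t) & (timestamps < start_t + window)
        if mask.sum() < 2:
            continue
        t_w = timestamps[mask]
        w_w = np.linalg.norm(gyro[mask], axis=1)
        # 梯形法数值积分,得到窗口内转过的角度(弧度)
        angle = np.sum(0.5 * (w_w[1:] + w_w[:-1]) * np.diff(t_w))
        if np.degrees(angle) > max_deg:
            rapid.append((float(start_t), float(start_t + window)))
    return rapid
```

例如,以1rad/s(约57度/秒)匀速旋转的窗口会被判为急速旋转,而0.5rad/s(约29度/秒)的窗口不会。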
非急速旋转视频段中可以包括多帧图像。目标视频段是指包含丰富纹理场景,且不包含急速转弯的视频段。
终端在非急速旋转视频段内,对每帧图像进行特征点检测,得到检测后的每帧图像。可以采用现有的检测方法对图像进行特征点检测,例如,SURF(Speeded Up Robust Features,加速稳健特征)算法、ORB(Oriented FAST and Rotated BRIEF)算法、SIFT(Scale-invariant feature transform,尺度不变特征变换)算法等中的任意一种。终端可以将检测后的每帧图像进行区域划分,得到多个图像区域,从而统计每个图像区域内分布的特征点数量,并根据统计的特征点数量计算每帧图像对应的标准差。终端将标准差与阈值进行比较,若标准差小于阈值,则表明当前帧图像对应的拍摄场景为丰富纹理场景。若预设时间段内各帧图像的标准差均小于阈值,则表明预设时间段内的拍摄场景均为丰富纹理场景,此时停止目标视频段的选取,将预设时间段内的视频数据确定为目标视频段,输出预设时间段对应的开始时间戳以及结束时间戳。若不存在各帧图像的标准差均小于阈值的预设时间段,则可以查找预设时间段内标准差小于阈值的图像帧数最多的视频数据,将查找到的视频数据确定为目标视频段。例如,预设时间段可以是15s,若最近15s内各帧图像的标准差均小于阈值,则停止目标视频段的选取,将最近15s内的视频数据确定为目标视频段,输出目标视频段对应的开始时间戳以及结束时间戳;若不存在15s内各帧图像的标准差均小于阈值的情况,则查找15s内标准差小于阈值的图像帧数最多的视频数据,将该15s内的视频数据作为目标视频段,输出目标视频段对应的开始时间戳以及结束时间戳。终端进而根据目标视频段计算摄像头在世界坐标系下的旋转向量。
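上述"划分图像区域、统计各区域特征点数量并计算标准差"的纹理丰富程度判断,可以用如下Python代码草图示意(此处假设特征点坐标已由特征点检测算法给出,区域划分方式取2×4网格,这些均为本示例的假设):

```python
import numpy as np

def texture_std(points, width, height, grid=(2, 4)):
    """统计特征点在各图像区域内的数量,返回计数的标准差;
    标准差越小,特征点分布越均匀,纹理场景越丰富。"""
    rows, cols = grid
    counts = np.zeros(rows * cols)
    for x, y in points:
        r = min(int(y / height * rows), rows - 1)
        c = min(int(x / width * cols), cols - 1)
        counts[r * cols + c] += 1
    return counts.std()

def is_texture_rich(points, width, height, threshold):
    # 标准差小于阈值,判定当前帧对应的拍摄场景为丰富纹理场景
    return texture_std(points, width, height) < threshold
```

例如,特征点在各区域均匀分布时标准差为0,集中在单个区域时标准差明显增大。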
在其中一个实施例中,可以根据预设时间间隔提取一帧图像进行特征点检测。例如,预设时间间隔可以是1s。
在其中一个实施例中,终端可以对检测后的图像进行4次不同的区域划分,得到8个图像区域,图像区域的示意图可以如图6所示。
在本实施例中,通过在非急速旋转视频段中选取目标视频段,根据目标视频段计算摄像头在世界坐标系下的旋转向量。避免了急速旋转视频段或者纹理场景不够丰富的视频数据对摄像头对应的旋转向量的影响,提高了摄像头在世界坐标系下的旋转向量的准确性,从而能够准确计算摄像头与惯性测量单元之间的延迟。
应该理解的是,虽然图2至3的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图2至3中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
在一个实施例中,如图7所示,提供了一种视频数据的防抖处理装置,包括:获取模块702、第一计算模块704、第一拟合模块706、第二计算模块708、第二拟合模块710和防抖模块712,其中:
获取模块702,用于获取摄像头采集的视频数据,以及获取惯性测量单元的测量数据,测量数据包括角速度测量值。
第一计算模块704,用于根据视频数据计算摄像头在世界坐标系下的旋转向量。
第一拟合模块706,用于对摄像头在世界坐标系下的旋转向量进行拟合处理,得到摄像头的角速度。
第二计算模块708,用于根据测量数据计算惯性测量单元在世界坐标系下的旋转向量。
第二拟合模块710,用于对惯性测量单元在世界坐标系下的旋转向量进行拟合处理,得到惯性测量单元对应的角速度计算值。
防抖模块712,用于根据角速度测量值、摄像头的角速度以及角速度计算值对视频数据进行防抖处理。
在一个实施例中,视频数据包括多帧图像,第一拟合模块706还用于根据摄像头在世界坐标系下的旋转向量计算每帧图像对应的目标旋转向量,得到摄像头对应的连续旋转向量;将连续旋转向量进行拟合计算,得到摄像头的角速度。
在一个实施例中,第一拟合模块706还用于获取当前帧图像,在摄像头在世界坐标系下的旋转向量中获取当前帧图像对应的原始旋转向量以及上一帧图像对应的目标旋转向量;根据当前帧图像对应的原始旋转向量以及上一帧图像对应的目标旋转向量计算当前帧图像对应的目标旋转向量;将下一帧图像更新为当前帧图像,返回在摄像头在世界坐标系下的旋转向量中获取当前帧图像对应的原始旋转向量以及上一帧图像对应的目标旋转向量的步骤,直至计算得到最后一帧图像对应的目标旋转向量,得到摄像头对应的连续旋转向量。
在一个实施例中,防抖模块712还用于根据角速度测量值以及摄像头的角速度计算第一延迟数据;根据角速度测量值以及角速度计算值计算第二延迟数据;根据第一延迟数据以及第二延迟数据计算视频数据对应的目标延迟数据;根据目标延迟数据对视频数据进行防抖处理。
在一个实施例中,防抖模块712还用于对角速度测量值进行重采样处理,得到重采样后的角速度测量值;将重采样后的角速度测量值与摄像头的角速度进行互相关运算,得到第一平移量;对第一平移量进行优化处理,得到第一延迟数据。
在一个实施例中,防抖模块712还用于对角速度测量值进行重采样处理,得到重采样后的角速度测量值;将重采样后的角速度测量值与角速度计算值进行互相关运算,得到第二平移量;对第二平移量进行优化处理,得到第二延迟数据。
在一个实施例中,防抖模块712还用于将第一延迟数据与第二延迟数据进行作差处理,计算得到视频数据对应的目标延迟数据。
在一个实施例中,上述装置还包括:视频段选取模块,用于根据测量数据在视频数据中确定非急速旋转视频段;在非急速旋转视频段中选取目标视频段;根据目标视频段计算摄像头在世界坐标系下的旋转向量。
关于视频数据的防抖处理装置的具体限定可以参见上文中对于视频数据的防抖处理方法的限定,在此不再赘述。上述视频数据的防抖处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对 应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是终端,其内部结构图可以如图8所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口、显示屏和输入装置。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机程序。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种视频数据的防抖处理方法。该计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏,该计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图8中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,该存储器存储有计算机程序,该处理器执行计算机程序时实现上述各个实施例中的步骤。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现上述各个实施例中的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (13)

  1. 一种视频数据的防抖处理方法,其特征在于,所述方法包括:
    获取摄像头采集的视频数据,以及获取惯性测量单元的测量数据,所述测量数据包括角速度测量值;
    根据所述视频数据计算所述摄像头在世界坐标系下的旋转向量;
    对所述摄像头在世界坐标系下的旋转向量进行拟合处理,得到所述摄像头的角速度;
    根据所述测量数据计算所述惯性测量单元在世界坐标系下的旋转向量;
    对所述惯性测量单元在世界坐标系下的旋转向量进行拟合处理,得到所述惯性测量单元对应的角速度计算值;
    根据所述角速度测量值、所述摄像头的角速度以及所述角速度计算值对所述视频数据进行防抖处理。
  2. 根据权利要求1所述的方法,其特征在于,所述视频数据包括多帧图像,所述对所述摄像头在世界坐标系下的旋转向量进行拟合处理,得到所述摄像头的角速度包括:
    根据所述摄像头在世界坐标系下的旋转向量计算每帧图像对应的目标旋转向量,得到所述摄像头对应的连续旋转向量;
    将所述连续旋转向量进行拟合计算,得到所述摄像头的角速度。
  3. 根据权利要求2所述的方法,其特征在于,所述根据所述摄像头在世界坐标系下的旋转向量计算每帧图像对应的目标旋转向量,得到所述摄像头对应的连续旋转向量包括:
    获取当前帧图像,在所述摄像头在世界坐标系下的旋转向量中获取所述当前帧图像对应的原始旋转向量以及上一帧图像对应的目标旋转向量;
    根据所述当前帧图像对应的原始旋转向量以及所述上一帧图像对应的目标旋转向量计算所述当前帧图像对应的目标旋转向量;
    将下一帧图像更新为所述当前帧图像,返回所述在所述摄像头在世界坐标系下的旋转向量中获取所述当前帧图像对应的原始旋转向量以及上一帧图像对应的目标旋转向量的步骤,直至计算得到最后一帧图像对应的目标旋转向量,得到所述摄像头对应的连续旋转向量。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述角速度测量值、所述摄像头的角速度以及所述角速度计算值对所述视频数据进行防抖处理包括:
    根据所述角速度测量值以及所述摄像头的角速度计算第一延迟数据;
    根据所述角速度测量值以及所述角速度计算值计算第二延迟数据;
    根据所述第一延迟数据以及所述第二延迟数据计算所述视频数据对应的目标延迟数据;
    根据所述目标延迟数据对所述视频数据进行防抖处理。
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述角速度测量值以及所述摄像头的角速度计算第一延迟数据包括:
    对所述角速度测量值进行重采样处理,得到重采样后的角速度测量值;
    将所述重采样后的角速度测量值与所述摄像头的角速度进行互相关运算,得到第一平移量;
    对所述第一平移量进行优化处理,得到第一延迟数据。
  6. 根据权利要求4所述的方法,其特征在于,所述根据所述角速度测量值以及所述角速度计算值计算第二延迟数据包括:
    对所述角速度测量值进行重采样处理,得到重采样后的角速度测量值;
    将所述重采样后的角速度测量值与所述角速度计算值进行互相关运算,得到第二平移量;
    对所述第二平移量进行优化处理,得到第二延迟数据。
  7. 根据权利要求4所述的方法,其特征在于,所述根据所述第一延迟数据以及所述第二延迟数据计算所述视频数据对应的目标延迟数据包括:
    将所述第一延迟数据与所述第二延迟数据进行作差处理,计算得到所述视频数据对应的目标延迟数据。
  8. 根据权利要求1至7任意一项所述的方法,其特征在于,所述方法还包括:
    根据所述测量数据在所述视频数据中确定非急速旋转视频段;
    在所述非急速旋转视频段中选取目标视频段;
    根据所述目标视频段计算所述摄像头在世界坐标系下的旋转向量。
  9. 根据权利要求8所述的方法,其特征在于,所述非急速旋转视频段中包括多帧图像,所述在所述非急速旋转视频段中选取目标视频段包括:
    对所述非急速旋转视频段内的每帧图像进行特征点检测,并计算所述每帧图像对应的特征点数量的标准差;
    根据所述每帧图像对应的特征点数量的标准差选取所述目标视频段。
  10. 一种视频数据的防抖处理装置,其特征在于,所述装置包括:
    获取模块,用于获取摄像头采集的视频数据,以及获取惯性测量单元的测量数据,所述测量数据包括角速度测量值;
    第一计算模块,用于根据所述视频数据计算所述摄像头在世界坐标系下的旋转向量;
    第一拟合模块,用于对所述摄像头在世界坐标系下的旋转向量进行拟合处理,得到所述摄像头的角速度;
    第二计算模块,用于根据所述测量数据计算所述惯性测量单元在世界坐标系下的旋转向量;
    第二拟合模块,用于对所述惯性测量单元在世界坐标系下的旋转向量进行拟合处理,得到所述惯性测量单元对应的角速度计算值;
    防抖模块,用于根据所述角速度测量值、所述摄像头的角速度以及所述角速度计算值对所述视频数据进行防抖处理。
  11. 根据权利要求10所述的装置,其特征在于,所述视频数据包括多帧图像,所述第一拟合模块还用于根据所述摄像头在世界坐标系下的旋转向量计算每帧图像对应的目标旋转向量,得到所述摄像头对应的连续旋转向量;将所述连续旋转向量进行拟合计算,得到所述摄像头的角速度。
  12. 一种计算机设备,包括存储器和处理器,所述存储器存储有可在处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现权利要求1至9中任一项所述的方法的步骤。
  13. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至9中任一项所述的方法的步骤。
PCT/CN2022/077636 2021-02-26 2022-02-24 视频数据的防抖处理方法、装置、计算机设备和存储介质 WO2022179555A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110215905.0A CN114979456B (zh) 2021-02-26 2021-02-26 视频数据的防抖处理方法、装置、计算机设备和存储介质
CN202110215905.0 2021-02-26

Publications (1)

Publication Number Publication Date
WO2022179555A1 true WO2022179555A1 (zh) 2022-09-01

Family

ID=82972834

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/077636 WO2022179555A1 (zh) 2021-02-26 2022-02-24 视频数据的防抖处理方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN114979456B (zh)
WO (1) WO2022179555A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140036101A1 (en) * 2011-04-12 2014-02-06 Fujifilm Corporation Image pickup apparatus
CN108629793A (zh) * 2018-03-22 2018-10-09 中国科学院自动化研究所 使用在线时间标定的视觉惯性测程法与设备
CN110678898A (zh) * 2017-06-09 2020-01-10 厦门美图之家科技有限公司 一种视频防抖方法及移动设备
CN110880189A (zh) * 2018-09-06 2020-03-13 舜宇光学(浙江)研究院有限公司 联合标定方法及其联合标定装置和电子设备

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4926920B2 (ja) * 2007-11-16 2012-05-09 キヤノン株式会社 防振画像処理装置及び防振画像処理方法
KR100968974B1 (ko) * 2008-08-26 2010-07-14 삼성전기주식회사 손떨림 보정 장치
US20130107066A1 (en) * 2011-10-27 2013-05-02 Qualcomm Incorporated Sensor aided video stabilization
CN102865881B (zh) * 2012-03-06 2014-12-31 武汉大学 一种惯性测量单元的快速标定方法
CN106709222B (zh) * 2015-07-29 2019-02-01 中国科学院沈阳自动化研究所 基于单目视觉的imu漂移补偿方法
CN107255476B (zh) * 2017-07-06 2020-04-21 青岛海通胜行智能科技有限公司 一种基于惯性数据和视觉特征的室内定位方法和装置
CN107687850B (zh) * 2017-07-26 2021-04-23 哈尔滨工业大学深圳研究生院 一种基于视觉和惯性测量单元的无人飞行器位姿估计方法
CN107801014B (zh) * 2017-10-25 2019-11-08 深圳岚锋创视网络科技有限公司 一种全景视频防抖的方法、装置及便携式终端
WO2019080046A1 (zh) * 2017-10-26 2019-05-02 深圳市大疆创新科技有限公司 惯性测量单元的漂移标定方法、设备及无人飞行器
CN109561254B (zh) * 2018-12-18 2020-11-03 影石创新科技股份有限公司 一种全景视频防抖的方法、装置及便携式终端
CN110139031B (zh) * 2019-05-05 2020-11-06 南京大学 一种基于惯性感知的视频防抖系统及其工作方法
CN110166695B (zh) * 2019-06-26 2021-10-01 Oppo广东移动通信有限公司 摄像头防抖方法、装置、电子设备和计算机可读存储介质
CN110503684A (zh) * 2019-08-09 2019-11-26 北京影谱科技股份有限公司 相机位姿估计方法和装置
CN111314604B (zh) * 2020-02-19 2021-08-31 Oppo广东移动通信有限公司 视频防抖方法和装置、电子设备、计算机可读存储介质


Also Published As

Publication number Publication date
CN114979456A (zh) 2022-08-30
CN114979456B (zh) 2023-06-30

Similar Documents

Publication Publication Date Title
US11668571B2 (en) Simultaneous localization and mapping (SLAM) using dual event cameras
WO2022028595A1 (zh) 图像处理方法、装置、计算机可读存储介质及计算机设备
US11227397B2 (en) Block-matching optical flow and stereo vision for dynamic vision sensors
WO2019062291A1 (zh) 一种双目视觉定位方法、装置及系统
EP3028252B1 (en) Rolling sequential bundle adjustment
CN111354042A (zh) 机器人视觉图像的特征提取方法、装置、机器人及介质
CN108871311B (zh) 位姿确定方法和装置
CN110660098B (zh) 基于单目视觉的定位方法和装置
CN111127524A (zh) 一种轨迹跟踪与三维重建方法、系统及装置
Zheng et al. Photometric patch-based visual-inertial odometry
WO2022028594A1 (zh) 图像处理方法、装置、计算机可读存储介质及计算机设备
CN111225155B (zh) 视频防抖方法、装置、电子设备、计算机设备和存储介质
CN108827341A (zh) 用于确定图像采集装置的惯性测量单元中的偏差的方法
CN111623773B (zh) 一种基于鱼眼视觉和惯性测量的目标定位方法及装置
CN109040525B (zh) 图像处理方法、装置、计算机可读介质及电子设备
TW202314593A (zh) 定位方法及設備、電腦可讀儲存媒體
WO2022016909A1 (zh) 获取Wi-Fi指纹空间分布的方法、装置和电子设备
WO2022179555A1 (zh) 视频数据的防抖处理方法、装置、计算机设备和存储介质
US20220174217A1 (en) Image processing method and device, electronic device, and computer-readable storage medium
WO2021114883A1 (zh) 图像配准方法、终端及存储介质
CN110785792A (zh) 3d建模方法、电子设备、存储介质及程序产品
CN113436349A (zh) 一种3d背景替换方法、装置、存储介质和终端设备
CN114697542A (zh) 视频处理方法、装置、终端设备及存储介质
CN117671007B (zh) 一种位移监测方法、装置、电子设备及存储介质
CN117007078A (zh) 陀螺仪零偏的估计方法、装置、设备、存储介质和产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22758919

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22758919

Country of ref document: EP

Kind code of ref document: A1