CN111712857A - Image processing method, device, pan-tilt head (gimbal) and storage medium - Google Patents
Image processing method, device, pan-tilt head (gimbal) and storage medium
- Publication number
- CN111712857A CN111712857A CN201980011946.9A CN201980011946A CN111712857A CN 111712857 A CN111712857 A CN 111712857A CN 201980011946 A CN201980011946 A CN 201980011946A CN 111712857 A CN111712857 A CN 111712857A
- Authority
- CN
- China
- Prior art keywords
- frame image
- optimized
- current frame
- rotation parameters
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Abstract
The invention provides an image processing method, an image processing apparatus, a pan-tilt head, and a storage medium. The method includes: establishing a reprojection error constraint function according to original rotation parameters, where the original rotation parameters are rotation parameters of the camera pose of the current frame image relative to the camera pose of the previous frame image, or rotation parameters of the pan-tilt attitude of the current frame image relative to the pan-tilt attitude of the previous frame image; optimizing the reprojection error constraint function to obtain optimized rotation parameters; and correcting the current frame image according to the optimized rotation parameters. The scheme smooths the shooting motion trajectory and corrects the captured images with high efficiency.
Description
Technical Field
The embodiments of the present invention relate to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a pan-tilt head, and a storage medium.
Background
Wide-range moving time-lapse (hyperlapse), also called high-dynamic time-lapse, is an emerging shooting technique in time-lapse photography: the camera position is changed between exposures so that the subject is photographed while the camera moves continuously. Unlike a simple "moving time-lapse" (dolly shot) in which the camera moves along a slide rail, wide-range moving time-lapse requires the camera to travel a long distance, which easily introduces large jitter. The captured video therefore requires stabilization processing.
Disclosure of Invention
The present invention provides an image processing method, an image processing apparatus, a pan-tilt head, and a storage medium that improve processing efficiency.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
establishing a reprojection error constraint function according to the original rotation parameters; the original rotation parameters are rotation parameters of the camera pose of the current frame image relative to the camera pose of the previous frame image; or, the original rotation parameter is a rotation parameter of the pan-tilt attitude of the current frame image relative to the pan-tilt attitude of the previous frame image;
optimizing the reprojection error constraint function to obtain optimized rotation parameters;
and correcting the current frame image according to the optimized rotation parameters.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including: a memory, a processor; wherein the memory is to store instructions;
the processor is configured to execute the instructions to implement:
establishing a reprojection error constraint function according to the original rotation parameters; the original rotation parameters are rotation parameters of the camera pose of the current frame image relative to the camera pose of the previous frame image; or, the original rotation parameter is a rotation parameter of the pan-tilt attitude of the current frame image relative to the pan-tilt attitude of the previous frame image;
optimizing the reprojection error constraint function to obtain optimized rotation parameters;
and correcting the current frame image according to the optimized rotation parameters.
In a third aspect, an embodiment of the present invention provides a storage medium, including: a readable storage medium and a computer program for implementing the image processing method provided in any of the above-mentioned embodiments of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a pan-tilt head, including:
at least one rotating shaft;
an angle sensor; and
an image processing apparatus according to any of the second aspect above.
The invention provides an image processing method, an image processing apparatus, a pan-tilt head, and a storage medium. A reprojection error constraint function is established according to the original rotation parameters, where the original rotation parameters are rotation parameters of the camera pose of the current frame image relative to the camera pose of the previous frame image, or rotation parameters of the pan-tilt attitude of the current frame image relative to the pan-tilt attitude of the previous frame image. The reprojection error constraint function is optimized to obtain optimized rotation parameters, and the current frame image is corrected according to them. By optimizing the relative rotation parameters between images in combination with the prior information of the original rotation parameters, the shooting motion trajectory is smoothed; correcting the images according to the optimized relative rotation parameters effectively suppresses jitter and completes the stabilization processing with high efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of an application scenario provided by the present invention;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart of an image processing method according to another embodiment of the present invention;
FIG. 4 is a flowchart of an image processing method according to another embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
First, the application scenario of the present invention is introduced:
the image processing method provided by the embodiment of the invention is applied to mobile time-delay photography, and can be used for performing stability enhancement processing on the images or videos acquired by mobile time-delay photography, so that the processing efficiency is improved.
Wherein the method may be performed by an image processing apparatus.
In an embodiment of the present invention, as shown in fig. 1, the image processing apparatus may be applied to a pan-tilt head; specifically, it may be integrated in the pan-tilt head or be a remote control device wirelessly connected to the pan-tilt head, which is not limited in the embodiments of the present invention.
The pan-tilt head may be mounted on a device such as an unmanned aerial vehicle or a robot.
In an embodiment of the present invention, the image processing apparatus may be applied to an electronic device such as a mobile terminal, and for example, the image processing apparatus includes: smart phones, tablet computers, wearable devices, and the like.
The method provided by the embodiments of the present invention may be implemented by the image processing apparatus, for example by a processor of the image processing apparatus executing corresponding software code; it may also be implemented by the image processing apparatus executing corresponding software code while exchanging data with a server, for example with the server performing part of the operations and controlling the image processing apparatus to execute the image processing method.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 2, the method provided by this embodiment includes:
Step 201: establishing a reprojection error constraint function according to the original rotation parameters.

Specifically, a user captures multiple images using the moving time-lapse function. The stabilization processing may be performed after each image is acquired, after all images are acquired, or after multiple frames are extracted and stitched into a video.
Further, to improve processing efficiency and speed up computation, a high-resolution image may be downscaled, for example to 1080p.
Furthermore, a color image can be converted into a grayscale image; specifically, it can be converted into YUV format and the image information of the Y channel used directly.
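As a rough illustration of this preprocessing, the sketch below downscales a large frame and extracts a luma channel. The BT.601 luma weights, the strided downscaling, and the function name `preprocess` are illustrative assumptions, not the patent's prescribed implementation:

```python
import numpy as np

def preprocess(frame_rgb, max_height=1080):
    """Downscale a high-resolution RGB frame and keep only a luma (Y) channel.

    Illustrative sketch: strided subsampling stands in for proper resampling,
    and BT.601 weights stand in for the Y channel of the YUV conversion.
    """
    h, w, _ = frame_rgb.shape
    if h > max_height:  # "compress" large images, e.g. toward 1080p
        step = int(np.ceil(h / max_height))
        frame_rgb = frame_rgb[::step, ::step]
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # Y channel (BT.601 weights)
    return y.astype(np.float32)
```

Only the single-channel result is carried into the later feature-tracking steps, which reduces both memory and computation.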
When performing the stabilization processing, the original rotation parameter of the current frame image relative to the previous frame image is first acquired; it represents the relative rotation between the two frames. The original rotation parameter may be that of the pan-tilt attitude of the current frame image relative to the pan-tilt attitude of the previous frame image, which can be acquired through a gyroscope of the pan-tilt head's Inertial Measurement Unit (IMU); alternatively, it may be that of the camera pose of the current frame image relative to the camera pose of the previous frame image.
A reprojection error constraint function is then established according to the obtained original rotation parameters.
Step 202: optimizing the reprojection error constraint function to obtain optimized rotation parameters.
Step 203: correcting the current frame image according to the optimized rotation parameters.
Specifically, optimizing the reprojection error constraint function to obtain the optimized rotation parameters smooths the shooting motion trajectory; the current frame image is then corrected according to the optimized rotation parameters, which effectively suppresses jitter.
Further, after multiple frames of images have been corrected, the corrected images may be synthesized into a stabilized video.
In the method of this embodiment, a reprojection error constraint function is established according to the original rotation parameters, where the original rotation parameters are rotation parameters of the camera pose of the current frame image relative to the camera pose of the previous frame image, or rotation parameters of the pan-tilt attitude of the current frame image relative to the pan-tilt attitude of the previous frame image; the reprojection error constraint function is optimized to obtain optimized rotation parameters; and the current frame image is corrected according to the optimized rotation parameters. By optimizing the relative rotation parameters between images in combination with the prior information of the original rotation parameters, the shooting motion trajectory is smoothed; correcting the images according to the optimized relative rotation parameters effectively suppresses jitter and completes the stabilization processing with high efficiency.
On the basis of the above embodiment, further, before step 201, the following operations may be performed:
The mutually matched feature points to be processed of the current frame image and of the previous frame image are respectively acquired.
Specifically, in order to reduce the amount of calculation, when optimizing the original rotation parameters, especially when establishing the reprojection error constraint function, only part of the pixel points of the image, that is, the feature points to be processed, may be considered.
That is, mutually matched feature points to be processed are acquired from the current frame image and the previous frame image.
The feature points to be processed can be obtained specifically by the following method:
extracting the feature points to be processed in the current frame image according to a corner detection algorithm; and
determining the matched feature points to be processed in the previous frame image using the KLT (Kanade-Lucas-Tomasi) algorithm according to the pixel coordinates of the feature points to be processed in the current frame image.
Wherein the corner detection algorithm comprises at least one of: FAST algorithm, SUSAN algorithm, Harris corner detection algorithm.
Specifically, feature points to be processed in the current frame image can be extracted through a corner detection algorithm, and then matched feature points to be processed in the previous frame image are determined through a KLT algorithm.
For example, the Harris corner detection algorithm extracts the feature points to be processed in the current frame image. A matrix A is defined as the structure tensor:

A = Σ_{u,v} w(u, v) · [ Ix²    Ix·Iy
                        Ix·Iy  Iy²   ]

where w(u, v) is a window function, u and v are the offsets of the window in the horizontal and vertical directions, respectively, and Ix and Iy are the gradients in the x and y directions at a point on the image.
The response function Mc is defined as:

Mc = det(A) − κ·trace²(A)

where det(A) is the determinant of matrix A, trace(A) is the trace of matrix A, and κ is a parameter for adjusting sensitivity. A threshold Mth is set; when Mc > Mth, the point is taken as a feature point to be processed.
Further, the matched feature points to be processed in the previous frame of image can be tracked according to the KLT algorithm.
The KLT algorithm iterates on the feature points to be processed in the current frame image to obtain the offset h of the current frame image relative to the previous frame image; the matched feature points in the previous frame image are then obtained from the offset and the feature points to be processed in the current frame image.
For each feature point to be processed, the offset is updated iteratively using the following formula:

h_{k+1} = h_k + [ Σ_x w(x)·F′(x + h_k)·(G(x) − F(x + h_k)) ] / [ Σ_x w(x)·F′(x + h_k)² ]

where x denotes the coordinate of the feature point to be processed in the previous frame image, F(x + h_k) denotes the feature point to be processed in the current frame image, F′(x + h_k) denotes the derivative of F(x + h_k), G(x) denotes the feature point to be processed in the previous frame image, h0 is the initial value of the offset, h_k is the offset at the k-th iteration, h_{k+1} is the offset at the (k+1)-th iteration, and w(x) is a preset weight.
In an embodiment of the present invention, it may further be verified whether the feature points to be processed in the two frames of images match: according to the feature points to be processed in the previous frame image, the KLT (Kanade-Lucas-Tomasi tracking) algorithm iterates to obtain the offset h′ of the previous frame image relative to the current frame image; if h′ = −h, the two feature points to be processed match.
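The iterative offset update can be illustrated in one dimension. Uniform weights w(x) = 1, a fixed iteration count, and the name `klt_offset` are simplifying assumptions; the full tracker works on 2-D patches, often with image pyramids:

```python
import numpy as np

def klt_offset(G, F, h0=0.0, iters=20):
    """Estimate the shift h such that F(x + h) ~= G(x), via the update
    h_{k+1} = h_k + sum w*F'(x+h_k)*(G(x) - F(x+h_k)) / sum w*F'(x+h_k)^2.

    1-D signals, uniform weight w(x) = 1: a sketch of the KLT iteration,
    not the full pyramidal 2-D tracker."""
    x = np.arange(len(G), dtype=np.float64)
    h = float(h0)
    for _ in range(iters):
        Fh = np.interp(x + h, x, F)   # F sampled at x + h_k
        dF = np.gradient(Fh)          # derivative F'(x + h_k)
        denom = np.sum(dF * dF)
        if denom < 1e-12:
            break
        h += np.sum(dF * (G - Fh)) / denom
    return h
```

Running the same estimator with the roles of the two frames swapped yields h′; the forward-backward check h′ ≈ −h then rejects unreliable matches, as described above.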
In other embodiments of the present invention, feature points to be processed in the previous frame image may also be extracted through a corner detection algorithm, and then the matched feature points to be processed in the current frame image are determined through a KLT algorithm.
Further, as shown in fig. 3, the method of the present embodiment includes the following steps:
Step 301: establishing the reprojection error constraint function. In an embodiment of the present invention, the reprojection error constraint function F may be established using the following formula (1):

F = Σ_{i=1..n} ‖π(R·Pᵢ + t) − pᵢ‖² + w·‖R·R_gyro,iᵀ − I‖²    (1)

where Pᵢ is the three-dimensional coordinate, in the camera coordinate system, of the i-th feature point to be processed in the previous frame image; pᵢ is the pixel coordinate of the matching feature point to be processed in the current frame image; R is the rotation parameter to be optimized; t is the translation parameter of the current frame image relative to the previous frame image, also to be optimized; π denotes the projection function; n is the number of feature points to be processed; w is a preset weight, an engineering empirical value that may be adjusted as required; R_gyro,i is the original rotation parameter; and I is an identity matrix.

The projection function π maps a three-dimensional point Pᵢ in the camera coordinate system to the pixel coordinates of the current frame image: π(Pᵢ) = K·Pᵢ/Zᵢ, where K is the intrinsic matrix of the image acquisition component (e.g., a camera) and Zᵢ is the depth of Pᵢ.
For a finite projective camera, the intrinsic matrix K may be expressed as:

K = [ αx  γ   u0
      0   αy  v0
      0   0   1  ]

where αx = f·mx, αy = f·my, f is the focal length, and mx and my are the numbers of pixels per unit distance in the x and y directions, respectively; γ is the skew (distortion) parameter between the x and y axes (for a CCD camera, the pixels may not be square); and (u0, v0) is the position of the optical center.
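A minimal sketch of the intrinsic matrix and the projection function π follows; the function names and all parameter values in the usage note are illustrative assumptions:

```python
import numpy as np

def intrinsic_matrix(f, mx, my, u0, v0, gamma=0.0):
    """Intrinsic matrix K of a finite projective camera:
    alpha_x = f*mx, alpha_y = f*my; (u0, v0) optical center; gamma skew."""
    return np.array([[f * mx, gamma, u0],
                     [0.0, f * my, v0],
                     [0.0, 0.0, 1.0]])

def project(K, P):
    """Projection pi: map a 3-D point P = (X, Y, Z) in the camera frame
    to pixel coordinates (u, v) = (K @ P)[:2] / Z."""
    uvw = K @ np.asarray(P, dtype=np.float64)
    return uvw[:2] / uvw[2]
```

For instance, with f·mx = f·my = 1000 and optical center (640, 360), a point on the optical axis projects exactly to the optical center, as expected.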
Step 302: optimizing the reprojection error constraint function to obtain the optimized rotation parameter that minimizes the reprojection error constraint function.
That is, solving for the R that minimizes the reprojection error constraint function.
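To give a concrete feel for this minimization, the sketch below reduces the problem to a single rotation angle about the optical axis, with t = 0 and a quadratic prior toward the gyroscope-measured angle standing in for the matrix prior term. These simplifications, and the names `rotz` and `optimize_rotation`, are assumptions for illustration; a real implementation would optimize a full 3-D rotation with a nonlinear least-squares solver:

```python
import numpy as np

def rotz(theta):
    """Rotation about the optical (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def optimize_rotation(P, p, K, theta_gyro, w=0.1):
    """Minimize a simplified reprojection error constraint function
    F(theta) = sum_i ||pi(R(theta) P_i) - p_i||^2 + w*(theta - theta_gyro)^2
    over one rotation angle (a 1-parameter stand-in for the full rotation)."""
    def F(theta):
        R = rotz(theta)
        err = 0.0
        for Pi, pi_obs in zip(P, p):
            q = K @ (R @ Pi)
            err += np.sum((q[:2] / q[2] - pi_obs) ** 2)
        return err + w * (theta - theta_gyro) ** 2
    # coarse grid search around the gyroscope prior (illustrative solver)
    thetas = np.linspace(theta_gyro - 0.2, theta_gyro + 0.2, 801)
    vals = [F(t) for t in thetas]
    return thetas[int(np.argmin(vals))]
```

With noise-free feature correspondences the data term dominates the prior, so the recovered angle stays close to the true rotation even when the gyroscope measurement is slightly off.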
Further, according to the optimized rotation parameters, the feature points to be processed of the current frame image are corrected; alternatively, all pixel points in the current frame image may be corrected.
Step 303: filtering the optimized rotation parameters to obtain the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image; and Step 304: correcting the current frame image according to the optimized pan-tilt attitude or the optimized camera pose to obtain the corrected current frame image.
Specifically, after the optimized rotation parameters of the current frame image relative to the previous frame image are obtained, they can be filtered. Because the optimized rotation parameters pertain to a plurality of feature points to be processed, the filtering can be performed over these points to obtain the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image.
The current frame image is then corrected according to the optimized pan-tilt attitude or the optimized camera pose to obtain the corrected current frame image.
In an embodiment of the present invention, as shown in fig. 4, the filtering process in step 303 may be specifically implemented by the following steps:
Step 3031: converting the optimized rotation parameters into a quaternion representation; and
Step 3032: filtering the quaternion representation, and converting the filtered quaternion representation into the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image.
Specifically, the optimized rotation parameters may be converted into a quaternion representation; the quaternion representation is subjected to Gaussian smoothing filtering and then normalized back into a unit quaternion, which is further converted into the optimized pan-tilt attitude or the optimized camera pose Rᵢ′ corresponding to the current frame image.
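The Gaussian filtering and renormalization of quaternions can be sketched as follows. The kernel width, the hemisphere-alignment step (so that q and −q, which encode the same rotation, do not cancel), and the function name are implementation assumptions:

```python
import numpy as np

def gaussian_smooth_quaternions(quats, sigma=2.0, radius=4):
    """Gaussian-filter a sequence of unit quaternions (w, x, y, z)
    component-wise, then re-normalize each result to unit length."""
    q = np.array(quats, dtype=np.float64)
    for i in range(1, len(q)):            # hemisphere alignment
        if np.dot(q[i], q[i - 1]) < 0:
            q[i] = -q[i]
    offs = np.arange(-radius, radius + 1)
    w = np.exp(-0.5 * (offs / sigma) ** 2)
    w /= w.sum()                          # normalized Gaussian kernel
    out = np.empty_like(q)
    for i in range(len(q)):
        idx = np.clip(i + offs, 0, len(q) - 1)   # replicate at the ends
        avg = (w[:, None] * q[idx]).sum(axis=0)
        out[i] = avg / np.linalg.norm(avg)       # back onto the unit sphere
    return out
```

Component-wise averaging followed by normalization is a common lightweight approximation for smoothing nearby rotations; the renormalization step is what makes the filtered result a valid quaternion again.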
In an embodiment of the present invention, the optimized rotation parameters may also be converted into rotation parameters of the current frame image relative to the first frame image of the images to be processed. These rotation parameters are then filtered; the filtering process is similar to that in the above-described embodiment.
Further, step 304 may specifically be implemented as follows:
correcting the current frame image according to the optimized pan-tilt attitude and the original pan-tilt attitude to obtain a corrected current frame image; or,
and correcting the current frame image according to the optimized camera pose and the original camera pose to obtain a corrected current frame image.
Illustratively, the corrected current frame image may be obtained from the optimized pan-tilt attitude or the optimized camera pose using the following formula (2):

p′ = K·Rⱼ′·Rⱼᵀ·K⁻¹·p    (2)

where p is a pixel point on the current frame image, p′ is the corresponding pixel point on the corrected image, Rⱼ′ denotes the optimized pan-tilt attitude or optimized camera pose corresponding to the current frame image, Rⱼᵀ is the transpose of Rⱼ, Rⱼ is the original pan-tilt attitude or camera pose, and K is the intrinsic matrix of the image acquisition component (e.g., a camera).
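Formula (2) describes a purely rotational warp, which needs no depth information. A per-pixel sketch, assuming the rotation matrices and intrinsics are given as NumPy arrays (the function name is illustrative):

```python
import numpy as np

def correct_pixel(p, K, R_opt, R_orig):
    """Apply formula (2): p' = K R'_j R_j^T K^{-1} p, mapping a pixel of
    the current frame to its position in the corrected frame.
    p is (u, v); the homography is purely rotational, so no depth is needed."""
    H = K @ R_opt @ R_orig.T @ np.linalg.inv(K)
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]  # back from homogeneous to pixel coordinates
```

In practice the same homography H would be applied to the whole image by a warping routine; when the optimized pose equals the original pose, H is the identity and the image is unchanged.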
In an embodiment of the present invention, the relative translation of the current frame image with respect to the previous frame image may also be optimized according to the reprojection error constraint function; the optimized pan-tilt attitude or camera pose is then determined according to the optimized relative translation and the rotation parameters, so as to correct the image.
It should be noted that the method steps may all be performed on each frame image in real time, or some steps may be performed on each frame in real time while the remaining steps are performed on all frame images together; this is not limited in the embodiments of the present invention. For example, steps 201 and 202 may be processed in real time for each frame image, and step 203 (or steps 3031 and 3032) may be executed after all frame images to be processed have been acquired.
According to the method provided by the embodiment of the invention, the relative rotation parameters are optimized by estimating the motion trend of shooting, namely the relative rotation parameters between the images, so that the smooth processing of the motion track is realized, and the images are corrected according to the optimized relative rotation parameters, so that the stability enhancement processing is completed, and the efficiency is high.
Fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. The image processing apparatus provided in this embodiment is configured to execute the image processing method provided in any one of the embodiments shown in figs. 2 to 4. As shown in fig. 5, the image processing apparatus provided in this embodiment may include: a memory 22 and a processor 21. The memory 22 is configured to store instructions.
The processor 21 is operable to execute instructions to implement:
establishing a reprojection error constraint function according to the original rotation parameters; wherein the original rotation parameters are rotation parameters of the camera pose of the current frame image relative to the camera pose of the previous frame image; or, the original rotation parameter is a rotation parameter of the pan-tilt attitude of the current frame image relative to the pan-tilt attitude of the previous frame image;
optimizing the reprojection error constraint function to obtain optimized rotation parameters;
and correcting the current frame image according to the optimized rotation parameters.
In a possible implementation manner, the processor 21 is specifically configured to:
respectively acquiring mutually matched feature points to be processed of a current frame image and feature points to be processed of a previous frame image;
establishing the reprojection error constraint function according to the original rotation parameters, the feature points to be processed in the previous frame of image and a preset projection function;
and correcting the feature points to be processed of the current frame image according to the optimized rotation parameters.
In a possible implementation manner, the processor 21 is specifically configured to:
and optimizing the reprojection error constraint function to obtain an optimized rotation parameter which enables the reprojection error constraint function to be minimum.
In a possible implementation manner, the processor 21 is specifically configured to:
filtering the optimized rotation parameters to obtain the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image; and
correcting the current frame image according to the optimized pan-tilt attitude or the optimized camera pose to obtain the corrected current frame image.
In a possible implementation manner, the processor 21 is specifically configured to:
converting the optimized rotation parameters into quaternion representation;
and filtering the quaternion representation, and converting the filtered quaternion representation into the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image.
In a possible implementation manner, the processor 21 is specifically configured to:
performing Gaussian filtering processing on the quaternion expression, and performing normalization processing on the processed quaternion expression to obtain a normalized quaternion expression;
and converting the normalized quaternion representation into the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image.
In a possible implementation manner, the processor 21 is specifically configured to:
and converting the optimized rotation parameters into rotation parameters of the current frame image relative to a first frame image in the image to be processed.
In a possible implementation manner, the processor 21 is specifically configured to:
correcting the current frame image according to the optimized pan-tilt attitude and the original pan-tilt attitude to obtain a corrected current frame image; or,
and correcting the current frame image according to the optimized camera pose and the original camera pose to obtain a corrected current frame image.
In a possible implementation manner, the processor 21 is specifically configured to:
and synthesizing the corrected images into a stabilized video.
In a possible implementation manner, the processor 21 is specifically configured to:
extracting feature points to be processed in the current frame image according to an angular point detection algorithm;
and determining matched feature points to be processed in the previous frame image by using a KLT algorithm according to the pixel coordinates of the feature points to be processed in the current frame image.
In one possible implementation, the corner detection algorithm includes at least one of: FAST algorithm, SUSAN algorithm, Harris corner detection algorithm.
The image processing apparatus provided in this embodiment is used to execute the image processing method provided in any one of the embodiments shown in fig. 2 to fig. 4, and the technical principle and the technical effect are similar, and are not described herein again.
An embodiment of the present invention further provides a pan-tilt head, including:
at least one rotating shaft;
an angle sensor;
and an image processing apparatus as described in any of the preceding embodiments.
The pan-tilt head provided in this embodiment is similar in technical principle and technical effect to the foregoing embodiments, and details are not repeated here.
In a preferred embodiment, the pan-tilt head further includes an image acquisition component for acquiring multiple frames of images.
An embodiment of the present invention further provides an unmanned aerial vehicle, including:
the device comprises a body, an image acquisition component, a holder and an image processing device as described in any one of the preceding embodiments;
wherein the image acquisition component is arranged on the holder.
The unmanned aerial vehicle provided in this embodiment is similar in technical principle and technical effect to the foregoing embodiments, and details are not repeated here.
An embodiment of the present invention further provides an electronic device, including:
an image acquisition component, and an image processing apparatus as described in any of the preceding embodiments.
The electronic device provided in this embodiment is similar in technical principle and technical effect to the foregoing embodiments, and details are not repeated herein.
Illustratively, the electronic device may include: smart phones, tablet computers, wearable devices, and the like.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method in the foregoing method embodiment is implemented.
An embodiment of the present invention further provides a program product, including a computer program (i.e., execution instructions) stored in a readable storage medium. A processor may read the computer program from the readable storage medium and execute it to perform the image processing method provided by any one of the foregoing method embodiments.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the embodiments of the present invention, and are not limited thereto; although embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (26)
1. An image processing method, comprising:
establishing a reprojection error constraint function according to the original rotation parameters; the original rotation parameters are rotation parameters of the camera pose of the current frame image relative to the camera pose of the previous frame image; or, the original rotation parameter is a rotation parameter of the pan-tilt attitude of the current frame image relative to the pan-tilt attitude of the previous frame image;
optimizing the reprojection error constraint function to obtain optimized rotation parameters;
and correcting the current frame image according to the optimized rotation parameters.
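To illustrate the kind of rotation-only optimization claim 1 describes: for matched unit bearing vectors from two frames, the rotation minimizing the squared alignment error has a closed-form SVD solution (the Kabsch algorithm). The sketch below uses that closed form as a stand-in for iteratively minimizing a reprojection error constraint function; the function names and the bearing-vector parametrization are illustrative assumptions, not the claimed method:

```python
import numpy as np

def rotation_from_axis_angle(axis, angle):
    """Rodrigues' formula: rotation matrix from a unit axis and an angle."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def estimate_rotation(prev_bearings, curr_bearings):
    """Closed-form (Kabsch) estimate of the rotation R minimizing
    sum_i ||curr_i - R @ prev_i||^2 over matched unit bearing vectors.
    Each row of the inputs is the matched 3-vector for one feature."""
    H = prev_bearings.T @ curr_bearings           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

In practice the bearing vectors would come from the matched feature points via the inverse intrinsic matrix, and a full reprojection-error minimization would iterate in pixel space rather than use this closed form.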
2. The method of claim 1, wherein before establishing the reprojection error constraint function, further comprising:
respectively acquiring, from a current frame image and a previous frame image, feature points to be processed that match each other;
establishing a reprojection error constraint function according to the original rotation parameters, including:
establishing the reprojection error constraint function according to the original rotation parameters, the feature points to be processed in the previous frame image and a preset projection function;
the correcting the current frame image according to the optimized rotation parameters comprises:
and correcting the feature points to be processed of the current frame image according to the optimized rotation parameters.
3. The method according to claim 1 or 2, wherein the optimizing the reprojection error constraint function to obtain the optimized rotation parameter comprises:
and optimizing the reprojection error constraint function to obtain an optimized rotation parameter which enables the reprojection error constraint function to be minimum.
4. The method according to claim 1 or 2, wherein the correcting the current frame image according to the optimized rotation parameter comprises:
filtering the optimized rotation parameters to obtain an optimized pan-tilt attitude or an optimized camera pose corresponding to the current frame image;
and correcting the current frame image according to the optimized pan-tilt attitude or the optimized camera pose to obtain the corrected current frame image.
5. The method according to claim 4, wherein the filtering the optimized rotation parameters to obtain the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image comprises:
converting the optimized rotation parameters into a quaternion representation;
and filtering the quaternion representation, and converting the filtered quaternion representation into the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image.
6. The method of claim 5, wherein the filtering the quaternion representation and converting the filtered quaternion representation into an optimized pan-tilt attitude or an optimized camera pose corresponding to the current frame image comprises:
performing Gaussian filtering processing on the quaternion representation, and performing normalization processing on the processed quaternion representation to obtain a normalized quaternion representation;
and converting the normalized quaternion representation into the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image.
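The Gaussian filtering and normalization of claims 5 and 6 can be sketched as follows: convert each per-frame rotation to a quaternion, low-pass the sequence with a Gaussian kernel (flipping signs first so neighbouring quaternions lie on the same hemisphere), then renormalize onto the unit sphere. This is a plain-NumPy illustration with hypothetical names; the trace-positive quaternion branch is an assumed simplification valid for small frame-to-frame rotations:

```python
import numpy as np

def rotmat_to_quaternion(R):
    """Rotation matrix -> (w, x, y, z) quaternion.
    Uses only the trace-positive branch, an assumed simplification
    valid for the small rotations considered here."""
    w = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    return np.array([w, x, y, z])

def gaussian_filter_quaternions(quats, sigma=2.0, radius=4):
    """Smooth a sequence of unit quaternions with a Gaussian kernel,
    then renormalize each result back onto the unit sphere."""
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    q = np.array(quats, dtype=float)  # copy; sign flips stay local
    # Hemisphere alignment: q and -q are the same rotation, so flip
    # signs to keep neighbouring quaternions on the same hemisphere.
    for i in range(1, len(q)):
        if np.dot(q[i], q[i - 1]) < 0:
            q[i] = -q[i]
    padded = np.pad(q, ((radius, radius), (0, 0)), mode='edge')
    smoothed = np.array([(kernel[:, None] * padded[i:i + 2 * radius + 1]).sum(axis=0)
                         for i in range(len(q))])
    return smoothed / np.linalg.norm(smoothed, axis=1, keepdims=True)
```

Averaging quaternions componentwise and renormalizing is a common approximation for nearby rotations; for widely separated poses a spherical (slerp-based) filter would be more appropriate.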
7. The method of claim 5, wherein before converting the optimized rotation parameters into the quaternion representation corresponding to the current frame image, the method further comprises:
and converting the optimized rotation parameters into rotation parameters of the current frame image relative to a first frame image in the images to be processed.
8. The method according to claim 4, wherein obtaining the corrected current frame image according to the optimized pan-tilt attitude or the optimized camera pose comprises:
correcting the current frame image according to the optimized pan-tilt attitude and the original pan-tilt attitude to obtain a corrected current frame image; or,
and correcting the current frame image according to the optimized camera pose and the original camera pose to obtain a corrected current frame image.
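For a purely rotating camera (the pan-tilt case), the correction in claim 8 amounts to rewarping the frame by the rotation between the original and the optimized pose. A minimal sketch, assuming a pinhole intrinsic matrix K and illustrative function names:

```python
import numpy as np

def correction_homography(K, R_orig, R_opt):
    """Homography rewarping the current frame from its original pose to
    the optimized pose: H = K @ (R_opt @ R_orig.T) @ inv(K).
    Valid for a purely rotating (translation-free) camera."""
    R_delta = R_opt @ R_orig.T
    return K @ R_delta @ np.linalg.inv(K)

def warp_point(H, uv):
    """Apply a homography to a pixel coordinate (u, v)."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]
```

In a full pipeline the homography would be handed to an image-warping routine to resample every pixel of the corrected frame, rather than individual points.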
9. The method according to claim 1 or 2, wherein after the correcting the current frame image, the method further comprises:
and synthesizing the corrected images into a stabilized video.
10. The method according to claim 2, wherein the obtaining the feature points to be processed of the current frame image and the feature points to be processed of the previous frame image that match each other respectively comprises:
extracting feature points to be processed in the current frame image according to a corner detection algorithm;
and determining matched feature points to be processed in the previous frame image by using a KLT algorithm according to the pixel coordinates of the feature points to be processed in the current frame image.
11. The method according to claim 10, characterized in that the corner detection algorithm comprises at least one of: FAST algorithm, SUSAN algorithm, Harris corner detection algorithm.
12. An image processing apparatus characterized by comprising: a memory, a processor;
wherein the memory is to store instructions;
the processor is configured to execute the instructions to implement:
establishing a reprojection error constraint function according to the original rotation parameters; the original rotation parameters are rotation parameters of the camera pose of the current frame image relative to the camera pose of the previous frame image; or, the original rotation parameter is a rotation parameter of the pan-tilt attitude of the current frame image relative to the pan-tilt attitude of the previous frame image;
optimizing the reprojection error constraint function to obtain optimized rotation parameters;
and correcting the current frame image according to the optimized rotation parameters.
13. The apparatus of claim 12, wherein the processor is specifically configured to:
respectively acquiring, from a current frame image and a previous frame image, feature points to be processed that match each other;
establishing the reprojection error constraint function according to the original rotation parameters, the feature points to be processed in the previous frame image and a preset projection function;
and correcting the feature points to be processed of the current frame image according to the optimized rotation parameters.
14. The apparatus of claim 12 or 13, wherein the processor is specifically configured to:
and optimizing the reprojection error constraint function to obtain an optimized rotation parameter which enables the reprojection error constraint function to be minimum.
15. The apparatus of claim 12 or 13, wherein the processor is specifically configured to:
filtering the optimized rotation parameters to obtain an optimized pan-tilt attitude or an optimized camera pose corresponding to the current frame image;
and correcting the current frame image according to the optimized pan-tilt attitude or the optimized camera pose to obtain the corrected current frame image.
16. The apparatus of claim 15, wherein the processor is specifically configured to:
converting the optimized rotation parameters into a quaternion representation;
and filtering the quaternion representation, and converting the filtered quaternion representation into the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image.
17. The apparatus of claim 16, wherein the processor is specifically configured to:
performing Gaussian filtering processing on the quaternion representation, and performing normalization processing on the processed quaternion representation to obtain a normalized quaternion representation;
and converting the normalized quaternion representation into the optimized pan-tilt attitude or the optimized camera pose corresponding to the current frame image.
18. The apparatus of claim 16, wherein the processor is specifically configured to:
and converting the optimized rotation parameters into rotation parameters of the current frame image relative to a first frame image in the images to be processed.
19. The apparatus of claim 15, wherein the processor is specifically configured to:
correcting the current frame image according to the optimized pan-tilt attitude and the original pan-tilt attitude to obtain a corrected current frame image; or,
and correcting the current frame image according to the optimized camera pose and the original camera pose to obtain a corrected current frame image.
20. The apparatus of claim 12 or 13, wherein the processor is specifically configured to:
and synthesizing the corrected images into a stabilized video.
21. The apparatus of claim 13, wherein the processor is specifically configured to:
extracting feature points to be processed in the current frame image according to a corner detection algorithm;
and determining matched feature points to be processed in the previous frame image by using a KLT algorithm according to the pixel coordinates of the feature points to be processed in the current frame image.
22. The apparatus of claim 21, wherein the corner detection algorithm comprises at least one of: FAST algorithm, SUSAN algorithm, Harris corner detection algorithm.
23. The apparatus of claim 12, wherein the image processing apparatus is a pan-tilt head.
24. A storage medium, comprising: a readable storage medium and a computer program, the computer program being configured to implement the image processing method according to any one of claims 1 to 11.
25. A pan-tilt head, comprising:
at least one rotating shaft;
an angle sensor; and
an image processing apparatus as claimed in any one of claims 12 to 23.
26. The pan-tilt head according to claim 25, further comprising an image acquisition device, wherein the image acquisition device is configured to acquire multiple frames of images.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/092638 WO2020257999A1 (en) | 2019-06-25 | 2019-06-25 | Method, apparatus and platform for image processing, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111712857A true CN111712857A (en) | 2020-09-25 |
Family
ID=72536788
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201980011946.9A Pending CN111712857A (en) | 2019-06-25 | 2019-06-25 | Image processing method, device, holder and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111712857A (en) |
WO (1) | WO2020257999A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112734653A (en) * | 2020-12-23 | 2021-04-30 | 影石创新科技股份有限公司 | Motion smoothing processing method, device and equipment for video image and storage medium |
CN113029128A (en) * | 2021-03-25 | 2021-06-25 | 浙江商汤科技开发有限公司 | Visual navigation method and related device, mobile terminal and storage medium |
CN114531580A (en) * | 2020-11-23 | 2022-05-24 | 北京四维图新科技股份有限公司 | Image processing method and device |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008204384A (en) * | 2007-02-22 | 2008-09-04 | Canon Inc | Image pickup device, object detection method and posture parameter calculation method |
JP2011004075A (en) * | 2009-06-17 | 2011-01-06 | Olympus Imaging Corp | Blur correction device |
CN103514450A (en) * | 2012-06-29 | 2014-01-15 | 华为技术有限公司 | Image feature extracting method and image correcting method and equipment |
CN106791363A (en) * | 2015-11-23 | 2017-05-31 | 鹦鹉无人机股份有限公司 | It is equipped with the unmanned plane for sending the video camera for correcting the image sequence for rocking effect |
CN108335328A (en) * | 2017-01-19 | 2018-07-27 | 富士通株式会社 | Video camera Attitude estimation method and video camera attitude estimating device |
CN108475075A (en) * | 2017-05-25 | 2018-08-31 | 深圳市大疆创新科技有限公司 | A kind of control method, device and holder |
CN108780577A (en) * | 2017-11-30 | 2018-11-09 | 深圳市大疆创新科技有限公司 | Image processing method and equipment |
CN109844632A (en) * | 2016-10-13 | 2019-06-04 | 富士胶片株式会社 | Shake correction device, photographic device and method of compensating for hand shake |
CN109923583A (en) * | 2017-07-07 | 2019-06-21 | 深圳市大疆创新科技有限公司 | A kind of recognition methods of posture, equipment and moveable platform |
2019
- 2019-06-25: WO application PCT/CN2019/092638 filed (WO2020257999A1) — active, Application Filing
- 2019-06-25: CN application CN201980011946.9A filed (CN111712857A) — active, Pending
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114531580A (en) * | 2020-11-23 | 2022-05-24 | 北京四维图新科技股份有限公司 | Image processing method and device |
CN114531580B (en) * | 2020-11-23 | 2023-11-21 | 北京四维图新科技股份有限公司 | Image processing method and device |
CN112734653A (en) * | 2020-12-23 | 2021-04-30 | 影石创新科技股份有限公司 | Motion smoothing processing method, device and equipment for video image and storage medium |
CN112734653B (en) * | 2020-12-23 | 2024-09-06 | 影石创新科技股份有限公司 | Motion smoothing method, device and equipment for video image and storage medium |
CN113029128A (en) * | 2021-03-25 | 2021-06-25 | 浙江商汤科技开发有限公司 | Visual navigation method and related device, mobile terminal and storage medium |
CN113029128B (en) * | 2021-03-25 | 2023-08-25 | 浙江商汤科技开发有限公司 | Visual navigation method and related device, mobile terminal and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2020257999A1 (en) | 2020-12-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2849428B1 (en) | Image processing device, image processing method, image processing program, and storage medium | |
CN108363946B (en) | Face tracking system and method based on unmanned aerial vehicle | |
CN107241544B (en) | Video image stabilization method, device and camera shooting terminal | |
JP6090786B2 (en) | Background difference extraction apparatus and background difference extraction method | |
CN113436113B (en) | Anti-shake image processing method, device, electronic equipment and storage medium | |
JP6087671B2 (en) | Imaging apparatus and control method thereof | |
US9838604B2 (en) | Method and system for stabilizing video frames | |
KR101071352B1 (en) | Apparatus and method for tracking object based on PTZ camera using coordinate map | |
JP7224526B2 (en) | Camera lens smoothing method and mobile terminal | |
CN111712857A (en) | Image processing method, device, holder and storage medium | |
US8965105B2 (en) | Image processing device and method | |
JP6202879B2 (en) | Rolling shutter distortion correction and image stabilization processing method | |
KR102069269B1 (en) | Apparatus and method for stabilizing image | |
EP2446612A1 (en) | Real time video stabilization | |
CN113556464A (en) | Shooting method and device and electronic equipment | |
CN111405187A (en) | Image anti-shake method, system, device and storage medium for monitoring equipment | |
CN114429191A (en) | Electronic anti-shake method, system and storage medium based on deep learning | |
TW201523516A (en) | Video frame stabilization method for the moving camera | |
JP6282133B2 (en) | Imaging device, control method thereof, and control program | |
WO2018116322A1 (en) | System and method for generating pan shots from videos | |
CN107743190A (en) | Video anti-fluttering method | |
JP7164873B2 (en) | Image processing device and program | |
JP2021033015A5 (en) | ||
Li et al. | Real Time and Robust Video Stabilization Based on Block-Wised Gradient Features | |
CN115278049A (en) | Shooting method and device thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20200925 |