CN114071019A - Image processing method and device, electronic equipment and computer readable storage medium - Google Patents

Image processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN114071019A
Authority
CN
China
Prior art keywords
image
processed
point
images
feature point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111416688.8A
Other languages
Chinese (zh)
Inventor
董晓龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111416688.8A priority Critical patent/CN114071019A/en
Publication of CN114071019A publication Critical patent/CN114071019A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/683Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium. The image processing method comprises the following steps: acquiring attitude data of continuous multi-frame images to be processed; processing the continuous multi-frame images to be processed according to the attitude data to generate a plurality of first processed images; and processing the first processed image corresponding to the current image to be processed according to the deviation between feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, the acquisition point being any part of the shooting scene. The image processing method, image processing apparatus, electronic device, and non-volatile computer-readable storage medium of the embodiments of the present application achieve anti-shake of the images to be processed through the attitude data, and then further perform anti-shake processing on the first processed image corresponding to the current image to be processed through the deviation between feature points in the plurality of first processed images already processed with the attitude data, thereby further improving the anti-shake effect of the image.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium.
Background
Currently, smart devices (such as smart phones, smart watches, and tablet computers) have become everyday articles. However, when a user takes photos or videos with a smart terminal device, shake caused by the weight of the device and the movement of the user's body is inevitable, which makes the shooting system of the electronic device shake to some extent and results in poor imaging quality of the finally obtained photos and videos.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an electronic device and a non-volatile computer readable storage medium.
The image processing method comprises the steps of obtaining attitude data of continuous multi-frame images to be processed; processing continuous frames of the images to be processed according to the attitude data to generate a plurality of first processed images; processing the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, wherein the acquisition point is any part of a shooting scene.
The image processing device of the embodiment of the application comprises an acquisition module, a first generation module and a second generation module. The acquisition module is used for acquiring the attitude data of continuous multi-frame images to be processed. The first generating module is used for processing continuous frames of the images to be processed according to the attitude data so as to generate a plurality of first processed images. And the second generation module is used for processing the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, wherein the acquisition point is any part of a shooting scene.
The electronic device of the embodiment of the application comprises a processor. The processor is used for acquiring the attitude data of continuous multiframe images to be processed; processing continuous frames of the images to be processed according to the attitude data to generate a plurality of first processed images; processing the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, wherein the acquisition point is any part of a shooting scene.
The non-transitory computer-readable storage medium of the embodiments of the present application contains a computer program that, when executed by one or more processors, causes the processors to perform the following image processing method: acquiring attitude data of continuous multi-frame images to be processed; processing continuous frames of the images to be processed according to the attitude data to generate a plurality of first processed images; processing the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, wherein the acquisition point is any part of a shooting scene.
According to the image processing method, the image processing apparatus, the electronic device, and the non-volatile computer-readable storage medium of the embodiments of the application, the images to be processed are processed into the first processed images through the attitude data, which ensures that the first processed images correspond to the same attitude data, so that anti-shake of the images to be processed is achieved through the attitude data. The first processed image corresponding to the current image to be processed is then further subjected to anti-shake processing through the deviation between the feature points in the plurality of first processed images already processed with the attitude data, which prevents errors in the attitude data from affecting the anti-shake effect and further improves the anti-shake effect of the image.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic plan view of an electronic device of some embodiments of the present application;
FIG. 4 is a schematic view of a scene of an image processing method according to some embodiments of the present application;
FIGS. 5 and 6 are schematic flow diagrams of image processing methods according to certain embodiments of the present application;
FIG. 7 is a schematic view of a scene of an image processing method according to some embodiments of the present application;
FIGS. 8 and 9 are schematic flow diagrams of image processing methods according to certain embodiments of the present application;
FIG. 10 is a schematic view of a scene of an image processing method according to some embodiments of the present application;
FIG. 11 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 12 is a schematic view of a scene of an image processing method according to some embodiments of the present application;
FIG. 13 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 14 is a schematic diagram of a connection state of a non-volatile computer readable storage medium and a processor of some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1, an embodiment of the present application provides an image processing method. The image processing method includes the steps of:
01: acquiring attitude data of continuous multi-frame images to be processed;
03: processing continuous multi-frame images to be processed according to the attitude data to generate a plurality of first processed images; and
05: and processing the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, wherein the acquisition point is any part in the shooting scene.
Referring to fig. 2, an image processing apparatus 10 is provided in the present embodiment. The image processing apparatus 10 includes an acquisition module 11, a first generation module 12, and a second generation module 13. The image processing method according to the embodiment of the present application is applicable to the image processing apparatus 10. The obtaining module 11 is configured to execute step 01, the first generating module 12 is configured to execute step 03, and the second generating module 13 is configured to execute step 05. Namely, the obtaining module 11 is configured to obtain pose data of a plurality of consecutive frames of images to be processed. The first generating module 12 is configured to process a plurality of consecutive frames of images to be processed according to the pose data to generate a plurality of first processed images. The second generating module 13 is configured to process the first processed image corresponding to the current image to be processed according to a deviation between feature points corresponding to the same acquisition point in the plurality of first processed images, so as to generate a target image, where the acquisition point is any part of a shooting scene.
Referring to fig. 3, an electronic device 100 is further provided in the present embodiment. The electronic device 100 comprises a processor 20. The image processing method according to the embodiment of the present application is applicable to the electronic device 100. Processor 20 is configured to perform step 01, step 03, and step 05. That is, the processor 20 is configured to obtain pose data of a plurality of consecutive frames of images to be processed; processing continuous multi-frame images to be processed according to the attitude data to generate a plurality of first processed images; and processing the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, wherein the acquisition point is any part in the shooting scene.
The electronic device 100 includes a housing 30. The electronic device 100 may be a cell phone, a tablet computer, a display device, a notebook computer, a teller machine, a gate, a smart watch, a head-up display device, a game console, etc. As shown in fig. 3, in the embodiment of the present application, the electronic device 100 is a mobile phone as an example, and it is understood that the specific form of the electronic device 100 is not limited to the mobile phone. The housing 30 may also be used to mount functional modules of the electronic device 100, such as a display device, an imaging device, a power supply device, and a communication device, so that the housing 30 provides protection for the functional modules, such as dust prevention, drop prevention, and water prevention.
Specifically, the electronic device 100 further includes a camera 40. The camera 40 may be configured to capture continuous multi-frame images to be processed, and the processor 20 may obtain the attitude data of the continuous multi-frame images to be processed as it acquires those images. Each frame of image to be processed has corresponding attitude data, which can reflect the position information of the shooting object in the image to be processed. When an image is captured by the camera 40 of the electronic device 100, if the electronic device 100 or the camera 40 shakes, the attitude data of the frames of images to be processed acquired by the processor 20 changes accordingly.
In order to ensure that the final generated target image has a good anti-shake effect, after the processor 20 obtains the posture data of the multiple frames of images to be processed, the processor 20 may first obtain the target posture data through the posture data of the multiple frames of images to be processed. The target posture data is posture data with an anti-shake effect obtained by the processor 20 through calculation according to posture data of multiple frames of images to be processed.
In this way, the processor 20 may sequentially process the frames of images to be processed according to the target attitude data, that is, transform the attitude data of the frames of images to be processed into the target attitude data, so as to obtain a plurality of first processed images. It will be appreciated that the plurality of first processed images already have a certain anti-shake effect.
More specifically, the processor 20 may divide the image to be processed into a plurality of meshes (as shown in fig. 12) and obtain the mesh points of the plurality of meshes in the image to be processed. When the processor 20 processes the frames of images to be processed according to the target attitude data, it may convert the attitude data of the mesh points of each mesh in the image to be processed into the target attitude data, that is, process the image to be processed into the first processed image line by line. Thus, the consistency of the first processed image can be ensured.
However, although the first processed image has a certain anti-shake effect, the frames of images to be processed are processed only line by line through the target attitude data, and if the attitude data contains errors, a certain error still exists when the first processed image is used directly as the target image. Moreover, if the frames of images to be processed were instead processed pixel by pixel through the target attitude data, the attitude difference between pixels would make the degree of conversion differ from pixel to pixel, that is, when the image to be processed is converted into the first processed image, the degrees of conversion would diverge considerably, and the first processed image might be distorted.
Therefore, in order to solve the above technical problem, in the image processing method according to the embodiments of the present application, after the frames of images to be processed are processed line by line through the attitude data to obtain a plurality of first processed images, the processor 20 further processes the first processed image corresponding to the current image to be processed according to the deviation between feature points corresponding to the same acquisition point in the plurality of first processed images, so as to generate the target image. The acquisition point is any part of the shooting scene, that is, any point in the image to be processed, and the same acquisition point in the plurality of first processed images corresponds to the same position across the frames of images to be processed.
Specifically, taking the case where the camera 40 shoots 2 frames of images to be processed in total as an example, as shown in fig. 4, the processor 20 correspondingly generates 2 first processed images, namely a first processed image A and a first processed image B. The processor 20 can then search for the feature points of the same acquisition point in the first processed image A and the first processed image B, namely point a and point b, and calculate the deviation between the feature points in the first processed images of adjacent frames, namely the deviation M1 between point b and point a. The deviation between the feature points of the same acquisition point is the position coordinate deviation between the feature points.
Next, the processor 20 may process the first processed image corresponding to the current image to be processed according to the deviation M1, so as to generate the target image. This ensures that the finally generated target image has a better anti-shake effect and does not exhibit distortion.
In one embodiment, taking the first processed image corresponding to the current image to be processed as the first processed image B as an example, the deviation M1 between point b and point a directly reflects the offset, i.e., the error value, of the first processed image B relative to the first processed image A. Therefore, when generating the target image, the processor 20 needs to compensate the first processed image B for this offset to eliminate its error, thereby ensuring the anti-shake effect of the target image generated from the first processed image B.
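As an illustration, a minimal Python sketch of this compensation step follows. The function and variable names are assumptions made for illustration, and the deviation M1 is treated as a pure translation, as in the example above:

```python
import numpy as np
import cv2  # OpenCV, assumed available for the warp

def compensate(first_processed_b, deviation_m1):
    # deviation_m1 = (dx, dy): offset of point b relative to point a, i.e.,
    # the error of first processed image B relative to first processed image A.
    dx, dy = deviation_m1
    h, w = first_processed_b.shape[:2]
    # Translate image B by (-dx, -dy) to cancel the measured offset.
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])
    return cv2.warpAffine(first_processed_b, m, (w, h))
```

In practice the offset would be determined from many feature points (see steps 051 and 052 below) before being compensated.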
The image processing method and the image processing apparatus 10 according to the embodiment of the application process the to-be-processed image into the first processed image through the attitude data, and can ensure that the attitude data in the first processed image are the same attitude data, so that the anti-shake of the to-be-processed image is realized through the attitude data, and then the first processed image corresponding to the current to-be-processed image is further subjected to anti-shake processing through the deviation among the characteristic points in the plurality of first processed images subjected to the attitude data processing, thereby preventing the error of the attitude data from influencing the anti-shake effect, and further improving the anti-shake effect of the image.
Referring to fig. 2, 3 and 5, in some embodiments, step 03: processing a plurality of continuous frames of images to be processed according to the attitude data to generate a plurality of first processed images, comprising the steps of:
031: calculating target attitude data according to multi-frame attitude data corresponding to continuous multi-frame images to be processed based on a preset first function, wherein the continuous multi-frame images to be processed comprise the current image to be processed; and
032: and processing each frame of image to be processed according to the attitude data corresponding to each frame of image to be processed and the target attitude data to generate a first processed image corresponding to each frame of image to be processed.
In some embodiments, the first generation module 12 is further configured to perform step 031 and step 032. Namely, the first generating module 12 is configured to calculate target posture data according to multi-frame posture data corresponding to consecutive multi-frame to-be-processed images based on a preset first function, where the consecutive multi-frame to-be-processed images include a current to-be-processed image; and processing each frame of image to be processed according to the attitude data corresponding to each frame of image to be processed and the target attitude data to generate a first processed image corresponding to each frame of image to be processed.
In some embodiments, processor 20 is further configured to perform step 031 and step 032. The processor 20 is configured to calculate target attitude data according to multi-frame attitude data corresponding to consecutive multi-frame images to be processed based on a preset first function, where the consecutive multi-frame images to be processed include a current image to be processed; and processing each frame of image to be processed according to the attitude data corresponding to each frame of image to be processed and the target attitude data to generate a first processed image corresponding to each frame of image to be processed.
Specifically, before processing the continuous multi-frame images to be processed according to the attitude data to generate the plurality of first processed images, the processor 20 may calculate the target attitude data according to the attitude data corresponding to the continuous multi-frame images to be processed, based on a preset first function. The preset first function is shown in formula (1) below.
PP_r = Σ_{x=0}^{k} f(x)·P(r−x) + Σ_{y=1}^{k} f(y)·P(r+y)        (1)

where PP_r is the target attitude data, r is the frame number corresponding to the current image to be processed, P(t) is the attitude data corresponding to the t-th frame of image to be processed, and f(·) is the Gaussian distribution weight. With x varying from 0 to k, the first sum accumulates the products of the attitude data of the current frame and the k frames before it with their Gaussian distribution weights; with y varying from 1 to k, the second sum accumulates the products of the attitude data of the k frames after the current frame with their Gaussian distribution weights.
More specifically, the Gaussian distribution weight is shown in formula (2) below.

f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²))        (2)

where σ denotes the standard deviation (σ² the variance) of the Gaussian distribution and μ denotes its mean.
Therefore, according to the above formula (1) and formula (2), the processor 20 may obtain the target pose data corresponding to each frame of the image to be processed, and after determining the pose data and the target pose data of each frame of the image to be processed, the processor 20 may process each frame of the image to be processed, so as to generate a plurality of first processed images.
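As a concrete illustration, formulas (1) and (2) amount to Gaussian-weighted temporal smoothing of the attitude data. A minimal Python sketch follows; the window size k, the Gaussian parameters, and the treatment of each attitude sample as a plain vector are assumptions for illustration (the patent does not fix them, and rotational data would normally be smoothed with more care):

```python
import numpy as np

def gaussian_weight(x, sigma=2.0, mu=0.0):
    # Formula (2): Gaussian distribution weight.
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def target_attitude(poses, r, k=3):
    # Formula (1): accumulate the attitude data of the current frame r,
    # the k frames before it, and the k frames after it, each multiplied
    # by its Gaussian distribution weight.
    acc = sum(gaussian_weight(x) * poses[r - x]
              for x in range(0, k + 1) if r - x >= 0)
    acc = acc + sum(gaussian_weight(y) * poses[r + y]
                    for y in range(1, k + 1) if r + y < len(poses))
    return acc  # in practice the weights may also be normalized to sum to 1
```

Applying target_attitude to every frame index r yields the target attitude data used in step 032.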
Referring to fig. 2, 3 and 6, in some embodiments, step 032: processing each frame of image to be processed according to the attitude data corresponding to each frame of image to be processed and the target attitude data to generate a first processed image corresponding to each frame of image to be processed, and the method further comprises the following steps:
0321: mapping a first image coordinate of each pixel of each frame of image to be processed into a three-dimensional position coordinate based on a preset second function;
0322: generating three-dimensional position coordinates corresponding to the target attitude data according to the attitude data, the target attitude data and the three-dimensional position coordinates corresponding to each frame of image to be processed;
0323: mapping the three-dimensional position coordinate corresponding to the target attitude data into a second image coordinate on the basis of a second function, wherein the second image coordinate corresponds to the first image coordinate one by one; and
0324: and adjusting the pixels corresponding to the first image coordinates to the second image coordinates to generate a first processed image.
In certain embodiments, first generation module 12 is used to perform step 0321, step 0322, step 0323 and step 0324. The first generating module 12 is configured to map a first image coordinate of each pixel of each frame of the image to be processed into a three-dimensional position coordinate based on a preset second function; generating three-dimensional position coordinates corresponding to the target attitude data according to the attitude data, the target attitude data and the three-dimensional position coordinates corresponding to each frame of image to be processed; mapping the three-dimensional position coordinate corresponding to the target attitude data into a second image coordinate on the basis of a second function, wherein the second image coordinate corresponds to the first image coordinate one by one; and adjusting the pixels corresponding to the first image coordinates to the second image coordinates to generate a first processed image.
In certain embodiments, processor 20 is used to perform step 0321, step 0322, step 0323 and step 0324. The processor 20 is configured to map a first image coordinate of each pixel of each frame of the image to be processed into a three-dimensional position coordinate based on a preset second function; generating three-dimensional position coordinates corresponding to the target attitude data according to the attitude data, the target attitude data and the three-dimensional position coordinates corresponding to each frame of image to be processed; mapping the three-dimensional position coordinate corresponding to the target attitude data into a second image coordinate on the basis of a second function, wherein the second image coordinate corresponds to the first image coordinate one by one; and adjusting the pixels corresponding to the first image coordinates to the second image coordinates to generate a first processed image.
Specifically, when the processor 20 processes each frame of image to be processed according to the pose data and the target pose data corresponding to each frame of image to be processed, the processor 20 may map each pixel coordinate of each frame of image to be processed into a three-dimensional position coordinate according to a preset second function. The preset second function is shown in the following formula (3).
Referring to fig. 7, fig. 7 is a schematic diagram of the imaging of the camera 40: a point U(X, Y, Z) in three-dimensional space is transformed into a point u(x, y) on the two-dimensional image plane through the preset second function.
x = F_x·k·X + O_x,    y = F_y·k·Y + O_y        (3)

where

R = √(X² + Y²),    α = atan2(R, Z),    k = LUT_F(α)

Specifically, R represents the radial distance between point U and the optical center of the camera 40, and is obtained from the X and Y values of the coordinates of point U. α is the angle between point U and the optical center of the camera 40, obtained from the coordinates (R, Z) through the arctangent function above. LUT_F denotes the distortion table of the camera, and k is the distortion coefficient; for each angle α, a corresponding k value can be found in LUT_F. F_x and F_y are the focal lengths of the camera, and O_x and O_y are the image coordinates of the optical center.
Thus, according to the above formula, the processor 20 may map the first image coordinates of each pixel in each frame of the image to be processed to the corresponding three-dimensional position coordinates; that is, the three-dimensional coordinates corresponding to the first image coordinates of each pixel are deduced by inverting formula (3) above.
Next, since the pose data corresponding to each frame of the to-be-processed image can be associated with the corresponding three-dimensional coordinates, the processor 20 can obtain the three-dimensional position coordinates corresponding to the target pose data according to the relationship between the pose data corresponding to each frame of the to-be-processed image and the corresponding three-dimensional coordinates.
In this way, after obtaining the three-dimensional position coordinates corresponding to the target attitude data, the processor 20 may map them into second image coordinates based on the second function again. It is understood that the second image coordinates correspond one-to-one to the first image coordinates: each second image coordinate is obtained by converting the first image coordinate of a pixel in the image to be processed through the target attitude data.
Finally, the processor 20 may adjust the first image coordinate of each pixel in the image to be processed to the corresponding second image coordinate according to the position of the second image coordinate, so that the first processed image may be generated.
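A short sketch of the forward mapping in formula (3) may make the pipeline concrete. Here the distortion table LUT_F is modeled as a callable (for example, an interpolator over the table's angle/coefficient pairs); the names are assumptions for illustration:

```python
import numpy as np

def project(point3d, fx, fy, ox, oy, lut_f):
    # Formula (3): map a 3-D point U(X, Y, Z) to image coordinates u(x, y).
    X, Y, Z = point3d
    R = np.hypot(X, Y)           # radial distance from the optical axis
    alpha = np.arctan2(R, Z)     # angle between point U and the optical center
    k = lut_f(alpha)             # distortion coefficient looked up in LUT_F
    return fx * k * X + ox, fy * k * Y + oy
```

Step 0321 corresponds to inverting this mapping to recover three-dimensional coordinates from the first image coordinates, and step 0323 applies it forward, after the attitude transformation of step 0322, to obtain the second image coordinates.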
Referring to fig. 2, 3, and 8, in some embodiments, step 05: according to the deviation between the feature points corresponding to the same acquisition point in a plurality of first processed images, processing the first processed image corresponding to the current image to be processed to generate a target image, comprising the following steps:
051: determining image offset according to the deviation of image coordinates among multiple groups of feature points corresponding to the same acquisition point in multiple first processed images; and
052: and transforming the image coordinates of the first processed image corresponding to the current image to be processed according to the image offset to generate a target image.
In certain embodiments, the second generation module 13 is configured to perform step 051 and step 052. That is, the second generating module 13 is configured to determine an image offset according to the deviation of image coordinates between multiple sets of feature points corresponding to the same acquisition point in the multiple first processed images; and transform the image coordinates of the first processed image corresponding to the current image to be processed according to the image offset to generate the target image.
In certain embodiments, the processor 20 is configured to perform step 051 and step 052. That is, the processor 20 is configured to determine an image offset according to the deviation of image coordinates between multiple sets of feature points corresponding to the same acquisition point in the multiple first processed images; and transform the image coordinates of the first processed image corresponding to the current image to be processed according to the image offset to generate the target image.
Specifically, after the processor 20 generates a plurality of first processed images corresponding to a plurality of frames of images to be processed, the processor 20 may determine the image offset according to the deviation of image coordinates between a plurality of sets of feature points corresponding to the same acquisition point in the plurality of first processed images.
More specifically, after obtaining the plurality of first processed images, the processor 20 may find a feature point in each first processed image. For example, if there are 10 first processed images, there are 10 corresponding feature points, one in each first processed image, and all of them correspond to the same acquisition point.
Therefore, the processor 20 may use the feature point of the current image to be processed as a reference to obtain the coordinate differences between this feature point and the feature points of the other, non-current images to be processed. If there are 10 first processed images, 9 of them correspond to non-current images to be processed, so 9 coordinate differences are obtained in total. The processor 20 may then determine the image offset as the average of the absolute values of these 9 coordinate differences.
Finally, when the processor 20 transforms the image coordinates of the first processed image corresponding to the current image to be processed according to the image offset, the average of the absolute values of all the coordinate differences may be compensated to the first processed image, thereby generating the target image.
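As a sketch (names assumed), the offset determination described above could be written as:

```python
import numpy as np

def image_offset(current_pt, other_pts):
    # current_pt: feature point (x, y) in the first processed image of the
    # current frame; other_pts: feature points of the same acquisition
    # point in the other first processed images.
    diffs = np.abs(np.asarray(other_pts, float) - np.asarray(current_pt, float))
    return diffs.mean(axis=0)  # per-axis mean of the absolute differences
```

For 10 first processed images, other_pts holds the 9 feature points of the non-current frames, matching the example above; the returned offset is then compensated to the first processed image as in the compensation sketch given earlier.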
Referring to fig. 2, 3 and 9, in some embodiments, step 051: determining image offset according to deviation of image coordinates between multiple groups of feature points corresponding to the same acquisition point in multiple first processed images, comprising the following steps:
0511: acquiring a first feature point and a second feature point corresponding to the same acquisition point in a first image to be processed and a second image to be processed, wherein the first feature point is positioned in the first image to be processed, and the second feature point is positioned in the second image to be processed;
0512: acquiring a third feature point corresponding to the first feature point in the first processed image corresponding to the first image to be processed, and acquiring a fourth feature point corresponding to the second feature point in the first processed image corresponding to the second image to be processed; and
0513: and determining the image offset according to the deviation of the image coordinates of the third characteristic point and the fourth characteristic point.
In certain embodiments, the second generation module 13 is configured to perform step 0511, step 0512, and step 0513. That is, the second generating module 13 is configured to obtain a first feature point and a second feature point, corresponding to the same acquisition point, in the first image to be processed and the second image to be processed, where the first feature point is located in the first image to be processed, and the second feature point is located in the second image to be processed; obtain a third feature point corresponding to the first feature point in the first processed image corresponding to the first image to be processed, and obtain a fourth feature point corresponding to the second feature point in the first processed image corresponding to the second image to be processed; and determine the image offset according to the deviation of the image coordinates of the third feature point and the fourth feature point.
In certain embodiments, the processor 20 is configured to perform step 0511, step 0512, and step 0513. That is, the processor 20 is configured to obtain a first feature point and a second feature point, corresponding to the same acquisition point, in the first image to be processed and the second image to be processed, where the first feature point is located in the first image to be processed, and the second feature point is located in the second image to be processed; obtain a third feature point corresponding to the first feature point in the first processed image corresponding to the first image to be processed, and obtain a fourth feature point corresponding to the second feature point in the first processed image corresponding to the second image to be processed; and determine the image offset according to the deviation of the image coordinates of the third feature point and the fourth feature point.
Specifically, as shown in fig. 10, taking two frames of images to be processed, namely the first image to be processed P1 and the second image to be processed P2, as an example, the processor 20 may obtain the first feature point T1 in the first image to be processed P1 and the second feature point T2 in the second image to be processed P2. Since the first feature point and the second feature point correspond to the same acquisition point, the position of the first feature point T1 in the first image to be processed P1 corresponds to the position of the second feature point T2 in the second image to be processed P2.
Next, the processor 20 may find a third feature point T3 in the first processed image P3 corresponding to the first image to be processed P1 and a fourth feature point T4 in the first processed image P4 corresponding to the second image to be processed P2 according to the first feature point T1 and the second feature point T2.
Then, processor 20 may determine an image offset between first processed image P3 and first processed image P4 based on the deviation of the image coordinates of third feature point T3 and fourth feature point T4. For example, if the image coordinates of the third feature point T3 are (1, 1) and the image coordinates of the fourth feature point T4 are (2, 2), and the deviation between the image coordinates of the third feature point T3 and the image coordinates of the fourth feature point T4 is (1, 1), it means that the first processed image P4 is shifted by one unit in both the positive directions of the X axis and the Y axis with respect to the first processed image P3.
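For instance (names assumed), the offset between the two first processed images is simply the coordinate difference of the matched feature points:

```python
def offset_between(third_pt, fourth_pt):
    # Deviation of image coordinates between the third and fourth feature
    # points; e.g., (2, 2) - (1, 1) = (1, 1): image P4 is shifted one unit
    # along both +X and +Y relative to image P3.
    (x3, y3), (x4, y4) = third_pt, fourth_pt
    return (x4 - x3, y4 - y3)
```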
Referring to fig. 2, 3 and 11, in some embodiments, step 0512: the method for acquiring the third feature point corresponding to the first feature point in the first processed image corresponding to the first image to be processed and acquiring the fourth feature point corresponding to the second feature point in the first processed image corresponding to the second image to be processed comprises the following steps:
05121: respectively establishing a first grid and a second grid on the first image to be processed and the second image to be processed, wherein the first grid and the second grid are both composed of a plurality of rectangular image areas;
05122: acquiring a first image area where the first characteristic point is located and a second image area where the second characteristic point is located;
05123: interpolating to obtain a third feature point according to the image coordinates of the corner points of the first image area in the first processed image corresponding to the first image to be processed; and
05124: interpolating to obtain a fourth feature point according to the image coordinates of the corner points of the second image area in the first processed image corresponding to the second image to be processed.
In certain embodiments, the second generation module 13 is configured to perform step 05121, step 05122, step 05123, and step 05124. That is, the second generating module 13 is configured to respectively establish a first grid and a second grid on the first image to be processed and the second image to be processed, where the first grid and the second grid are both composed of a plurality of rectangular image regions; acquire a first image area where the first feature point is located and a second image area where the second feature point is located; interpolate to obtain a third feature point according to the image coordinates of the corner points of the first image area in the first processed image corresponding to the first image to be processed; and interpolate to obtain a fourth feature point according to the image coordinates of the corner points of the second image area in the first processed image corresponding to the second image to be processed.
In certain embodiments, the processor 20 is configured to perform step 05121, step 05122, step 05123, and step 05124. That is, the processor 20 is configured to respectively establish a first grid and a second grid on the first image to be processed and the second image to be processed, where the first grid and the second grid are both composed of a plurality of rectangular image regions; acquire a first image area where the first feature point is located and a second image area where the second feature point is located; interpolate to obtain a third feature point according to the image coordinates of the corner points of the first image area in the first processed image corresponding to the first image to be processed; and interpolate to obtain a fourth feature point according to the image coordinates of the corner points of the second image area in the first processed image corresponding to the second image to be processed.
Specifically, before the processor 20 obtains the third feature point corresponding to the first feature point and the fourth feature point corresponding to the second feature point, the processor 20 may further respectively establish the first mesh and the second mesh in the first image to be processed and the second image to be processed. Wherein the first grid and the second grid are both composed of a plurality of rectangular image areas.
As shown in fig. 12, the first to-be-processed image Q1 for creating the first mesh includes a plurality of first image regions R, and the second to-be-processed image Q2 for creating the second mesh includes a plurality of second image regions S. The processor 20 obtains a first image region R1 where the first feature point T5 is located and a second image region S1 where the second feature point T6 is located.
Next, the processor 20 may determine the third feature point T7 from the image coordinates of the corner points of the first image region R1 in the first processed image Q3, and similarly, the processor 20 may determine the fourth feature point T8 from the image coordinates of the corner points of the second image region S1 in the first processed image Q4.
Specifically, when obtaining the third feature point T7, the processor 20 may first obtain the coordinates of the four corner points r1, r2, r3, and r4 of the first image region R1. For example, the processor 20 may establish a coordinate system with the corner point at the lower left corner of the first image to be processed as the origin, in which the coordinates of corner points r1, r2, r3, and r4 are (1, 1), (2, 1), (2, 2), and (1, 2), respectively, and the image coordinates of corner points r1, r2, r3, and r4 in the first processed image are (1.2, 1.5), (2.2, 1.5), (2.2, 2.5), and (1.2, 2.5), respectively. The processor 20 can thus obtain the offset of the first processed image relative to the first image to be processed within the first image region R1: the offset of the X coordinate is the average of the X-coordinate differences between the four corner points and their corresponding image coordinates in the first processed image, and the offset of the Y coordinate is the average of the corresponding Y-coordinate differences. By calculation, the offset of the X coordinate is 0.2 and the offset of the Y coordinate is 0.5.
Finally, the processor 20 may interpolate the first feature point T5 using its coordinates and the calculated offset to obtain the coordinates of the third feature point T7 in the first processed image Q3. If the coordinates of the first feature point T5 are (1.5, 1.5) and the offset is (0.2, 0.5), the coordinates of the third feature point T7 are (1.7, 2).
Similarly, when acquiring the fourth feature point T8, the processor 20 may calculate, from the coordinates of the four corner points s1, s2, s3, and s4 of the second image area S1 and their image coordinates in the first processed image Q4 corresponding to the second image to be processed Q2, the offset value of the second image area, and then interpolate the second feature point T6 with this offset value to obtain the coordinates of the fourth feature point T8.
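A minimal sketch of this corner-offset interpolation (steps 05123 and 05124), with names assumed for illustration:

```python
import numpy as np

def interp_feature(point, corners_src, corners_dst):
    # corners_src: the four corner points of the grid cell in the image to
    # be processed; corners_dst: their image coordinates in the first
    # processed image. The cell offset is the per-axis average of the four
    # corner displacements, and the feature point is shifted by it.
    offset = (np.asarray(corners_dst, float) - np.asarray(corners_src, float)).mean(axis=0)
    return np.asarray(point, float) + offset
```

With the numbers above, interp_feature((1.5, 1.5), [(1, 1), (2, 1), (2, 2), (1, 2)], [(1.2, 1.5), (2.2, 1.5), (2.2, 2.5), (1.2, 2.5)]) returns (1.7, 2.0), matching the worked example.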
Referring to fig. 2, 3 and 13, in some embodiments, step 05123: interpolating to obtain a third feature point according to the image coordinates of the corner points of the first image area in the first processed image corresponding to the first image to be processed, comprises the following step:
05125: interpolating to obtain the third feature point according to the relative position relationship between the first feature point and the corner points of the first image area, and the image coordinates of the corner points of the first image area in the first processed image corresponding to the first image to be processed.
Step 05124: interpolating to obtain a fourth feature point according to the image coordinates of the corner points of the second image area in the first processed image corresponding to the second image to be processed, comprises the following step:
05126: interpolating to obtain the fourth feature point according to the relative position relationship between the second feature point and the corner points of the second image region, and the image coordinates of the corner points of the second image region in the first processed image corresponding to the second image to be processed.
In certain embodiments, the second generation module 13 is configured to perform steps 05125 and 05126. That is, the second generating module 13 is configured to interpolate to obtain the third feature point according to the relative position relationship between the first feature point and the corner points of the first image region, and the image coordinates of the corner points of the first image region in the first processed image corresponding to the first image to be processed; and to interpolate to obtain the fourth feature point according to the relative position relationship between the second feature point and the corner points of the second image region, and the image coordinates of the corner points of the second image region in the first processed image corresponding to the second image to be processed.
In certain embodiments, the processor 20 is configured to perform steps 05125 and 05126. That is, the processor 20 is configured to interpolate to obtain the third feature point according to the relative position relationship between the first feature point and the corner points of the first image region, and the image coordinates of the corner points of the first image region in the first processed image corresponding to the first image to be processed; and to interpolate to obtain the fourth feature point according to the relative position relationship between the second feature point and the corner points of the second image region, and the image coordinates of the corner points of the second image region in the first processed image corresponding to the second image to be processed.
Referring to fig. 12, the third feature point T7 may also be obtained by the processor 20 through calculation of the relative position relationship between the first feature point T5 and four corner points of the first image region R1, and the image coordinates of the corner point of the first image region R1 in the first processed image Q3 corresponding to the first image to be processed Q1.
Specifically, as shown in fig. 12, the coordinates of the four corner points of the first image region R1, namely corner point r1, corner point r2, corner point r3, and corner point r4, are (1, 1), (2, 1), (2, 2), and (1, 2), and the coordinates of the first feature point T5 are (1.5, 1.5). The image coordinates of the four corner points of the first image region R1 in the first processed image Q3 corresponding to the first image to be processed Q1 are (2, 2), (3, 2), (3, 3), and (2, 3), respectively.
First, the processor 20 may calculate the relative position relationship between the first feature point T5 and the four corner points of the first image region R1 using the coordinates of the first feature point T5 and those of the four corner points. For example, the relative position relationship between the first feature point T5 and the corner point r1 can be obtained from their coordinate difference, i.e., (0.5, 0.5). The relative position relationship between the first feature point T5 and the corner point r2 can likewise be obtained from their coordinate difference, i.e., (−0.5, 0.5). By analogy, the processor 20 may obtain the relative position relationships between the first feature point T5 and all four corner points.
Next, the processor 20 may interpolate the first feature point T5 according to the relative position relationship between the first feature point T5 and the four corner points of the first image region R1, so as to obtain the coordinates of the third feature point T7 in the first processed image.
More specifically, the processor 20 may assign a corresponding weight value according to the magnitude of the relative position relationship between the first feature point T5 and each of the four corner points of the first image region R1, and obtain the offset value for interpolating the first feature point T5 into the third feature point T7 from the products of the weight values and the coordinate differences. For example, when the relative position relationship is larger, indicating that the first feature point T5 is farther from a corner point, the processor 20 gives a smaller weight to the coordinate difference between the first feature point T5 and that corner point. Conversely, when the relative position relationship is smaller, indicating that the first feature point T5 is closer to a corner point, the processor 20 gives a larger weight to the coordinate difference between the first feature point T5 and that corner point.
For example, let the coordinate difference between the first feature point T5 and the corner point r1 be (x1, y1), between T5 and the corner point r2 be (x2, y2), between T5 and the corner point r3 be (x3, y3), and between T5 and the corner point r4 be (x4, y4), and let the processor 20 assign the corresponding weight values a, b, c, and d according to the relative position relationships between the first feature point T5 and the corner points r1, r2, r3, and r4. If the coordinates of the first feature point T5 are (x0, y0), then the X-axis coordinate of the third feature point T7 is x5 = x0 + (a·x1 + b·x2 + c·x3 + d·x4), and the Y-axis coordinate is y5 = y0 + (a·y1 + b·y2 + c·y3 + d·y4). Thus, the coordinates of the third feature point T7 can be determined.
Similarly, the fourth feature point T8 can be calculated from the relative position relationship between the second feature point T6 and the corner points of the second image region S1, and the image coordinates of the corner points of the second image region S1 in the first processed image Q4 corresponding to the second image to be processed Q2. The specific calculation method is the same as that for the third feature point T7 described above, and is not repeated here.
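A sketch of this distance-weighted interpolation follows. The translated description leaves the exact weighting scheme and difference terms open, so the normalized inverse-distance weights applied to the corner displacements used here are an assumption, one consistent reading of steps 05125 and 05126:

```python
import numpy as np

def interp_feature_weighted(point, corners_src, corners_dst, eps=1e-6):
    point = np.asarray(point, dtype=float)
    corners_src = np.asarray(corners_src, dtype=float)
    corners_dst = np.asarray(corners_dst, dtype=float)
    # Displacements of the four corner points between the image to be
    # processed and the first processed image.
    diffs = corners_dst - corners_src
    # Weights a, b, c, d: corners closer to the feature point get larger
    # weights (assumed inverse-distance weighting, normalized to sum to 1).
    dists = np.linalg.norm(corners_src - point, axis=1)
    weights = 1.0 / (dists + eps)
    weights /= weights.sum()
    # x5 = x0 + (a*x1 + b*x2 + c*x3 + d*x4), and likewise for y5.
    return point + (weights[:, None] * diffs).sum(axis=0)
```

When all four corners share the same displacement, this reduces to the plain averaging of steps 05123 and 05124.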
Referring to fig. 14, the present embodiment further provides a non-volatile computer-readable storage medium 200 containing a computer program 201. The computer program 201, when executed by the one or more processors 20, causes the one or more processors 20 to perform the image processing method of any of the embodiments described above.
For example, the computer program 201, when executed by the one or more processors 20, causes the processors 20 to perform the following image processing method:
01: acquiring attitude data of continuous multi-frame images to be processed;
03: processing continuous multi-frame images to be processed according to the attitude data to generate a plurality of first processed images; and
05: and processing the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, wherein the acquisition point is any part in the shooting scene.
As another example, the computer program 201, when executed by the one or more processors 20, causes the processors 20 to perform the following image processing method:
031: calculating target attitude data according to multi-frame attitude data corresponding to continuous multi-frame images to be processed based on a preset first function, wherein the continuous multi-frame images to be processed comprise the current image to be processed; and
032: and processing each frame of image to be processed according to the attitude data corresponding to each frame of image to be processed and the target attitude data to generate a first processed image corresponding to each frame of image to be processed.
Also for example, the computer program 201, when executed by the one or more processors 20, causes the processors 20 to perform the following image processing method:
0321: mapping a first image coordinate of each pixel of each frame of image to be processed into a three-dimensional position coordinate based on a preset second function;
0322: generating three-dimensional position coordinates corresponding to the target attitude data according to the attitude data, the target attitude data and the three-dimensional position coordinates corresponding to each frame of image to be processed;
0323: mapping the three-dimensional position coordinate corresponding to the target attitude data into a second image coordinate on the basis of a second function, wherein the second image coordinate corresponds to the first image coordinate one by one; and
0324: and adjusting the pixels corresponding to the first image coordinates to the second image coordinates to generate a first processed image.
In the description herein, references to the description of the terms "certain embodiments," "one example," "exemplary," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. An image processing method, comprising:
acquiring attitude data of continuous multi-frame images to be processed;
processing the continuous multi-frame images to be processed according to the attitude data to generate a plurality of first processed images; and
processing the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, wherein the acquisition point is any part of a shooting scene.
2. The image processing method according to claim 1, wherein the processing the continuous multi-frame images to be processed according to the attitude data to generate a plurality of first processed images comprises:
calculating target attitude data according to the multi-frame attitude data corresponding to the continuous multi-frame images to be processed based on a preset first function, wherein the continuous multi-frame images to be processed comprise the current image to be processed; and
processing each frame of the image to be processed according to the attitude data corresponding to each frame of the image to be processed and the target attitude data to generate the first processed image corresponding to each frame of the image to be processed.
3. The image processing method according to claim 2, wherein the processing each frame of the image to be processed according to the attitude data corresponding to each frame of the image to be processed and the target attitude data to generate the first processed image corresponding to each frame of the image to be processed comprises:
mapping a first image coordinate of each pixel of each frame of the image to be processed into a three-dimensional position coordinate based on a preset second function;
generating a three-dimensional position coordinate corresponding to the target attitude data according to the attitude data, the target attitude data and the three-dimensional position coordinate corresponding to each frame of the image to be processed;
mapping the three-dimensional position coordinate corresponding to the target attitude data into a second image coordinate based on the second function, wherein the second image coordinate is in one-to-one correspondence with the first image coordinate; and
adjusting the pixel corresponding to the first image coordinate to the second image coordinate to generate the first processed image.
4. The image processing method according to claim 1, wherein the processing the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate the target image comprises:
determining an image offset according to the deviation of image coordinates among multiple sets of feature points corresponding to the same acquisition point in the plurality of first processed images; and
transforming the image coordinates of the first processed image corresponding to the current image to be processed according to the image offset to generate the target image.
5. The image processing method according to claim 4, wherein the continuous multi-frame images to be processed comprise a first image to be processed and a second image to be processed, and the determining an image offset according to the deviation of image coordinates among multiple sets of feature points corresponding to the same acquisition point in the plurality of first processed images comprises:
acquiring a first feature point and a second feature point corresponding to the same acquisition point in the first image to be processed and the second image to be processed, wherein the first feature point is located in the first image to be processed, and the second feature point is located in the second image to be processed;
acquiring a third feature point corresponding to the first feature point in the first processed image corresponding to the first image to be processed and acquiring a fourth feature point corresponding to the second feature point in the first processed image corresponding to the second image to be processed; and
determining the image offset according to the deviation between the image coordinates of the third feature point and the fourth feature point.
6. The image processing method according to claim 5, wherein the acquiring a third feature point corresponding to the first feature point in the first processed image corresponding to the first image to be processed and acquiring a fourth feature point corresponding to the second feature point in the first processed image corresponding to the second image to be processed comprises:
respectively establishing a first grid and a second grid on the first image to be processed and the second image to be processed, wherein the first grid and the second grid are both composed of a plurality of rectangular image areas;
acquiring a first image area where the first characteristic point is located and a second image area where the second characteristic point is located;
interpolating to obtain the third feature point according to the image coordinates of the corner points of the first image area in the first processed image corresponding to the first image to be processed; and
interpolating to obtain the fourth feature point according to the image coordinates of the corner points of the second image area in the first processed image corresponding to the second image to be processed.
7. The image processing method according to claim 6, wherein the interpolating to obtain the third feature point according to the image coordinates of the corner points of the first image area in the first processed image corresponding to the first image to be processed comprises:
interpolating to obtain the third feature point according to the relative position relationship between the first feature point and the corner points of the first image area, and the image coordinates of the corner points of the first image area in the first processed image corresponding to the first image to be processed;
and the interpolating to obtain the fourth feature point according to the image coordinates of the corner points of the second image area in the first processed image corresponding to the second image to be processed comprises:
interpolating to obtain the fourth feature point according to the relative position relationship between the second feature point and the corner points of the second image area, and the image coordinates of the corner points of the second image area in the first processed image corresponding to the second image to be processed.
8. An image processing apparatus, comprising:
an acquisition module, configured to acquire attitude data of continuous multi-frame images to be processed;
a first generating module, configured to process the continuous multi-frame images to be processed according to the attitude data to generate a plurality of first processed images; and
a second generating module, configured to process the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, wherein the acquisition point is any part of the shooting scene.
9. An electronic device, comprising a processor configured to:
acquiring attitude data of continuous multi-frame images to be processed;
processing the continuous multi-frame images to be processed according to the attitude data to generate a plurality of first processed images; and
processing the first processed image corresponding to the current image to be processed according to the deviation between the feature points corresponding to the same acquisition point in the plurality of first processed images to generate a target image, wherein the acquisition point is any part of a shooting scene.
10. A non-volatile computer-readable storage medium comprising a computer program which, when executed by a processor, causes the processor to perform the image processing method of any one of claims 1 to 7.
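Claims 4 to 7 estimate the residual image offset from feature points: acquisition points are matched between two images to be processed, each matched point is carried into its first processed image by interpolating the warped corner points of the grid cell that contains it, and the deviation between the resulting third and fourth feature points gives the offset. The sketch below is one plausible reading, assuming Shi-Tomasi corners tracked with Lucas-Kanade optical flow (the claims name no matcher) and re-using the per-frame homographies from the earlier sketches as the corner mapping; when the first pass is a mesh warp known only at grid vertices, this corner interpolation is exactly what makes the mapping computable:

```python
import numpy as np
import cv2

def apply_h(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography."""
    p = cv2.perspectiveTransform(pts.reshape(-1, 1, 2).astype(np.float32), H)
    return p.reshape(-1, 2)

def warped_position(pt, H, cell=64.0):
    """Claims 6-7: locate the rectangular grid cell holding pt, warp the
    cell's corner points, then bilinearly interpolate using the relative
    position of pt inside the cell."""
    x0, y0 = np.floor(pt / cell) * cell
    corners = np.array([[x0, y0], [x0 + cell, y0],
                        [x0, y0 + cell], [x0 + cell, y0 + cell]])
    wc = apply_h(H, corners)
    fx, fy = (pt - (x0, y0)) / cell            # relative position in the cell
    top = (1 - fx) * wc[0] + fx * wc[1]
    bottom = (1 - fx) * wc[2] + fx * wc[3]
    return (1 - fy) * top + fy * bottom

def feature_point_offset(raw1, raw2, H1, H2):
    """Claims 4-5: first/second feature points are matched acquisition
    points in the two images to be processed; third/fourth feature points
    are their interpolated positions in the first processed images; the
    mean deviation of the latter pair is the image offset."""
    g1 = cv2.cvtColor(raw1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(raw2, cv2.COLOR_BGR2GRAY)
    p1 = cv2.goodFeaturesToTrack(g1, maxCorners=200,
                                 qualityLevel=0.01, minDistance=10)
    p2, ok, _err = cv2.calcOpticalFlowPyrLK(g1, g2, p1, None)
    keep = ok.ravel() == 1
    p1 = p1.reshape(-1, 2)[keep]               # first feature points
    p2 = p2.reshape(-1, 2)[keep]               # second feature points
    third = np.array([warped_position(p, H1) for p in p1])
    fourth = np.array([warped_position(p, H2) for p in p2])
    return np.mean(fourth - third, axis=0)     # image offset
```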
CN202111416688.8A 2021-11-19 2021-11-19 Image processing method and device, electronic equipment and computer readable storage medium Pending CN114071019A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111416688.8A CN114071019A (en) 2021-11-19 2021-11-19 Image processing method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114071019A (en) 2022-02-18

Family

ID=80276295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111416688.8A Pending CN114071019A (en) 2021-11-19 2021-11-19 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114071019A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251317A (en) * 2016-09-13 2016-12-21 野拾(北京)电子商务有限公司 Space photography stabilization processing method and processing device
CN108600622A (en) * 2018-04-12 2018-09-28 联想(北京)有限公司 A kind of method and device of video stabilization
WO2021102893A1 (en) * 2019-11-29 2021-06-03 Oppo广东移动通信有限公司 Method and apparatus for video anti-shaking optimization and electronic device

Similar Documents

Publication Publication Date Title
CN110473159B (en) Image processing method and device, electronic equipment and computer readable storage medium
USRE45231E1 (en) Taken-image signal-distortion compensation method, taken-image signal-distortion compensation apparatus, image taking method and image-taking apparatus
US10497140B2 (en) Hybrid depth sensing pipeline
WO2015081870A1 (en) Image processing method, device and terminal
CN102547080B (en) Camera module and comprise the messaging device of this camera module
JP5914813B2 (en) Camera, distortion correction apparatus, and distortion correction method
US8289420B2 (en) Image processing device, camera device, image processing method, and program
US10440267B2 (en) System and method for image stitching
EP2847998B1 (en) Systems, methods, and computer program products for compound image demosaicing and warping
US10771758B2 (en) Immersive viewing using a planar array of cameras
CN109495733B (en) Three-dimensional image reconstruction method, device and non-transitory computer readable storage medium thereof
US20200162671A1 (en) Image capturing system, terminal and computer readable medium which correct images
CN111179154A (en) Circular fisheye camera array correction
CN115701125B (en) Image anti-shake method and electronic equipment
CN115049548A (en) Method and apparatus for restoring image obtained from array camera
CN114071019A (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2010141533A1 (en) Generating images with different fields of view
WO2019107513A1 (en) Distribution image generation method
US8792012B2 (en) Image processing device, system, and method for correcting focal plane distortion using a motion vector
JP7219620B2 (en) Delivery image generation method
CN117135456B (en) Image anti-shake method and electronic equipment
US9369630B2 (en) Electronic apparatus and method of controlling the same
JP6083526B2 (en) Information processing apparatus, program, and method
US20220165021A1 (en) Apparatus, system, method, and non-transitory medium
JP2012015982A (en) Method for deciding shift amount between videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination