CN110611767B - Image processing method and device and electronic equipment


Info

Publication number
CN110611767B
CN110611767B (application CN201910914273.XA)
Authority
CN
China
Prior art keywords
image
processed
compensation
translation
target object
Prior art date
Legal status
Active
Application number
CN201910914273.XA
Other languages
Chinese (zh)
Other versions
CN110611767A (en)
Inventor
袁梓瑾 (Yuan Zijin)
Current Assignee
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd
Priority to CN201910914273.XA
Publication of CN110611767A
Application granted
Publication of CN110611767B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/207Analysis of motion for motion estimation over a hierarchy of resolutions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/685Vibration or motion blur correction performed by mechanical compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image processing method, an image processing device and electronic equipment, wherein the image processing method comprises the following steps: extracting mutually matched feature points of the target object from the reference image and the image to be processed to obtain feature matching point pairs; performing translation compensation on the image to be processed and the feature points of the image to be processed based on the feature matching point pairs; calculating motion estimation parameters according to the characteristic points of the reference image and the characteristic points of the image to be processed after the translation compensation; and performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align the target object in the image after the motion compensation with the target object in the reference image. According to the image processing method, after the image to be processed is processed, the target object in the obtained image can be aligned with the target object in the reference image, so that when pictures are switched among lenses with different focal lengths, the target object can be kept smooth and stable.

Description

Image processing method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
Dollyzoom is a special effect in photography, and can produce the effect that the size ratio of a shot subject is kept unchanged, and the background field of view is greatly changed (for example, the background field of view is expanded or compressed).
At present, in the process of film shooting, an optical Dollyzoom product is often used to give the shot picture the Dollyzoom effect. The specific process is as follows: when shooting a target, the focal length of the camera lens is adjusted while a rail car (Dolly) carrying the camera is moved in the direction opposite to the zooming direction of the lens, so that the shot picture exhibits the Dollyzoom effect. In practical applications, some camera lenses cannot adjust their focal lengths (for example, camera lenses mounted on an unmanned aerial vehicle); in order to make the finally captured pictures have the Dollyzoom effect, multiple lenses with different focal lengths (where the focal length of each lens is fixed) are generally required to capture the pictures so as to achieve the purpose of zooming, and a product adopting this image capturing mode is called a digital Dollyzoom product. However, due to the inconsistency of parameters such as the focal length and optical center position of each lens, a parallax phenomenon exists in the overlapping part of the fields of view of the imaging pictures of different lenses, so that when the picture is switched between different lenses, unnatural effects such as discontinuity and jumping of foreground objects in the picture occur, and the final shot footage is poor compared with footage shot by an optical Dollyzoom product.
In the prior art, no effective solution exists for the unnatural phenomena of discontinuity, jumping and the like of foreground objects in the images when the images are switched among different lenses.
Disclosure of Invention
In view of the above, the present invention provides an image processing method, an image processing apparatus and an electronic device, so that when a picture is switched between different shots, a foreground object in the picture is continuous and natural.
In a first aspect, an embodiment of the present invention provides an image processing method, including: extracting mutually matched feature points of the target object from the reference image and the image to be processed to obtain feature matching point pairs; the reference image and the image to be processed are images obtained by shooting the target object at the same time through lenses with different focal lengths; performing translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation; calculating motion estimation parameters according to the characteristic points of the reference image in the characteristic matching point pair and the characteristic points of the image to be processed after the translation compensation; and performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align a target object in the image after the motion compensation with a target object in the reference image.
Further, the feature points of the reference image in the feature matching point pair are uniformly distributed in the image area of the target object in the reference image, and the feature points of the image to be processed in the feature matching point pair are uniformly distributed in the image area of the target object in the image to be processed.
Further, the step of performing translation compensation on the feature points of the image to be processed and the image to be processed in the feature matching point pair based on the feature matching point pair includes: determining the image translation amount according to the feature matching point pairs; and respectively carrying out translation compensation on the image to be processed and the characteristic points of the image to be processed according to the image translation amount.
Further, the step of determining the image translation amount according to the feature matching point pairs comprises: determining a reference point of the reference image according to the feature point of the reference image in the feature matching point pair, and determining a reference point of the image to be processed according to the feature point of the image to be processed in the feature matching point pair; determining the image translation amount based on the reference point of the reference image and the reference point of the image to be processed.
Further, the step of determining the reference point of the reference image according to the feature point of the reference image in the feature matching point pair, and determining the reference point of the image to be processed according to the feature point of the image to be processed in the feature matching point pair includes: calculating the centroid of a target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the centroid of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed; taking the calculated centroid of the target object in the reference image as a reference point of the reference image, and taking the calculated centroid of the target object in the image to be processed as a reference point of the image to be processed;
or, calculating the center of the target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the center of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed; and taking the center of the target object in the reference image obtained by calculation as a reference point of the reference image, and taking the center of the target object in the image to be processed obtained by calculation as a reference point of the image to be processed.
Further, the step of determining the image translation amount based on the reference point of the reference image and the reference point of the image to be processed includes: calculating the image translation amount according to the image translation amount calculation formula

T_x = x_A − x_B,  T_y = y_A − y_B

where T_x represents the image translation amount in the X direction, T_y represents the image translation amount in the Y direction, x_A represents the abscissa of the reference point of the reference image, x_B represents the abscissa of the reference point of the image to be processed, y_A represents the ordinate of the reference point of the reference image, and y_B represents the ordinate of the reference point of the image to be processed.
Further, the step of respectively performing translation compensation on the image to be processed and the feature points of the image to be processed according to the image translation amount comprises: translating each pixel point in the image to be processed according to the image translation amount; and translating the characteristic points of the image to be processed according to the image translation amount.
Further, the step of calculating the motion estimation parameters according to the feature points of the reference image in the feature matching point pair and the feature points of the image to be processed after the translation compensation includes: performing similarity transformation fitting on the feature points of the reference image and the feature points of the image to be processed after the translation compensation according to the similarity transformation fitting formula

(s, θ) = argmin_{s, θ} Σ_{i=1}^{n−1} ‖ s·R(θ)·p_Ai − p′_Bi ‖²,  R(θ) = [cos θ  −sin θ; sin θ  cos θ]

to obtain the motion estimation parameters; s and θ denote the motion estimation parameters, p_Ai represents the i-th feature point of the reference image, and p′_Bi represents the i-th feature point of the image to be processed after the translation compensation.
Further, the step of performing motion compensation on the translational compensated image to be processed based on the motion estimation parameter includes: and performing inverse similarity transformation compensation on the image to be processed after the translation compensation based on the motion estimation parameters.
Further, the step of performing inverse similarity transformation compensation on the image to be processed after the translational compensation based on the motion estimation parameters comprises: determining an inverse similarity transformation compensation matrix according to the motion estimation parameters; and processing the image to be processed after the translation compensation through the inverse similarity transformation compensation matrix so as to align a target object in the processed image with a target object in the reference image.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including: the characteristic point extraction unit is used for extracting mutually matched characteristic points of the target object from the reference image and the image to be processed to obtain characteristic matching point pairs; the reference image and the image to be processed are images obtained by shooting the target object at the same time through lenses with different focal lengths; the translation compensation unit is used for carrying out translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation; the calculation unit is used for calculating motion estimation parameters according to the characteristic points of the reference image in the characteristic matching point pair and the characteristic points of the image to be processed after the translation compensation; and the motion compensation unit is used for performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align a target object in the image after the motion compensation with a target object in the reference image.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable medium having non-volatile program code executable by a processor, where the program code causes the processor to perform the steps of the method according to any one of the first aspect.
In the embodiment of the invention, mutually matched feature points of a target object are extracted from a reference image and an image to be processed to obtain feature matching point pairs; then, based on the feature matching point pair, performing translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation; further, calculating motion estimation parameters according to the feature points of the reference image in the feature matching point pair and the feature points of the image to be processed after the translation compensation; and finally, performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align the target object in the image after the motion compensation with the target object in the reference image. As can be seen from the above description, after the image to be processed is processed by the image processing method in the embodiment of the present invention, the target object in the obtained image can be aligned with the target object in the reference image, so that when the picture is switched from the reference image to the processed image, the target object can be kept smooth, continuous, stable and natural, that is, when the picture is switched between the lenses with different focal lengths, the target object can be kept smooth and stable, and thus, the unnatural technical problems of discontinuity, jumping and the like of the foreground object in the picture when the picture is switched between the lenses with different focal lengths in the prior art are solved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for performing translation compensation on an image to be processed and feature points of the image to be processed according to an embodiment of the present invention;
fig. 4 is a flowchart of a method for performing inverse similarity transformation compensation on a translational compensated image to be processed based on a motion estimation parameter according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
first, an electronic device 100 for implementing an embodiment of the present invention, which can be used to execute an image processing method according to embodiments of the present invention, is described with reference to fig. 1.
As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more memories 104, an input device 106, an output device 108, and a camera 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), and an Application-Specific Integrated Circuit (ASIC). The processor 102 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The memory 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The camera 110 is configured to capture the reference image and the image to be processed. After the reference image and the image to be processed are processed by the image processing method, the target object in the obtained image is aligned with the target object in the reference image. For example, the camera may capture an image (e.g., a photograph or a video) desired by a user; after the image is processed by the image processing method, the target object in the obtained image is aligned with the target object in the reference image. The camera may further store the captured image in the memory 104 for use by other components.
Exemplarily, the electronic device for implementing the image processing method according to the embodiment of the present invention may be implemented as a smart mobile terminal such as a smart phone, a tablet computer, etc., and may also be implemented as any other device having computing capability.
Example 2:
according to an embodiment of the present invention, there is provided an embodiment of an image processing method, it should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases, the steps shown or described may be executed in an order different from that here.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention, as shown in fig. 2, the method including the steps of:
step S202, extracting mutually matched characteristic points of the target object from the reference image and the image to be processed to obtain characteristic matching point pairs.
The reference image and the image to be processed are images obtained by shooting the target object at the same time through lenses with different focal lengths. The target object may be any real object such as a person, an animal, or a vehicle, and there may be one or more target objects.
When a movie with the Dollyzoom effect is shot through lenses with different focal lengths, the imaging picture needs to be switched between the lenses with different focal lengths. Suppose it has been determined that at time ts (where ts refers to any time), the imaging picture shot by lens A (i.e., the reference image) should be switched to the imaging picture shot by lens B (i.e., the image to be processed). However, when switching from the imaging picture shot by lens A to the imaging picture shot by lens B, discontinuity of foreground objects in the pictures occurs at the moment of switching, due to the inconsistency of parameters such as the focal lengths and optical center positions of the respective lenses.
Based on this, the inventor designs an image processing method in the embodiment, which can be applied to a processor, wherein the processor is respectively connected with the lenses with different focal lengths in a communication manner, and is used for processing imaging pictures shot by the lenses with different focal lengths on line in real time; the method can also be applied to a computer, and the post-processing is carried out on the imaging pictures shot by the lenses with different focal lengths through the computer so as to realize smooth and continuous foreground targets when the pictures are switched among the lenses with different focal lengths. The embodiment of the present invention does not specifically limit the implementation manner.
The feature matching point pair includes: the characteristic points of the reference image and the characteristic points of the image to be processed corresponding to the characteristic points of the reference image. Specifically, the feature points of the reference image are feature points of a target object in the reference image, and the feature points of the image to be processed are feature points of the target object in the image to be processed.
And S204, performing translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation.
Considering that discontinuity of a target object in an image can occur when the reference image is directly switched to the image to be processed, translation compensation needs to be performed on the image to be processed based on the feature matching point pair, so that pixel points of the target object in the image to be processed after translation compensation and pixel points of the target object in the reference image are integrally aligned in position.
However, after the above alignment in position is achieved, smooth continuity of the target object during picture switching still cannot be ensured; it is also necessary to ensure that the pixel points of the target object in the final image are aligned in form with the pixel points of the target object in the reference image. Therefore, motion compensation in form is also required. To implement this motion compensation, translation compensation needs to be performed on the feature points of the image to be processed based on the feature matching point pairs to obtain the feature points of the image to be processed after the translation compensation; this process is described in detail below.
And step S206, calculating motion estimation parameters according to the characteristic points of the reference image in the characteristic matching point pair and the characteristic points of the image to be processed after the translation compensation.
And after the characteristic points of the image to be processed after the translational compensation are obtained, calculating motion estimation parameters according to the characteristic points of the image to be processed after the translational compensation and the characteristic points of the reference image.
The motion estimation parameters are used for carrying out shape processing on the pixel points of the image to be processed after the translational compensation.
And step S208, performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align the target object in the image after the motion compensation with the target object in the reference image.
After the motion-compensated image is obtained, at time ts the reference image shot by lens A can be switched directly to the motion-compensated image, so that the target object remains smooth and continuous on the same display area when the picture is switched between different lenses. The above alignment can be regarded as the target object in the motion-compensated image and the target object in the reference image being aligned as a whole.
In the embodiment of the invention, mutually matched feature points of a target object are extracted from a reference image and an image to be processed to obtain feature matching point pairs; then, based on the feature matching point pair, performing translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation; further, calculating motion estimation parameters according to the feature points of the reference image in the feature matching point pair and the feature points of the image to be processed after the translation compensation; and finally, performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align the target object in the image after the motion compensation with the target object in the reference image. As can be seen from the above description, after the image to be processed is processed by the image processing method in the embodiment of the present invention, the target object in the obtained image can be aligned with the target object in the reference image, so that when the picture is switched from the reference image to the processed image, the target object can be kept smooth, continuous, stable and natural, that is, when the picture is switched between the lenses with different focal lengths, the target object can be kept smooth and stable, and thus, the unnatural technical problems of discontinuity, jumping and the like of the foreground object in the picture when the picture is switched between the lenses with different focal lengths in the prior art are solved.
The foregoing briefly introduces the image processing method of the present invention, and the details thereof are described in detail below.
In this embodiment, given step S202 above, one implementation manner of extracting mutually matched feature points of the target object in the reference image and the image to be processed includes:
and extracting the characteristic points of the target object in the reference image and the image to be processed by adopting a characteristic extraction algorithm to obtain characteristic matching point pairs.
The characteristic points of the reference image in the characteristic matching point pair are uniformly distributed in the image area of the target object in the reference image, and the characteristic points of the image to be processed in the characteristic matching point pair are uniformly distributed in the image area of the target object in the image to be processed.
In the embodiment of the invention, when a feature extraction algorithm is adopted to process the reference image and the image to be processed, the feature points of the target object in the reference image are extracted first, and then the feature points of the target object in the image to be processed are extracted. A mesh-flow technique is fused into the algorithm, so that the extracted feature points of the reference image and the extracted feature points of the image to be processed are uniformly distributed in the region of the target object (which makes the subsequent transformation more accurate) and are in one-to-one correspondence, i.e., the feature matching point pairs are finally obtained.
For convenience of description, in this embodiment, the reference image may be an image obtained by shooting the target object through lens A at time ts, denoted as I_A,ts; the image to be processed may be an image obtained by shooting the target object through lens B at time ts, denoted as I_B,ts.
The extracted feature matching point pairs are: {p_Ai → p_Bi}, where {p_Ai} denotes the feature points of the reference image, {p_Bi} denotes the feature points of the image to be processed, and i = 1, 2, 3, …, n−1.
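As an illustration of this step, matched point pairs of this form can be produced with a standard detector-plus-matcher pipeline. The sketch below (Python with OpenCV) uses ORB features and cross-checked brute-force matching as a stand-in for the mesh-flow-fused extractor described above; the patent does not prescribe this particular detector, and the function name is a placeholder:

```python
import cv2
import numpy as np

def extract_feature_matching_pairs(ref_img, img_to_process, max_pairs=200):
    """Return matched feature point arrays {p_Ai}, {p_Bi}, each of shape (n, 2)."""
    gray_a = cv2.cvtColor(ref_img, cv2.COLOR_BGR2GRAY) if ref_img.ndim == 3 else ref_img
    gray_b = cv2.cvtColor(img_to_process, cv2.COLOR_BGR2GRAY) if img_to_process.ndim == 3 else img_to_process
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, desc_a = orb.detectAndCompute(gray_a, None)
    kp_b, desc_b = orb.detectAndCompute(gray_b, None)
    # Cross-checked Hamming matching keeps only mutually consistent pairs.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)[:max_pairs]
    p_a = np.float32([kp_a[m.queryIdx].pt for m in matches])  # {p_Ai}
    p_b = np.float32([kp_b[m.trainIdx].pt for m in matches])  # {p_Bi}
    return p_a, p_b
```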
In this embodiment, given the above step S204, an implementation manner of performing translational compensation on the to-be-processed image and the feature points of the to-be-processed image in the feature matching point pair based on the feature matching point pair is described with reference to fig. 3, and includes the following steps:
and S301, determining the image translation amount according to the feature matching point pairs.
Specifically, the process of determining the image translation amount includes the following steps (1) and (2):
(1) Determining the reference point of the reference image according to the feature points of the reference image in the feature matching point pair, and determining the reference point of the image to be processed according to the feature points of the image to be processed in the feature matching point pair.
In the embodiment of the present invention, two ways of determining the reference point are given, and the two ways are described below respectively.
The first method is as follows: case where the reference point is the centroid of the target object: and calculating the centroid of the target object in the reference image according to the coordinates of the characteristic points of the reference image, calculating the centroid of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed, taking the calculated centroid of the target object in the reference image as the reference point of the reference image, and taking the calculated centroid of the target object in the image to be processed as the reference point of the image to be processed.
Specifically, the centroid of the target object in the reference image and the centroid of the target object in the image to be processed are calculated according to the centroid calculation formula

C_x = (Σ_{i=1}^{n−1} (P_i^x + P_{i+1}^x)·W_i) / (3·Σ_{i=1}^{n−1} W_i),  C_y = (Σ_{i=1}^{n−1} (P_i^y + P_{i+1}^y)·W_i) / (3·Σ_{i=1}^{n−1} W_i)

where C_x represents the abscissa of the centroid, C_y represents the ordinate of the centroid, P_i^x represents the abscissa of the i-th feature point, P_{i+1}^x represents the abscissa of the (i+1)-th feature point, P_i^y represents the ordinate of the i-th feature point, P_{i+1}^y represents the ordinate of the (i+1)-th feature point, and W_i represents the weight coefficient, with W_i = P_i^x·P_{i+1}^y − P_{i+1}^x·P_i^y.
The calculated centroid of the target object in the reference image is then taken as the reference point of the reference image, and the calculated centroid of the target object in the image to be processed is taken as the reference point of the image to be processed.
It should be noted that when i = n−1 (i.e., P_i = P_{n−1} is the last feature point), P_{i+1} is the first feature point P_1 (i.e., P_n = P_1).
Taking the case where the reference image is I_A,ts, the image to be processed is I_B,ts, the feature points of the reference image are {p_Ai}, and the feature points of the image to be processed are {p_Bi}, the calculation of the centroid of the target object in each image proceeds as follows:
When calculating the centroid of the target object in the reference image I_A,ts, the above centroid calculation formula is applied with C_x and C_y representing the abscissa and ordinate of the centroid of the target object in the reference image, and P_i^x, P_{i+1}^x, P_i^y, P_{i+1}^y representing the coordinates of the i-th and (i+1)-th feature points among the feature points of the reference image, W_i being the weight coefficient as above.
When calculating the centroid of the target object in the image to be processed I_B,ts, the same formula is applied with C_x and C_y representing the abscissa and ordinate of the centroid of the target object in the image to be processed, and P_i^x, P_{i+1}^x, P_i^y, P_{i+1}^y representing the coordinates of the i-th and (i+1)-th feature points among the feature points of the image to be processed.
and then, taking the calculated centroid of the target object in the reference image as a reference point of the reference image, and taking the calculated centroid of the target object in the image to be processed as a reference point of the image to be processed.
The second method comprises the following steps: case where the reference point is the center of the target object: calculating the center of a target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the center of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed; and taking the center of the target object in the calculated reference image as a reference point of the reference image, and taking the center of the target object in the calculated image to be processed as a reference point of the image to be processed.
Specifically, the center of the target object in the reference image and the center of the target object in the image to be processed are calculated based on the center calculation formula

C_x = (1/(n−1))·Σ_{i=1}^{n−1} P_i^x,  C_y = (1/(n−1))·Σ_{i=1}^{n−1} P_i^y

where C_x represents the abscissa of the center, C_y represents the ordinate of the center, P_i^x represents the abscissa of the i-th feature point, and P_i^y represents the ordinate of the i-th feature point.
The calculated center of the target object in the reference image is taken as the reference point of the reference image, and the calculated center of the target object in the image to be processed is taken as the reference point of the image to be processed.
Taking the case where the reference image is I_A,ts, the image to be processed is I_B,ts, the feature points of the reference image are {p_Ai}, and the feature points of the image to be processed are {p_Bi}, the calculation of the center of the target object in each image proceeds as follows:
When calculating the center of the target object in the reference image I_A,ts, the above center calculation formula is applied with C_x and C_y representing the abscissa and ordinate of the center of the target object in the reference image, and P_i^x, P_i^y representing the abscissa and ordinate of the i-th feature point among the feature points of the reference image.
When calculating the center of the target object in the image to be processed I_B,ts, the same formula is applied with C_x and C_y representing the abscissa and ordinate of the center of the target object in the image to be processed, and P_i^x, P_i^y representing the abscissa and ordinate of the i-th feature point among the feature points of the image to be processed.
The calculated center of the target object in the reference image is then taken as the reference point of the reference image, and the calculated center of the target object in the image to be processed is taken as the reference point of the image to be processed.
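For the second option, a minimal sketch under the assumption (one plausible reading of the figure-only center formula, given that only P_i^x and P_i^y are defined) that the center is the arithmetic mean of the feature point coordinates:

```python
import numpy as np

def feature_center(points):
    """Center of the target object as the mean of its feature point coordinates."""
    p = np.asarray(points, dtype=np.float64)
    return p[:, 0].mean(), p[:, 1].mean()   # (C_x, C_y)
```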
(2) The image translation amount is determined based on the reference point of the reference image and the reference point of the image to be processed.
Specifically, the image translation amount is calculated according to the image translation amount calculation formula

T_x = x_A − x_B,  T_y = y_A − y_B

where T_x represents the image translation amount in the X direction, T_y represents the image translation amount in the Y direction, x_A represents the abscissa of the reference point of the reference image, x_B represents the abscissa of the reference point of the image to be processed, y_A represents the ordinate of the reference point of the reference image, and y_B represents the ordinate of the reference point of the image to be processed.
And S302, respectively carrying out translation compensation on the image to be processed and the characteristic points of the image to be processed according to the image translation amount.
Specifically, each pixel point in the image to be processed is translated according to the image translation amount; and translating the characteristic points of the image to be processed according to the image translation amount.
When performing translation compensation on the image to be processed: the image to be processed is translation-compensated according to the image translation compensation formula I′_B,ts(x, y) = I_B,ts(x − T_x, y − T_y),
where I′_B,ts(x, y) denotes the image to be processed after the translation compensation, T_x represents the image translation amount in the X direction, and T_y represents the image translation amount in the Y direction.
When the image translation amount is the translation compensation amount calculated based on the centroid, after the image to be processed is translation-compensated, the centroid of the target object in the translation-compensated image to be processed coincides with the centroid of the target object in the reference image;
and when the image translation amount is the translation compensation amount calculated based on the center, after the image to be processed is translation-compensated, the center of the target object in the translation-compensated image to be processed coincides with the center of the target object in the reference image.
In fact, after the above-mentioned translation compensation is completed, the alignment in position of each pixel point of the target object in the image to be processed after the translation compensation and each pixel point of the target object in the reference image with respect to the reference point is realized.
When performing translation compensation on the feature points of the image to be processed: the feature points of the image to be processed are translation-compensated according to the feature point translation compensation formula

p′_Bi^x = p_Bi^x + T_x,  p′_Bi^y = p_Bi^y + T_y

where p′_Bi^x represents the abscissa of the i-th feature point of the image to be processed after the translation compensation, p′_Bi^y represents the ordinate of the i-th feature point of the image to be processed after the translation compensation, p_Bi^x represents the abscissa of the i-th feature point of the image to be processed, p_Bi^y represents the ordinate of the i-th feature point of the image to be processed, T_x represents the image translation amount in the X direction, T_y represents the image translation amount in the Y direction, and i = 1, 2, 3, …, n−1.
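Both translations can be sketched together as follows; cv2.warpAffine with a pure-translation matrix implements I′_B,ts(x, y) = I_B,ts(x − T_x, y − T_y), and the feature points are shifted by the same (T_x, T_y). Function and variable names are illustrative, not taken from the patent:

```python
import cv2
import numpy as np

def translation_compensate(img_b, pts_b, ref_point_a, ref_point_b):
    """Shift the to-be-processed image and its feature points by (T_x, T_y).

    ref_point_a / ref_point_b: (x, y) reference points (centroid or center)
    of the reference image and of the image to be processed, as computed above.
    """
    tx = ref_point_a[0] - ref_point_b[0]          # T_x = x_A - x_B
    ty = ref_point_a[1] - ref_point_b[1]          # T_y = y_A - y_B
    h, w = img_b.shape[:2]
    shift = np.float32([[1, 0, tx], [0, 1, ty]])  # warp realizes I'(x,y) = I(x-Tx, y-Ty)
    img_b_shifted = cv2.warpAffine(img_b, shift, (w, h))
    pts_b_shifted = np.asarray(pts_b, dtype=np.float64) + [tx, ty]  # p'_Bi = p_Bi + T
    return img_b_shifted, pts_b_shifted
```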
After the feature points of the image to be processed after the translational compensation are obtained, the motion estimation parameters can be further calculated according to the feature points of the image to be processed after the translational compensation and the feature points of the reference image in the feature matching point pair, and in an optional implementation manner, the process of calculating the motion estimation parameters includes the following steps:
fitting formula according to similarity transformation
Figure BDA0002215504760000171
The characteristic points of the reference image and the characteristic points of the image to be processed after the translation compensation are carried outAnd performing similarity transformation fitting to obtain motion estimation parameters.
Wherein s, theta denote motion estimation parameters, s denotes a size parameter in the motion estimation parameters, theta denotes a rotation angle parameter in the motion estimation parameters, and pAiI-th feature point, p 'representing a reference picture'BiAnd representing the ith characteristic point of the image to be processed after the translation compensation.
In an embodiment of the invention, the motion estimation uses a similarity transformation fit, in terms of
Figure BDA0002215504760000172
Carrying out similarity transformation fitting on the characteristic points of the reference image and the characteristic points of the image to be processed after the translation compensation, and solving
Figure BDA0002215504760000173
At the minimum, the corresponding values of s, θ, where,
Figure BDA0002215504760000174
Figure BDA0002215504760000175
Figure BDA0002215504760000176
the abscissa representing the ith feature point of the reference image,
Figure BDA0002215504760000177
the ordinate of the i-th feature point of the reference image is represented,
Figure BDA0002215504760000178
the abscissa representing the ith feature point of the image to be processed after the translation compensation,
Figure BDA0002215504760000179
and representing the ordinate of the ith characteristic point of the image to be processed after the translation compensation.
It should be noted that, in addition to the above similarity transformation fitting, other motion estimation methods may also be used, such as affine, perspective, or projective transformation fitting; among these, however, similarity transformation fitting is the most reliable motion estimation method. The embodiment of the present invention does not specifically limit the motion estimation method.
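Under the reconstruction above (minimizing Σ‖s·R(θ)·p_Ai − p′_Bi‖²), s and θ admit a closed-form least-squares solution, which the sketch below implements directly rather than calling a library solver; this is an illustration under that reconstruction, not the patent's prescribed solver:

```python
import numpy as np

def fit_similarity(p_a, p_b_shifted):
    """Least-squares s, theta minimizing sum_i ||s*R(theta)*p_Ai - p'_Bi||^2.

    p_a:         (n, 2) feature points of the reference image.
    p_b_shifted: (n, 2) translation-compensated feature points.
    """
    a = np.asarray(p_a, dtype=np.float64)
    b = np.asarray(p_b_shifted, dtype=np.float64)
    dot = (a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1]).sum()  # sum of <a_i, b_i>
    crs = (a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]).sum()  # sum of a_i x b_i (2-D cross)
    theta = np.arctan2(crs, dot)                         # optimal rotation angle
    s = np.hypot(dot, crs) / (a ** 2).sum()              # optimal scale
    return s, theta
```

With p_a and the shifted points from the earlier sketches, fit_similarity(p_a, pts_b_shifted) yields the (s, θ) consumed by step S208.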
After the motion estimation parameters are obtained, correction in form (i.e., motion compensation) can further be performed on the pixel points of the image to be processed after the translation compensation based on the motion estimation parameters. In the above step S208, the step of performing motion compensation on the image to be processed after the translation compensation based on the motion estimation parameters includes: performing inverse similarity transformation compensation on the image to be processed after the translation compensation based on the motion estimation parameters, which, with reference to fig. 4, includes the following steps.
Step S401, determining an inverse similarity transformation compensation matrix according to the motion estimation parameters.
And step S402, processing the image to be processed after the translation compensation through the inverse similarity transformation compensation matrix so as to align the target object in the processed image with the target object in the reference image.
Specifically, the inverse similarity transformation compensation matrix may be

H = [s·cos θ  −s·sin θ; s·sin θ  s·cos θ] = s·R(θ)

and the image to be processed after the translation compensation is then processed through the inverse similarity transformation compensation matrix, with the processing result

I″_B,ts(x, y) = I′_B,ts(x′, y′), where I″_B,ts(x, y) represents the processed image and (x′, y′)ᵀ = H·(x, y)ᵀ.

In this way, each pixel position (x, y) of the output image samples the translation-compensated image at the position predicted by the fitted similarity transformation, which applies the inverse of the fitted transformation to the image content. The I″_B,ts(x, y) thus obtained is an image whose target object is aligned with the target object in I_A,ts; further, smooth stabilization of the target object in the picture can be realized when lens A is switched to lens B at time ts in the digital Dollyzoom processing.
Example 3:
an embodiment of the present invention further provides an image processing apparatus, which is mainly used for executing the image processing method provided by the foregoing content of the embodiment of the present invention, and the image processing apparatus provided by the embodiment of the present invention is specifically described below.
Fig. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention, as shown in fig. 5, the image processing apparatus mainly includes: a feature point extraction unit 10, a translation compensation unit 20, a calculation unit 30, and a motion compensation unit 40, wherein:
the characteristic point extraction unit is used for extracting mutually matched characteristic points of the target object from the reference image and the image to be processed to obtain characteristic matching point pairs; the reference image and the image to be processed are images obtained by shooting a target object at the same time through lenses with different focal lengths;
the translation compensation unit is used for carrying out translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation;
the computing unit is used for computing motion estimation parameters according to the feature points of the reference image in the feature matching point pair and the feature points of the image to be processed after the translation compensation;
and the motion compensation unit is used for performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align the target object in the image after the motion compensation with the target object in the reference image.
In the embodiment of the invention, mutually matched feature points of a target object are extracted from a reference image and an image to be processed to obtain feature matching point pairs; then, based on the feature matching point pair, performing translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation; further, calculating motion estimation parameters according to the feature points of the reference image in the feature matching point pair and the feature points of the image to be processed after the translation compensation; and finally, performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align the target object in the image after the motion compensation with the target object in the reference image. As can be seen from the above description, after the image to be processed is processed by the image processing method in the embodiment of the present invention, the target object in the obtained image can be aligned with the target object in the reference image, so that when the picture is switched from the reference image to the processed image, the target object can be kept smooth, continuous, stable and natural, that is, when the picture is switched between the lenses with different focal lengths, the target object can be kept smooth and stable, and thus, the unnatural technical problems of discontinuity, jumping and the like of the foreground object in the picture when the picture is switched between the lenses with different focal lengths in the prior art are solved.
Optionally, the feature points of the reference image in the feature matching point pair are uniformly distributed in the image region of the target object in the reference image, and the feature points of the image to be processed in the feature matching point pair are uniformly distributed in the image region of the target object in the image to be processed.
Optionally, the translation compensation unit is further configured to: determining the image translation amount according to the feature matching point pairs; and respectively carrying out translation compensation on the characteristic points of the image to be processed and the image to be processed according to the image translation amount.
Optionally, the translation compensation unit is further configured to: determining a reference point of the reference image according to the feature points of the reference image in the feature matching point pair, and determining a reference point of the image to be processed according to the feature points of the image to be processed in the feature matching point pair; the image translation amount is determined based on the reference point of the reference image and the reference point of the image to be processed.
Optionally, the translation compensation unit is further configured to: calculating the centroid of the target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the centroid of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed; taking the calculated centroid of the target object in the reference image as a reference point of the reference image, and taking the calculated centroid of the target object in the image to be processed as a reference point of the image to be processed;
or, calculating the center of the target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the center of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed; and taking the center of the target object in the calculated reference image as a reference point of the reference image, and taking the center of the target object in the calculated image to be processed as a reference point of the image to be processed.
Optionally, the translation compensation unit is further configured to: calculation formula according to image translation amount
Figure BDA0002215504760000201
Calculating the translation amount of the image; t isxRepresenting the amount of image translation, T, in the X directionyThe amount of image translation in the Y direction is indicated,
Figure BDA0002215504760000202
an abscissa indicating a reference point of the reference image,
Figure BDA0002215504760000203
the abscissa representing the reference point of the image to be processed,
Figure BDA0002215504760000204
a ordinate representing a reference point of the reference image,
Figure BDA0002215504760000205
the ordinate of the reference point representing the image to be processed.
Optionally, the translation compensation unit is further configured to: translating each pixel point in the image to be processed according to the image translation amount; and translating the characteristic points of the image to be processed according to the image translation amount.
Optionally, the computing unit is further configured to: fitting formula according to similarity transformation
Figure BDA0002215504760000211
Performing similarity transformation fitting on the characteristic points of the reference image and the characteristic points of the image to be processed after the translation compensation to obtain motion estimation parameters; s, theta denote motion estimation parameters, s denote size parameters in the motion estimation parameters, and theta denotes a rotation angle parameter in the motion estimation parametersNumber, pAiI-th feature point, p 'representing a reference picture'BiAnd representing the ith characteristic point of the image to be processed after the translation compensation.
Optionally, the motion compensation unit is further configured to: and performing inverse similarity transformation compensation on the image to be processed after the translational compensation based on the motion estimation parameters.
Optionally, the motion compensation unit is further configured to: determining an inverse similarity transformation compensation matrix according to the motion estimation parameters; and processing the image to be processed after the translation compensation through the inverse similarity transformation compensation matrix so as to align the target object in the processed image with the target object in the reference image.
The device provided by the embodiment of the present invention has the same implementation principle and technical effect as the method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the method embodiments without reference to the device embodiments.
In another implementation of the present invention, there is further provided a computer storage medium having a computer program stored thereon, the computer program, when executed by a computer, performing the steps of the method of any one of the above method embodiments 2.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. An image processing method, comprising:
extracting mutually matched feature points of the target object from the reference image and the image to be processed to obtain feature matching point pairs; the reference image and the image to be processed are images obtained by shooting the target object at the same time through lenses with different focal lengths;
performing translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation;
calculating motion estimation parameters according to the characteristic points of the reference image in the characteristic matching point pair and the characteristic points of the image to be processed after the translation compensation;
performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align a target object in the image after the motion compensation with a target object in the reference image;
wherein, the step of calculating the motion estimation parameters according to the feature points of the reference image in the feature matching point pair and the feature points of the image to be processed after the translation compensation comprises:
performing similarity transformation fitting on the feature points of the reference image and the feature points of the image to be processed after the translational compensation to obtain the motion estimation parameters, wherein the motion estimation parameters include: a dimensional parameter and a rotation angle parameter;
the step of performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters comprises the following steps:
and performing inverse similarity transformation compensation on the image to be processed after the translation compensation based on the motion estimation parameters.
2. The method according to claim 1, wherein the feature points of the reference image in the feature matching point pair are uniformly distributed in the image area of the target object in the reference image, and the feature points of the image to be processed in the feature matching point pair are uniformly distributed in the image area of the target object in the image to be processed.
3. The method according to claim 1, wherein the step of performing translational compensation on the feature points of the image to be processed and the image to be processed in the pair of feature matching points based on the pair of feature matching points comprises:
determining the image translation amount according to the feature matching point pairs;
respectively carrying out translation compensation on the image to be processed and the characteristic points of the image to be processed according to the image translation amount;
respectively carrying out translation compensation on the image to be processed and the characteristic points of the image to be processed according to the image translation amount, wherein the translation compensation comprises the following steps:
calculating formula I 'according to image translation compensation'B,ts(x,y)=IB,ts(x-Tx,y-Ty) Carrying out translation compensation on the image to be processed; i'B,ts(x, y) represents the translation-compensated image to be processed, TxRepresenting the amount of image translation, T, in the X directionyIndicating the image translation amount in the Y direction;
translation compensation formula according to characteristic points
Figure FDA0002972116850000021
Carrying out translation compensation on the characteristic points of the image to be processed;
Figure FDA0002972116850000022
the abscissa representing the ith feature point of the image to be processed after the translation compensation,
Figure FDA0002972116850000023
the ordinate of the ith characteristic point of the image to be processed after the translation compensation is represented,
Figure FDA0002972116850000024
an abscissa representing an ith feature point of the image to be processed,
Figure FDA0002972116850000025
a vertical coordinate, T, representing the ith feature point of the image to be processedxRepresenting the amount of image translation, T, in said X directionyAnd the value of i is 1, 2, 3, …, n-1.
4. The method of claim 3, wherein the step of determining the amount of translation of the image based on the pairs of feature matching points comprises:
determining a reference point of the reference image according to the feature point of the reference image in the feature matching point pair, and determining a reference point of the image to be processed according to the feature point of the image to be processed in the feature matching point pair;
determining the image translation amount based on the reference point of the reference image and the reference point of the image to be processed.
5. The method according to claim 4, wherein the step of determining the reference point of the reference image according to the feature point of the reference image in the feature matching point pair, and determining the reference point of the image to be processed according to the feature point of the image to be processed in the feature matching point pair comprises:
calculating the centroid of a target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the centroid of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed;
taking the calculated centroid of the target object in the reference image as a reference point of the reference image, and taking the calculated centroid of the target object in the image to be processed as a reference point of the image to be processed;
alternatively, the first and second electrodes may be,
calculating the center of a target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the center of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed;
and taking the center of the target object in the reference image obtained by calculation as a reference point of the reference image, and taking the center of the target object in the image to be processed obtained by calculation as a reference point of the image to be processed.
6. The method of claim 4, wherein the step of determining the image translation amount based on the reference point of the reference image and the reference point of the image to be processed comprises:
calculation formula according to image translation amount
Figure FDA0002972116850000031
Calculating the image translation amount; t isxRepresenting the amount of image translation, T, in the X directionyThe amount of image translation in the Y direction is indicated,
Figure FDA0002972116850000032
an abscissa representing a reference point of the reference image,
Figure FDA0002972116850000033
an abscissa representing a reference point of the image to be processed,
Figure FDA0002972116850000034
a ordinate representing a reference point of the reference image,
Figure FDA0002972116850000035
a vertical coordinate representing a reference point of the image to be processed.
7. The method according to claim 1, wherein the step of performing similarity transformation fitting on the feature points of the reference image and the feature points of the translation-compensated image to be processed comprises:
fitting formula according to similarity transformation
Figure FDA0002972116850000041
Performing similarity transformation fitting on the characteristic points of the reference image and the characteristic points of the image to be processed after the translation compensation to obtain the motion estimation parameters; s, thetaAiI-th feature point, p 'representing the reference picture'BiAnd representing the ith characteristic point of the image to be processed after the translation compensation.
8. The method of claim 1, wherein the step of performing inverse similarity transformation compensation on the translational compensated image to be processed based on the motion estimation parameters comprises:
determining an inverse similarity transformation compensation matrix according to the motion estimation parameters;
and processing the image to be processed after the translation compensation through the inverse similarity transformation compensation matrix so as to align a target object in the processed image with a target object in the reference image.
9. An image processing apparatus characterized by comprising:
the characteristic point extraction unit is used for extracting mutually matched characteristic points of the target object from the reference image and the image to be processed to obtain characteristic matching point pairs; the reference image and the image to be processed are images obtained by shooting the target object at the same time through lenses with different focal lengths;
the translation compensation unit is used for carrying out translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation;
the calculation unit is used for calculating motion estimation parameters according to the characteristic points of the reference image in the characteristic matching point pair and the characteristic points of the image to be processed after the translation compensation;
a motion compensation unit, configured to perform motion compensation on the image to be processed after the translational compensation based on the motion estimation parameter, so as to align a target object in the image after the motion compensation with a target object in the reference image;
wherein the computing unit is further configured to: performing similarity transformation fitting on the feature points of the reference image and the feature points of the image to be processed after the translational compensation to obtain the motion estimation parameters, wherein the motion estimation parameters include: a dimensional parameter and a rotation angle parameter;
the motion compensation unit is further configured to: and performing inverse similarity transformation compensation on the image to be processed after the translation compensation based on the motion estimation parameters.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of the preceding claims 1 to 8 when executing the computer program.
11. A computer storage medium, having a computer program stored thereon, which, when executed by a computer, performs the steps of the method of any of claims 1 to 8.
CN201910914273.XA 2019-09-25 2019-09-25 Image processing method and device and electronic equipment Active CN110611767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910914273.XA CN110611767B (en) 2019-09-25 2019-09-25 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910914273.XA CN110611767B (en) 2019-09-25 2019-09-25 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110611767A CN110611767A (en) 2019-12-24
CN110611767B true CN110611767B (en) 2021-08-10

Family

ID=68893542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910914273.XA Active CN110611767B (en) 2019-09-25 2019-09-25 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110611767B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111083380B (en) * 2019-12-31 2021-06-11 维沃移动通信有限公司 Video processing method, electronic equipment and storage medium
CN113808227B (en) * 2020-06-12 2023-08-25 杭州普健医疗科技有限公司 Medical image alignment method, medium and electronic equipment
CN113973189B (en) * 2020-07-24 2022-12-16 荣耀终端有限公司 Display content switching method, device, terminal and storage medium
CN114268731B (en) * 2020-09-16 2024-01-19 北京小米移动软件有限公司 Camera switching method, camera switching device and storage medium
CN115134505B (en) * 2021-03-25 2024-06-21 北京小米移动软件有限公司 Preview picture generation method and device, electronic equipment and storage medium
CN113542625A (en) * 2021-05-28 2021-10-22 北京迈格威科技有限公司 Image processing method, device, equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1574892A (en) * 2003-05-30 2005-02-02 佳能株式会社 Photographing device and method for obtaining photographic image having image vibration correction
CN104024906A (en) * 2011-12-28 2014-09-03 奥林巴斯映像株式会社 Optical instrument, and imaging device
CN104104880A (en) * 2014-07-28 2014-10-15 深圳市中兴移动通信有限公司 Mobile terminal and shooting method thereof
CN105027167A (en) * 2013-02-28 2015-11-04 诺基亚技术有限公司 Method and apparatus for automatically rendering dolly zoom effect
CN107292872A (en) * 2017-06-16 2017-10-24 艾松涛 Image processing method/system, computer-readable recording medium and electronic equipment
CN108234879A (en) * 2018-02-02 2018-06-29 成都西纬科技有限公司 It is a kind of to obtain the method and apparatus for sliding zoom video
CN108389155A (en) * 2018-03-20 2018-08-10 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN109379537A (en) * 2018-12-30 2019-02-22 北京旷视科技有限公司 Slide Zoom effect implementation method, device, electronic equipment and computer readable storage medium
CN109922258A (en) * 2019-02-27 2019-06-21 杭州飞步科技有限公司 Electronic image stabilization method, device and the readable storage medium storing program for executing of in-vehicle camera

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130300875A1 (en) * 2010-04-23 2013-11-14 Flir Systems Ab Correction of image distortion in ir imaging
JP6525617B2 (en) * 2015-02-03 2019-06-05 キヤノン株式会社 Image processing apparatus and control method thereof
US10594945B2 (en) * 2017-04-03 2020-03-17 Google Llc Generating dolly zoom effect using light field image data
JP7045218B2 (en) * 2018-02-28 2022-03-31 キヤノン株式会社 Information processing equipment and information processing methods, programs
CN108377342B (en) * 2018-05-22 2021-04-20 Oppo广东移动通信有限公司 Double-camera shooting method and device, storage medium and terminal
CN109600548B (en) * 2018-11-30 2021-08-31 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN109656367A (en) * 2018-12-24 2019-04-19 深圳超多维科技有限公司 Image processing method, device and electronic equipment under a kind of scene applied to VR
CN109801335A (en) * 2019-01-08 2019-05-24 北京旷视科技有限公司 Image processing method, device, electronic equipment and computer storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1574892A (en) * 2003-05-30 2005-02-02 佳能株式会社 Photographing device and method for obtaining photographic image having image vibration correction
CN104024906A (en) * 2011-12-28 2014-09-03 奥林巴斯映像株式会社 Optical instrument, and imaging device
CN105027167A (en) * 2013-02-28 2015-11-04 诺基亚技术有限公司 Method and apparatus for automatically rendering dolly zoom effect
CN104104880A (en) * 2014-07-28 2014-10-15 深圳市中兴移动通信有限公司 Mobile terminal and shooting method thereof
CN107292872A (en) * 2017-06-16 2017-10-24 艾松涛 Image processing method/system, computer-readable recording medium and electronic equipment
CN108234879A (en) * 2018-02-02 2018-06-29 成都西纬科技有限公司 It is a kind of to obtain the method and apparatus for sliding zoom video
CN108389155A (en) * 2018-03-20 2018-08-10 北京奇虎科技有限公司 Image processing method, device and electronic equipment
CN109379537A (en) * 2018-12-30 2019-02-22 北京旷视科技有限公司 Slide Zoom effect implementation method, device, electronic equipment and computer readable storage medium
CN109922258A (en) * 2019-02-27 2019-06-21 杭州飞步科技有限公司 Electronic image stabilization method, device and the readable storage medium storing program for executing of in-vehicle camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Perspective distortion modeling, learning and compensation;Joachim Valente;《2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)》;20151026;全文 *
大范围运动延时影像拍摄方法研究;陶学恺 等;《现代电影技术》;20170911;全文 *

Also Published As

Publication number Publication date
CN110611767A (en) 2019-12-24

Similar Documents

Publication Publication Date Title
CN110611767B (en) Image processing method and device and electronic equipment
CN109242961B (en) Face modeling method and device, electronic equipment and computer readable medium
US11055826B2 (en) Method and apparatus for image processing
CN110730296B (en) Image processing apparatus, image processing method, and computer readable medium
US8345961B2 (en) Image stitching method and apparatus
KR102010712B1 (en) Distortion Correction Method and Terminal
CN105635588B (en) A kind of digital image stabilization method and device
CN111062881A (en) Image processing method and device, storage medium and electronic equipment
KR20120099713A (en) Algorithms for estimating precise and relative object distances in a scene
CN111935398B (en) Image processing method and device, electronic equipment and computer readable medium
CN111445537B (en) Calibration method and system of camera
CN109690568A (en) A kind of processing method and mobile device
JP2015088833A (en) Image processing device, imaging device, and image processing method
CN111866523A (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN112288664A (en) High dynamic range image fusion method and device and electronic equipment
CN113793259B (en) Image zooming method, computer device and storage medium
WO2021031210A1 (en) Video processing method and apparatus, storage medium, and electronic device
CN113286084B (en) Terminal image acquisition method and device, storage medium and terminal
WO2023221969A1 (en) Method for capturing 3d picture, and 3d photographic system
JP2017021430A (en) Panoramic video data processing device, processing method, and program
CN111885297B (en) Image definition determining method, image focusing method and device
JP2016119572A (en) Image processing apparatus, imaging apparatus, image processing method, and program
CN115278071B (en) Image processing method, device, electronic equipment and readable storage medium
CN115514895B (en) Image anti-shake method, apparatus, electronic device, and computer-readable storage medium
CN113766090B (en) Image processing method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant