CN110611767A - Image processing method and device and electronic equipment
- Publication number
- CN110611767A (application number CN201910914273.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- processed
- target object
- compensation
- translation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/207—Analysis of motion for motion estimation over a hierarchy of resolutions
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/682—Vibration or motion blur correction
- H04N23/685—Vibration or motion blur correction performed by mechanical compensation
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
Abstract
The invention provides an image processing method, an image processing apparatus, and an electronic device. The image processing method includes: extracting mutually matched feature points of a target object from a reference image and an image to be processed to obtain feature matching point pairs; performing translation compensation on the image to be processed and on its feature points based on the feature matching point pairs; calculating motion estimation parameters from the feature points of the reference image and the feature points of the translation-compensated image to be processed; and performing motion compensation on the translation-compensated image to be processed based on the motion estimation parameters, so that the target object in the motion-compensated image is aligned with the target object in the reference image. With this image processing method, the target object in the resulting image is aligned with the target object in the reference image, so that the target object remains smooth and stable when the picture is switched between lenses with different focal lengths.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
Dollyzoom is a special effect in photography that keeps the size of the shot subject unchanged in the frame while the background field of view changes dramatically (for example, the background field of view expands or compresses).
At present, in film shooting, the Dollyzoom effect is usually achieved with an optical Dollyzoom product. The specific process is as follows: while shooting the target, the focal length of the camera lens is adjusted, and at the same time a rail car (dolly) carrying the camera is moved in the direction opposite to the zooming direction of the lens, producing the Dollyzoom effect in the captured footage. In practice, some camera lenses cannot adjust their focal length (for example, camera lenses mounted on an unmanned aerial vehicle). To give the final footage the Dollyzoom effect, multiple lenses with different focal lengths (each focal length being fixed) are then generally used to capture the images, achieving the zooming by switching between them; a product adopting this image capturing mode is called a digital Dollyzoom product. However, because parameters such as focal length and optical center position differ between the lenses, parallax exists in the overlapping part of the fields of view of the pictures imaged by different lenses. Consequently, when the picture is switched between lenses, unnatural effects such as discontinuity and jumping of foreground objects occur, and the final footage compares poorly with images shot by an optical Dollyzoom product.
In the prior art, there is no effective solution to these unnatural phenomena, such as the discontinuity and jumping of foreground objects in the picture, that occur when the picture is switched between different lenses.
Disclosure of Invention
In view of the above, the present invention provides an image processing method, an image processing apparatus, and an electronic device, so that when the picture is switched between different lenses, the foreground object in the picture remains continuous and natural.
In a first aspect, an embodiment of the present invention provides an image processing method, including: extracting mutually matched feature points of a target object from a reference image and an image to be processed to obtain feature matching point pairs, the reference image and the image to be processed being images obtained by shooting the target object at the same time through lenses with different focal lengths; performing translation compensation, based on the feature matching point pair, on the image to be processed and on the feature points of the image to be processed in the feature matching point pair, to obtain the translation-compensated image to be processed and its translation-compensated feature points; calculating motion estimation parameters according to the feature points of the reference image in the feature matching point pair and the feature points of the translation-compensated image to be processed; and performing motion compensation on the translation-compensated image to be processed based on the motion estimation parameters, so as to align the target object in the motion-compensated image with the target object in the reference image.
Further, the feature points of the reference image in the feature matching point pair are uniformly distributed in the image area of the target object in the reference image, and the feature points of the image to be processed in the feature matching point pair are uniformly distributed in the image area of the target object in the image to be processed.
Further, the step of performing translation compensation on the feature points of the image to be processed and the image to be processed in the feature matching point pair based on the feature matching point pair includes: determining the image translation amount according to the feature matching point pairs; and respectively carrying out translation compensation on the image to be processed and the characteristic points of the image to be processed according to the image translation amount.
Further, the step of determining the image translation amount according to the feature matching point pairs comprises: determining a reference point of the reference image according to the feature point of the reference image in the feature matching point pair, and determining a reference point of the image to be processed according to the feature point of the image to be processed in the feature matching point pair; determining the image translation amount based on the reference point of the reference image and the reference point of the image to be processed.
Further, the step of determining the reference point of the reference image according to the feature point of the reference image in the feature matching point pair, and determining the reference point of the image to be processed according to the feature point of the image to be processed in the feature matching point pair includes: calculating the centroid of a target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the centroid of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed; taking the calculated centroid of the target object in the reference image as a reference point of the reference image, and taking the calculated centroid of the target object in the image to be processed as a reference point of the image to be processed;
or, calculating the center of the target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the center of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed; and taking the center of the target object in the reference image obtained by calculation as a reference point of the reference image, and taking the center of the target object in the image to be processed obtained by calculation as a reference point of the image to be processed.
Further, the step of determining the image translation amount based on the reference point of the reference image and the reference point of the image to be processed includes: calculating the image translation amount according to the image translation amount calculation formula $T_x = x_A - x_B$, $T_y = y_A - y_B$; where $T_x$ represents the image translation amount in the X direction, $T_y$ represents the image translation amount in the Y direction, $x_A$ and $y_A$ represent the abscissa and ordinate of the reference point of the reference image, and $x_B$ and $y_B$ represent the abscissa and ordinate of the reference point of the image to be processed.
Further, the step of respectively performing translation compensation on the image to be processed and the feature points of the image to be processed according to the image translation amount comprises: translating each pixel point in the image to be processed according to the image translation amount; and translating the characteristic points of the image to be processed according to the image translation amount.
Further, the step of calculating the motion estimation parameters according to the feature points of the reference image in the feature matching point pair and the feature points of the image to be processed after the translation compensation includes: performing similarity transformation fitting on the feature points of the reference image and the feature points of the image to be processed after the translation compensation according to the similarity transformation fitting formula $(s, \theta) = \arg\min_{s,\theta} \sum_i \lVert p_{Ai} - s R(\theta)\, p'_{Bi} \rVert^2$ to obtain the motion estimation parameters; where $s$ and $\theta$ are the motion estimation parameters ($s$ the scale parameter and $\theta$ the rotation angle parameter), $R(\theta)$ is the two-dimensional rotation matrix, $p_{Ai}$ represents the $i$-th feature point of the reference image, and $p'_{Bi}$ represents the $i$-th feature point of the image to be processed after the translation compensation.
Further, the step of performing motion compensation on the translational compensated image to be processed based on the motion estimation parameter includes: and performing inverse similarity transformation compensation on the image to be processed after the translation compensation based on the motion estimation parameters.
Further, the step of performing inverse similarity transformation compensation on the image to be processed after the translational compensation based on the motion estimation parameters comprises: determining an inverse similarity transformation compensation matrix according to the motion estimation parameters; and processing the image to be processed after the translation compensation through the inverse similarity transformation compensation matrix so as to align a target object in the processed image with a target object in the reference image.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including: the characteristic point extraction unit is used for extracting mutually matched characteristic points of the target object from the reference image and the image to be processed to obtain characteristic matching point pairs; the reference image and the image to be processed are images obtained by shooting the target object at the same time through lenses with different focal lengths; the translation compensation unit is used for carrying out translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation; the calculation unit is used for calculating motion estimation parameters according to the characteristic points of the reference image in the characteristic matching point pair and the characteristic points of the image to be processed after the translation compensation; and the motion compensation unit is used for performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align a target object in the image after the motion compensation with a target object in the reference image.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable medium having non-volatile program code executable by a processor, where the program code causes the processor to perform the steps of the method according to any one of the first aspect.
In the embodiment of the invention, mutually matched feature points of a target object are first extracted from a reference image and an image to be processed to obtain feature matching point pairs. Then, based on the feature matching point pairs, translation compensation is performed on the image to be processed and on its feature points, yielding the translation-compensated image to be processed and its translation-compensated feature points. Next, motion estimation parameters are calculated from the feature points of the reference image and the feature points of the translation-compensated image to be processed. Finally, motion compensation is performed on the translation-compensated image to be processed based on the motion estimation parameters, so that the target object in the motion-compensated image is aligned with the target object in the reference image. As can be seen from the above, after the image to be processed is processed by the image processing method of the embodiment of the present invention, the target object in the resulting image is aligned with the target object in the reference image. Therefore, when the picture is switched from the reference image to the processed image, the target object remains smooth, continuous, stable, and natural; that is, the target object remains smooth and stable when the picture is switched between lenses with different focal lengths. This solves the prior-art technical problem that foreground objects in the picture appear discontinuous, jumping, and otherwise unnatural when the picture is switched between lenses with different focal lengths.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for performing translation compensation on an image to be processed and feature points of the image to be processed according to an embodiment of the present invention;
fig. 4 is a flowchart of a method for performing inverse similarity transformation compensation on a translational compensated image to be processed based on a motion estimation parameter according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
first, an electronic device 100 for implementing an embodiment of the present invention, which can be used to execute an image processing method according to embodiments of the present invention, is described with reference to fig. 1.
As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more memories 104, an input device 106, an output device 108, and a camera 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), or an ASIC (Application-Specific Integrated Circuit). The processor 102 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The memory 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, or flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may execute them to implement the client-side functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The camera 110 is configured to capture the reference image and the image to be processed. After the images it captures are processed by the image processing method, the target object in the resulting image is aligned with the target object in the reference image. For example, the camera may capture an image desired by the user (e.g., a photograph or a video); after the image is processed by the image processing method, the target object in the resulting image is aligned with the target object in the reference image. The camera may also store the captured image in the memory 104 for use by other components.
Exemplarily, the electronic device for implementing the image processing method according to the embodiment of the present invention may be implemented as a smart mobile terminal such as a smart phone, a tablet computer, etc., and may also be implemented as any other device having computing capability.
Example 2:
according to an embodiment of the present invention, there is provided an embodiment of an image processing method, it should be noted that the steps shown in the flowchart of the drawings may be executed in a computer system such as a set of computer executable instructions, and that although a logical order is shown in the flowchart, in some cases, the steps shown or described may be executed in an order different from that here.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention, as shown in fig. 2, the method including the steps of:
step S202, extracting mutually matched characteristic points of the target object from the reference image and the image to be processed to obtain characteristic matching point pairs.
The reference image and the image to be processed are images obtained by shooting the target object at the same time through lenses with different focal lengths. The target object may be any real object, such as a person, an animal, or a vehicle, and there may be one or more target objects.
When a movie with the Dollyzoom effect is shot through lenses with different focal lengths, the imaging picture needs to be switched between the lenses. Suppose it has been determined that at time ts (where ts refers to any time), the imaging picture shot by lens A (i.e., the reference image) should be switched to the imaging picture shot by lens B (i.e., the image to be processed). However, because parameters such as focal length and optical center position differ between the lenses, switching directly from the picture shot by lens A to the picture shot by lens B makes foreground objects in the picture appear discontinuous at the moment of switching.
Based on this, the inventor designed the image processing method of this embodiment. It can be applied to a processor that is communicatively connected to the lenses with different focal lengths and processes the pictures shot by those lenses online in real time; it can also be applied to a computer that post-processes the pictures shot by the lenses with different focal lengths, so that the foreground target remains smooth and continuous when the picture is switched between lenses with different focal lengths. The embodiment of the present invention does not specifically limit the implementation.
The feature matching point pair includes: the characteristic points of the reference image and the characteristic points of the image to be processed corresponding to the characteristic points of the reference image. Specifically, the feature points of the reference image are feature points of a target object in the reference image, and the feature points of the image to be processed are feature points of the target object in the image to be processed.
And S204, performing translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation.
Considering that directly switching from the reference image to the image to be processed would make the target object in the picture discontinuous, translation compensation needs to be performed on the image to be processed based on the feature matching point pairs, so that the pixel points of the target object in the translation-compensated image to be processed and the pixel points of the target object in the reference image are aligned in position as a whole.
However, this positional alignment alone cannot guarantee that the target object stays smooth and continuous during picture switching; the pixel points of the target object in the final image must also be aligned in form (scale and orientation) with those of the target object in the reference image. Motion compensation of the form is therefore also required. To enable it, translation compensation must also be performed on the feature points of the image to be processed based on the feature matching point pairs, yielding the translation-compensated feature points; this process is described in detail below.
And step S206, calculating motion estimation parameters according to the characteristic points of the reference image in the characteristic matching point pair and the characteristic points of the image to be processed after the translation compensation.
And after the characteristic points of the image to be processed after the translational compensation are obtained, calculating motion estimation parameters according to the characteristic points of the image to be processed after the translational compensation and the characteristic points of the reference image.
The motion estimation parameters are used to correct the form (i.e., the scale and rotation) of the pixel points of the translation-compensated image to be processed.
And step S208, performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align the target object in the image after the motion compensation with the target object in the reference image.
After the motion-compensated image is obtained, the reference image shot by lens A can be switched directly to the motion-compensated image at time ts, so that the target object stays smooth and continuous in the same display area when the picture is switched between different lenses. The above alignment can be understood as the target object in the motion-compensated image and the target object in the reference image being aligned as a whole.
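Composing steps S202 to S208, the overall flow can be sketched as follows. This is a minimal Python sketch under the centroid variant of the reference point; every helper function here is a hypothetical stand-in, with possible implementations sketched beside the corresponding steps later in this embodiment.

```python
import numpy as np

def align_to_reference(ref_img, src_img):
    # S202: matched feature points {p_Ai} <-> {p_Bi} of the target object
    pts_a, pts_b = extract_matched_feature_points(ref_img, src_img)
    # S204: reference points (here: centroids) and translation compensation
    c_a = np.float32(polygon_centroid(pts_a))   # reference point of I_A,ts
    c_b = np.float32(polygon_centroid(pts_b))   # reference point of I_B,ts
    src_shifted, pts_b_shifted = translation_compensate(src_img, pts_b, c_a, c_b)
    # S206: motion estimation about the now-common reference point
    s, theta = fit_similarity(pts_a - c_a, pts_b_shifted - c_a)
    # S208: inverse similarity transformation compensation
    return apply_inverse_similarity(src_shifted, s, theta, c_a)
```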
In the embodiment of the invention, mutually matched feature points of a target object are first extracted from a reference image and an image to be processed to obtain feature matching point pairs. Then, based on the feature matching point pairs, translation compensation is performed on the image to be processed and on its feature points, yielding the translation-compensated image to be processed and its translation-compensated feature points. Next, motion estimation parameters are calculated from the feature points of the reference image and the feature points of the translation-compensated image to be processed. Finally, motion compensation is performed on the translation-compensated image to be processed based on the motion estimation parameters, so that the target object in the motion-compensated image is aligned with the target object in the reference image. As can be seen from the above, after the image to be processed is processed by the image processing method of the embodiment of the present invention, the target object in the resulting image is aligned with the target object in the reference image. Therefore, when the picture is switched from the reference image to the processed image, the target object remains smooth, continuous, stable, and natural; that is, the target object remains smooth and stable when the picture is switched between lenses with different focal lengths. This solves the prior-art technical problem that foreground objects in the picture appear discontinuous, jumping, and otherwise unnatural when the picture is switched between lenses with different focal lengths.
The foregoing briefly introduces the image processing method of the present invention, and the details thereof are described in detail below.
In this embodiment, given step S202 above, one implementation manner of extracting mutually matched feature points of the target object in the reference image and the image to be processed includes:
and extracting the characteristic points of the target object in the reference image and the image to be processed by adopting a characteristic extraction algorithm to obtain characteristic matching point pairs.
The characteristic points of the reference image in the characteristic matching point pair are uniformly distributed in the image area of the target object in the reference image, and the characteristic points of the image to be processed in the characteristic matching point pair are uniformly distributed in the image area of the target object in the image to be processed.
In the embodiment of the invention, a feature extraction algorithm is used to process the reference image and the image to be processed: the feature points of the target object in the reference image are extracted, and then the feature points of the target object in the image to be processed are extracted. A mesh-flow technique is fused into the algorithm, so that the extracted feature points of the reference image and of the image to be processed are uniformly distributed over the region of the target object (which makes the subsequent transformation more accurate) and correspond one to one, finally yielding the feature matching point pairs.
For convenience of description, in this embodiment the reference image is the image obtained by shooting the target object with lens A at time ts, denoted $I_{A,ts}$; the image to be processed is the image obtained by shooting the target object with lens B at time ts, denoted $I_{B,ts}$.
The extracted feature matching point pairs are $\{p_{Ai} \rightarrow p_{Bi}\}$, where $\{p_{Ai}\}$ denotes the feature points of the reference image, $\{p_{Bi}\}$ denotes the feature points of the image to be processed, and $i = 1, 2, 3, \dots, n-1$.
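As an illustration, feature matching point pairs of this form could be obtained with a generic detector-plus-matcher. The sketch below uses ORB with a Lowe ratio test as a hedged stand-in: the patent's extractor fuses mesh-flow to enforce the uniform distribution over the target object region, which this sketch does not do, and the OpenCV usage shown is an assumption of the sketch rather than part of the patent.

```python
import cv2
import numpy as np

def extract_matched_feature_points(ref_img, src_img, max_pairs=200):
    # Detect and match feature points between the reference image and the
    # image to be processed; ORB + ratio test stands in for the patent's
    # mesh-flow-fused extractor and does not enforce a uniform spatial
    # distribution over the target object region.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(ref_img, None)
    kp_b, des_b = orb.detectAndCompute(src_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < 0.75 * n.distance]       # Lowe ratio test
    good = sorted(good, key=lambda m: m.distance)[:max_pairs]
    p_a = np.float32([kp_a[m.queryIdx].pt for m in good])  # {p_Ai}
    p_b = np.float32([kp_b[m.trainIdx].pt for m in good])  # {p_Bi}
    return p_a, p_b
```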
In this embodiment, given the above step S204, an implementation manner of performing translational compensation on the to-be-processed image and the feature points of the to-be-processed image in the feature matching point pair based on the feature matching point pair is described with reference to fig. 3, and includes the following steps:
and S301, determining the image translation amount according to the feature matching point pairs.
Specifically, the process of determining the image translation amount includes the following steps (1) and (2):
(1) and determining a reference point of the reference image according to the feature point of the reference image in the feature matching point pair, and determining a reference point of the image to be processed according to the feature point of the image to be processed in the feature matching point pair.
In the embodiment of the present invention, two ways of determining the reference point are given, and the two ways are described below respectively.
The first method is as follows: case where the reference point is the centroid of the target object: and calculating the centroid of the target object in the reference image according to the coordinates of the characteristic points of the reference image, calculating the centroid of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed, taking the calculated centroid of the target object in the reference image as the reference point of the reference image, and taking the calculated centroid of the target object in the image to be processed as the reference point of the image to be processed.
Specifically, the centroid of the target object in the reference image and the centroid of the target object in the image to be processed are calculated according to the centroid calculation formula
$C_x = \dfrac{\sum_{i=1}^{n-1} (P_i^x + P_{i+1}^x)\, W_i}{3 \sum_{i=1}^{n-1} W_i}, \qquad C_y = \dfrac{\sum_{i=1}^{n-1} (P_i^y + P_{i+1}^y)\, W_i}{3 \sum_{i=1}^{n-1} W_i};$
where $C_x$ represents the abscissa of the centroid, $C_y$ represents the ordinate of the centroid, $P_i^x$ represents the abscissa of the $i$-th feature point, $P_{i+1}^x$ represents the abscissa of the $(i+1)$-th feature point, $P_i^y$ represents the ordinate of the $i$-th feature point, $P_{i+1}^y$ represents the ordinate of the $(i+1)$-th feature point, and $W_i$ represents a weight coefficient, with $W_i = P_i^x P_{i+1}^y - P_{i+1}^x P_i^y$.
The calculated centroid of the target object in the reference image is taken as the reference point of the reference image, and the calculated centroid of the target object in the image to be processed is taken as the reference point of the image to be processed.
It should be noted that when $i = n-1$ (i.e., $P_i = P_{n-1}$ is the last feature point), $P_{i+1}$ is the first feature point $P_1$ (i.e., $P_n = P_1$).
Taking the reference image $I_{A,ts}$ with feature points $\{p_{Ai}\}$ and the image to be processed $I_{B,ts}$ with feature points $\{p_{Bi}\}$ as an example, the calculation of the centroids is as follows: when calculating the centroid of the target object in the reference image $I_{A,ts}$, the centroid calculation formula is applied with $C_x$ and $C_y$ denoting the abscissa and ordinate of the centroid of the target object in the reference image and with $P_i$ and $P_{i+1}$ ranging over the feature points of the reference image; when calculating the centroid of the target object in the image to be processed $I_{B,ts}$, the same formula is applied with $C_x$ and $C_y$ denoting the abscissa and ordinate of the centroid of the target object in the image to be processed and with $P_i$ and $P_{i+1}$ ranging over the feature points of the image to be processed.
and then, taking the calculated centroid of the target object in the reference image as a reference point of the reference image, and taking the calculated centroid of the target object in the image to be processed as a reference point of the image to be processed.
The second method comprises the following steps: case where the reference point is the center of the target object: calculating the center of a target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the center of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed; and taking the center of the target object in the calculated reference image as a reference point of the reference image, and taking the center of the target object in the calculated image to be processed as a reference point of the image to be processed.
Specifically, the center of the target object in the reference image and the center of the target object in the image to be processed are calculated according to the center calculation formula $C_x = \frac{1}{n-1} \sum_{i=1}^{n-1} P_i^x$, $C_y = \frac{1}{n-1} \sum_{i=1}^{n-1} P_i^y$; where $C_x$ represents the abscissa of the center, $C_y$ represents the ordinate of the center, $P_i^x$ represents the abscissa of the $i$-th feature point, and $P_i^y$ represents the ordinate of the $i$-th feature point.
And taking the center of the target object in the calculated reference image as a reference point of the reference image, and taking the center of the target object in the calculated image to be processed as a reference point of the image to be processed.
Taking the reference image $I_{A,ts}$ with feature points $\{p_{Ai}\}$ and the image to be processed $I_{B,ts}$ with feature points $\{p_{Bi}\}$ as an example, the calculation of the centers is as follows: when calculating the center of the target object in the reference image $I_{A,ts}$, the center calculation formula is applied with $C_x$ and $C_y$ denoting the abscissa and ordinate of the center of the target object in the reference image and with $P_i$ ranging over the feature points of the reference image; when calculating the center of the target object in the image to be processed $I_{B,ts}$, the same formula is applied with $C_x$ and $C_y$ denoting the abscissa and ordinate of the center of the target object in the image to be processed and with $P_i$ ranging over the feature points of the image to be processed.
And then, taking the center of the target object in the calculated reference image as a reference point of the reference image, and taking the center of the target object in the calculated image to be processed as a reference point of the image to be processed.
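A minimal sketch of this second variant, assuming the center is the arithmetic mean of the feature-point coordinates:

```python
import numpy as np

def feature_center(pts):
    # Center of the target object, assumed to be the arithmetic mean of
    # the n-1 feature-point coordinates.
    return float(pts[:, 0].mean()), float(pts[:, 1].mean())
```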
(2) The image translation amount is determined based on the reference point of the reference image and the reference point of the image to be processed.
Specifically, the image translation amount is calculated according to the image translation amount calculation formula $T_x = x_A - x_B$, $T_y = y_A - y_B$; where $T_x$ represents the image translation amount in the X direction, $T_y$ represents the image translation amount in the Y direction, $x_A$ and $y_A$ represent the abscissa and ordinate of the reference point of the reference image, and $x_B$ and $y_B$ represent the abscissa and ordinate of the reference point of the image to be processed.
And S302, respectively carrying out translation compensation on the image to be processed and the characteristic points of the image to be processed according to the image translation amount.
Specifically, each pixel point in the image to be processed is translated according to the image translation amount; and translating the characteristic points of the image to be processed according to the image translation amount.
When performing translation compensation on the image to be processed, the image is translation-compensated according to the image translation compensation calculation formula $I'_{B,ts}(x, y) = I_{B,ts}(x - T_x,\ y - T_y)$; where $I'_{B,ts}(x, y)$ represents the image to be processed after the translation compensation, $T_x$ represents the image translation amount in the X direction, and $T_y$ represents the image translation amount in the Y direction.
When the image translation amount is the translation compensation amount calculated based on the mass center, after the image to be processed is subjected to translation compensation, the mass center of the target object in the image to be processed after translation compensation is overlapped with the mass center of the target object in the reference image;
and when the image translation amount is the translation compensation amount calculated on the basis of the center, after the image to be processed is subjected to translation compensation, the center of the target object in the image to be processed after the translation compensation is overlapped with the center of the target object in the reference image.
In fact, after the above-mentioned translation compensation is completed, the alignment in position of each pixel point of the target object in the image to be processed after the translation compensation and each pixel point of the target object in the reference image with respect to the reference point is realized.
When performing translation compensation on the feature points of the image to be processed, the feature points are translation-compensated according to the feature point translation compensation formula $p'^{x}_{Bi} = p^{x}_{Bi} + T_x$, $p'^{y}_{Bi} = p^{y}_{Bi} + T_y$; where $p'^{x}_{Bi}$ and $p'^{y}_{Bi}$ represent the abscissa and ordinate of the $i$-th feature point of the image to be processed after the translation compensation, $p^{x}_{Bi}$ and $p^{y}_{Bi}$ represent the abscissa and ordinate of the $i$-th feature point of the image to be processed, $T_x$ represents the image translation amount in the X direction, $T_y$ represents the image translation amount in the Y direction, and $i = 1, 2, 3, \dots, n-1$.
After the feature points of the image to be processed after the translational compensation are obtained, the motion estimation parameters can be further calculated according to the feature points of the image to be processed after the translational compensation and the feature points of the reference image in the feature matching point pair, and in an optional implementation manner, the process of calculating the motion estimation parameters includes the following steps:
fitting formula according to similarity transformationAnd performing similarity transformation fitting on the characteristic points of the reference image and the characteristic points of the image to be processed after the translation compensation to obtain motion estimation parameters.
Wherein s, theta denote motion estimation parameters, s denotes a size parameter in the motion estimation parameters, theta denotes a rotation angle parameter in the motion estimation parameters, and pAiI-th feature point, p 'representing a reference picture'BiAnd representing the ith characteristic point of the image to be processed after the translation compensation.
In an embodiment of the invention, the motion estimation uses a similarity transformation fit, in terms ofCarrying out similarity transformation fitting on the characteristic points of the reference image and the characteristic points of the image to be processed after the translation compensation, and solvingAt the minimum, the corresponding values of s, θ, where, the abscissa representing the ith feature point of the reference image,the ordinate of the i-th feature point of the reference image is represented,the abscissa representing the ith feature point of the image to be processed after the translation compensation,and representing the ordinate of the ith characteristic point of the image to be processed after the translation compensation.
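This two-parameter least-squares problem has a closed form. The sketch below assumes both point sets are expressed relative to the common reference point (the model has no translation term, so the fit is only meaningful about that point):

```python
import numpy as np

def fit_similarity(pts_a, pts_b):
    # Closed-form least squares for p_Ai ~ s * R(theta) * p'_Bi, with both
    # point sets taken relative to the common reference point.
    ax, ay = pts_a[:, 0], pts_a[:, 1]
    bx, by = pts_b[:, 0], pts_b[:, 1]
    d = np.sum(ax * bx + ay * by)     # accumulated dot products
    c = np.sum(ay * bx - ax * by)     # accumulated cross products
    theta = np.arctan2(c, d)          # residual-minimizing rotation angle
    s = np.hypot(d, c) / np.sum(bx ** 2 + by ** 2)  # matching scale
    return float(s), float(theta)
```

Expanding the objective reduces it to maximizing $D\cos\theta + C\sin\theta$ with $D$ the summed dot products and $C$ the summed cross products, which gives $\theta = \operatorname{atan2}(C, D)$ and $s = \sqrt{D^2 + C^2} / \sum_i \lVert p'_{Bi} \rVert^2$; that is exactly what the code computes.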
It should be noted that, besides the similarity transformation fitting above, other motion models may be used for the motion estimation, for example affine, perspective, or projective models; however, the similarity transformation fit is the most reliable of these, and the embodiment of the present invention does not specifically limit the motion estimation.
After the motion estimation parameters are obtained, the form of the pixel points of the translation-compensated image to be processed can be corrected (i.e., motion compensation) based on the motion estimation parameters. In step S208 above, the step of performing motion compensation on the translation-compensated image to be processed based on the motion estimation parameters includes: performing inverse similarity transformation compensation on the translation-compensated image to be processed based on the motion estimation parameters, which, referring to fig. 4, includes the following steps.
Step S401, determining an inverse similarity transformation compensation matrix according to the motion estimation parameters.
And step S402, processing the image to be processed after the translation compensation through the inverse similarity transformation compensation matrix so as to align the target object in the processed image with the target object in the reference image.
Specifically, the inverse similarity transformation compensation matrix may be
$M = \dfrac{1}{s} R(\theta)^{-1} = \dfrac{1}{s} \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix};$
the translation-compensated image to be processed is then processed through the inverse similarity transformation compensation matrix, with the processing result
$I''_{B,ts}(x, y) = I'_{B,ts}(x', y'),$
where $I''_{B,ts}(x, y)$ represents the processed image and $(x', y')^{\mathsf T} = M\,(x, y)^{\mathsf T}$, the coordinates being taken relative to the common reference point.
The resulting $I''_{B,ts}(x, y)$ is thus aligned with the target object in $I_{A,ts}$, so that when lens A is switched to lens B at time ts in the digital Dollyzoom processing, the target object in the picture remains smooth and stable.
Example 3:
an embodiment of the present invention further provides an image processing apparatus, which is mainly used for executing the image processing method provided by the foregoing content of the embodiment of the present invention, and the image processing apparatus provided by the embodiment of the present invention is specifically described below.
Fig. 5 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention, as shown in fig. 5, the image processing apparatus mainly includes: a feature point extraction unit 10, a translation compensation unit 20, a calculation unit 30, and a motion compensation unit 40, wherein:
the characteristic point extraction unit is used for extracting mutually matched characteristic points of the target object from the reference image and the image to be processed to obtain characteristic matching point pairs; the reference image and the image to be processed are images obtained by shooting a target object at the same time through lenses with different focal lengths;
the translation compensation unit is used for carrying out translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation;
the computing unit is used for computing motion estimation parameters according to the feature points of the reference image in the feature matching point pair and the feature points of the image to be processed after the translation compensation;
and the motion compensation unit is used for performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align the target object in the image after the motion compensation with the target object in the reference image.
In the embodiment of the invention, mutually matched feature points of a target object are first extracted from a reference image and an image to be processed to obtain feature matching point pairs. Then, based on the feature matching point pairs, translation compensation is performed on the image to be processed and on its feature points, yielding the translation-compensated image to be processed and its translation-compensated feature points. Next, motion estimation parameters are calculated from the feature points of the reference image and the feature points of the translation-compensated image to be processed. Finally, motion compensation is performed on the translation-compensated image to be processed based on the motion estimation parameters, so that the target object in the motion-compensated image is aligned with the target object in the reference image. As can be seen from the above, after the image to be processed is processed by the image processing method of the embodiment of the present invention, the target object in the resulting image is aligned with the target object in the reference image. Therefore, when the picture is switched from the reference image to the processed image, the target object remains smooth, continuous, stable, and natural; that is, the target object remains smooth and stable when the picture is switched between lenses with different focal lengths. This solves the prior-art technical problem that foreground objects in the picture appear discontinuous, jumping, and otherwise unnatural when the picture is switched between lenses with different focal lengths.
Optionally, the feature points of the reference image in the feature matching point pair are uniformly distributed in the image region of the target object in the reference image, and the feature points of the image to be processed in the feature matching point pair are uniformly distributed in the image region of the target object in the image to be processed.
Optionally, the translation compensation unit is further configured to: determining the image translation amount according to the feature matching point pairs; and respectively carrying out translation compensation on the characteristic points of the image to be processed and the image to be processed according to the image translation amount.
Optionally, the translation compensation unit is further configured to: determining a reference point of the reference image according to the feature points of the reference image in the feature matching point pair, and determining a reference point of the image to be processed according to the feature points of the image to be processed in the feature matching point pair; the image translation amount is determined based on the reference point of the reference image and the reference point of the image to be processed.
Optionally, the translation compensation unit is further configured to: calculating the centroid of the target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the centroid of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed; taking the calculated centroid of the target object in the reference image as a reference point of the reference image, and taking the calculated centroid of the target object in the image to be processed as a reference point of the image to be processed;
or, calculating the center of the target object in the reference image according to the coordinates of the feature points of the reference image, and calculating the center of the target object in the image to be processed according to the coordinates of the feature points of the image to be processed; and taking the calculated center of the target object in the reference image as the reference point of the reference image, and taking the calculated center of the target object in the image to be processed as the reference point of the image to be processed.
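Either reference point is a one-line computation over the matched coordinates. A minimal sketch follows; reading the "center" variant as the midpoint of the feature points' bounding box is an assumption, since the patent does not define how the center is computed.

```python
import numpy as np

def centroid(points):
    # Centroid variant: arithmetic mean of the (x, y) feature coordinates.
    # points: (N, 2) float array of matched feature points on one image.
    return points.mean(axis=0)

def bbox_center(points):
    # Center variant (assumed reading): midpoint of the feature points'
    # axis-aligned bounding box.
    return (points.min(axis=0) + points.max(axis=0)) / 2.0
```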
Optionally, the translation compensation unit is further configured to: calculate the image translation amount according to the image translation amount calculation formula $T_x = \bar{x}_A - \bar{x}_B$, $T_y = \bar{y}_A - \bar{y}_B$; where $T_x$ represents the image translation amount in the X direction, $T_y$ represents the image translation amount in the Y direction, $\bar{x}_A$ represents the abscissa of the reference point of the reference image, $\bar{x}_B$ represents the abscissa of the reference point of the image to be processed, $\bar{y}_A$ represents the ordinate of the reference point of the reference image, and $\bar{y}_B$ represents the ordinate of the reference point of the image to be processed.
Optionally, the translation compensation unit is further configured to: translating each pixel point in the image to be processed according to the image translation amount; and translating the characteristic points of the image to be processed according to the image translation amount.
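Given the two reference points, the translation amount and the compensation itself reduce to a vector difference, one affine warp of the image, and the same shift applied to the feature points. A sketch, again with illustrative names (the reference points are as returned by the reference-point sketch above):

```python
import cv2
import numpy as np

def translation_compensate(proc_img, proc_pts, ref_point_a, ref_point_b):
    # Image translation amount: reference point of the reference image
    # minus reference point of the image to be processed.
    tx, ty = ref_point_a - ref_point_b
    h, w = proc_img.shape[:2]
    m = np.float32([[1, 0, tx], [0, 1, ty]])            # pure-translation affine matrix
    shifted_img = cv2.warpAffine(proc_img, m, (w, h))   # translate every pixel
    shifted_pts = proc_pts + np.float32([tx, ty])       # translate the feature points
    return shifted_img, shifted_pts
```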
Optionally, the calculation unit is further configured to: perform similarity transformation fitting on the feature points of the reference image and the feature points of the translation-compensated image to be processed according to the similarity transformation fitting formula $(s, \theta) = \arg\min_{s,\theta} \sum_i \left\| p'_{Bi} - s\,R(\theta)\,p_{Ai} \right\|^2$ to obtain the motion estimation parameters; where $s$ and $\theta$ denote the motion estimation parameters, $s$ denoting the size parameter and $\theta$ the rotation angle parameter, $p_{Ai}$ represents the $i$-th feature point of the reference image, and $p'_{Bi}$ represents the $i$-th feature point of the translation-compensated image to be processed.
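One way to realize this fitting is a closed-form 2D Procrustes-style least-squares solution. The sketch below assumes the fitted parameters describe the processed frame relative to the reference, i.e. it minimizes $\sum_i \| p'_{Bi} - s\,R(\theta)\,p_{Ai} \|^2$ over the centered point sets — an assumption consistent with the inverse compensation described next, since the source does not spell out the direction of the fitting formula.

```python
import numpy as np

def fit_similarity(pts_a, pts_b_shifted):
    # Closed-form least-squares fit of (s, theta) such that
    # p'_Bi ≈ s * R(theta) * p_Ai for the centered point sets — a 2D
    # Procrustes fit without translation, since translation has already
    # been compensated.
    qa = pts_a - pts_a.mean(axis=0)                  # centered reference points
    qb = pts_b_shifted - pts_b_shifted.mean(axis=0)  # centered compensated points
    dot = np.sum(qa * qb)                            # sum of dot products
    cross = np.sum(qb[:, 1] * qa[:, 0] - qb[:, 0] * qa[:, 1])  # sum of cross terms
    theta = np.arctan2(cross, dot)
    s = (np.cos(theta) * dot + np.sin(theta) * cross) / np.sum(qa * qa)
    return s, theta
```

Intuitively, $s$ absorbs the scale difference between the two focal lengths and $\theta$ any residual rotation between the two frames.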
Optionally, the motion compensation unit is further configured to: and performing inverse similarity transformation compensation on the image to be processed after the translational compensation based on the motion estimation parameters.
Optionally, the motion compensation unit is further configured to: determining an inverse similarity transformation compensation matrix according to the motion estimation parameters; and processing the image to be processed after the translation compensation through the inverse similarity transformation compensation matrix so as to align the target object in the processed image with the target object in the reference image.
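Under the convention chosen in the fitting sketch above, the inverse similarity compensation is a single warp by scale $1/s$ and rotation $-\theta$. Taking the reference point as the rotation/scaling center is an assumption, as is the angle sign, which depends on the image coordinate convention.

```python
import cv2
import numpy as np

def inverse_similarity_compensate(shifted_img, s, theta, center):
    # Undo the estimated motion: scale by 1/s and rotate by -theta about
    # `center` (assumed here to be the reference point), so the target
    # object matches its pose in the reference image.
    h, w = shifted_img.shape[:2]
    inv = cv2.getRotationMatrix2D(center, -np.degrees(theta), 1.0 / s)
    return cv2.warpAffine(shifted_img, inv, (w, h))
```

Chaining the sketches — matching, reference points, translation compensation, similarity fit, inverse warp — reproduces the full alignment flow summarized above.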
The device provided by the embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiments; for the sake of brevity, for parts of the device embodiment that are not mentioned, reference may be made to the corresponding content in the foregoing method embodiments.
In another embodiment of the present invention, there is further provided a computer storage medium having a computer program stored thereon, the computer program, when executed by a computer, performing the steps of the method of any one of the above method embodiments.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or an electrical connection; or as a direct connection, an indirect connection through an intervening medium, or internal communication between two elements. Those skilled in the art can understand the specific meanings of the above terms in the present invention on a case-by-case basis.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (13)
1. An image processing method, comprising:
extracting mutually matched feature points of the target object from the reference image and the image to be processed to obtain feature matching point pairs; the reference image and the image to be processed are images obtained by shooting the target object at the same time through lenses with different focal lengths;
performing translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation;
calculating motion estimation parameters according to the characteristic points of the reference image in the characteristic matching point pair and the characteristic points of the image to be processed after the translation compensation;
and performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align a target object in the image after the motion compensation with a target object in the reference image.
2. The method according to claim 1, wherein the feature points of the reference image in the feature matching point pair are uniformly distributed in the image area of the target object in the reference image, and the feature points of the image to be processed in the feature matching point pair are uniformly distributed in the image area of the target object in the image to be processed.
3. The method according to claim 1, wherein the step of performing translational compensation on the feature points of the image to be processed and the image to be processed in the pair of feature matching points based on the pair of feature matching points comprises:
determining the image translation amount according to the feature matching point pairs;
and respectively carrying out translation compensation on the image to be processed and the characteristic points of the image to be processed according to the image translation amount.
4. The method of claim 3, wherein the step of determining the amount of translation of the image based on the pairs of feature matching points comprises:
determining a reference point of the reference image according to the feature point of the reference image in the feature matching point pair, and determining a reference point of the image to be processed according to the feature point of the image to be processed in the feature matching point pair;
determining the image translation amount based on the reference point of the reference image and the reference point of the image to be processed.
5. The method according to claim 4, wherein the step of determining the reference point of the reference image according to the feature point of the reference image in the feature matching point pair, and determining the reference point of the image to be processed according to the feature point of the image to be processed in the feature matching point pair comprises:
calculating the centroid of a target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the centroid of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed;
taking the calculated centroid of the target object in the reference image as a reference point of the reference image, and taking the calculated centroid of the target object in the image to be processed as a reference point of the image to be processed;
or,
calculating the center of a target object in the reference image according to the coordinates of the characteristic points of the reference image, and calculating the center of the target object in the image to be processed according to the coordinates of the characteristic points of the image to be processed;
and taking the center of the target object in the reference image obtained by calculation as a reference point of the reference image, and taking the center of the target object in the image to be processed obtained by calculation as a reference point of the image to be processed.
6. The method of claim 4, wherein the step of determining the image translation amount based on the reference point of the reference image and the reference point of the image to be processed comprises:
calculating the image translation amount according to the image translation amount calculation formula $T_x = \bar{x}_A - \bar{x}_B$, $T_y = \bar{y}_A - \bar{y}_B$; wherein $T_x$ represents the image translation amount in the X direction, $T_y$ represents the image translation amount in the Y direction, $\bar{x}_A$ represents the abscissa of the reference point of the reference image, $\bar{x}_B$ represents the abscissa of the reference point of the image to be processed, $\bar{y}_A$ represents the ordinate of the reference point of the reference image, and $\bar{y}_B$ represents the ordinate of the reference point of the image to be processed.
7. The method according to claim 3, wherein the step of performing translation compensation on the image to be processed and the feature points of the image to be processed according to the image translation amount respectively comprises:
translating each pixel point in the image to be processed according to the image translation amount;
and translating the characteristic points of the image to be processed according to the image translation amount.
8. The method according to claim 1, wherein the step of calculating the motion estimation parameters according to the feature points of the reference image in the feature matching point pair and the feature points of the translation-compensated image to be processed comprises:
performing similarity transformation fitting on the feature points of the reference image and the feature points of the translation-compensated image to be processed according to the similarity transformation fitting formula $(s, \theta) = \arg\min_{s,\theta} \sum_i \left\| p'_{Bi} - s\,R(\theta)\,p_{Ai} \right\|^2$ to obtain the motion estimation parameters; wherein $s$ and $\theta$ denote the motion estimation parameters, $s$ being the size parameter and $\theta$ the rotation angle parameter, $p_{Ai}$ represents the $i$-th feature point of the reference image, and $p'_{Bi}$ represents the $i$-th feature point of the translation-compensated image to be processed.
9. The method of claim 1, wherein the step of motion compensating the translational compensated image to be processed based on the motion estimation parameters comprises:
and performing inverse similarity transformation compensation on the image to be processed after the translation compensation based on the motion estimation parameters.
10. The method of claim 9, wherein the step of performing inverse similarity transformation compensation on the translational compensated image to be processed based on the motion estimation parameters comprises:
determining an inverse similarity transformation compensation matrix according to the motion estimation parameters;
and processing the image to be processed after the translation compensation through the inverse similarity transformation compensation matrix so as to align a target object in the processed image with a target object in the reference image.
11. An image processing apparatus characterized by comprising:
the characteristic point extraction unit is used for extracting mutually matched characteristic points of the target object from the reference image and the image to be processed to obtain characteristic matching point pairs; the reference image and the image to be processed are images obtained by shooting the target object at the same time through lenses with different focal lengths;
the translation compensation unit is used for carrying out translation compensation on the image to be processed and the feature points of the image to be processed in the feature matching point pair based on the feature matching point pair to obtain the image to be processed after the translation compensation and the feature points of the image to be processed after the translation compensation;
the calculation unit is used for calculating motion estimation parameters according to the characteristic points of the reference image in the characteristic matching point pair and the characteristic points of the image to be processed after the translation compensation;
and the motion compensation unit is used for performing motion compensation on the image to be processed after the translational compensation based on the motion estimation parameters so as to align a target object in the image after the motion compensation with a target object in the reference image.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 10 when executing the computer program.
13. A computer storage medium, having a computer program stored thereon, which when executed by a computer performs the steps of the method of any of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910914273.XA CN110611767B (en) | 2019-09-25 | 2019-09-25 | Image processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110611767A true CN110611767A (en) | 2019-12-24 |
CN110611767B CN110611767B (en) | 2021-08-10 |
Family
ID=68893542
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910914273.XA Active CN110611767B (en) | 2019-09-25 | 2019-09-25 | Image processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110611767B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111083380A (en) * | 2019-12-31 | 2020-04-28 | 维沃移动通信有限公司 | Video processing method, electronic equipment and storage medium |
CN113542625A (en) * | 2021-05-28 | 2021-10-22 | 北京迈格威科技有限公司 | Image processing method, device, equipment and storage medium |
CN113808227A (en) * | 2020-06-12 | 2021-12-17 | 杭州普健医疗科技有限公司 | Medical image alignment method, medium and electronic device |
CN113973189A (en) * | 2020-07-24 | 2022-01-25 | 荣耀终端有限公司 | Display content switching method, device, terminal and storage medium |
CN114268731A (en) * | 2020-09-16 | 2022-04-01 | 北京小米移动软件有限公司 | Camera switching method, camera switching device and storage medium |
CN115134505A (en) * | 2021-03-25 | 2022-09-30 | 北京小米移动软件有限公司 | Preview picture generation method and device, electronic equipment and storage medium |
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1574892A (en) * | 2003-05-30 | 2005-02-02 | 佳能株式会社 | Photographing device and method for obtaining photographic image having image vibration correction |
US20130300875A1 (en) * | 2010-04-23 | 2013-11-14 | Flir Systems Ab | Correction of image distortion in ir imaging |
CN104024906A (en) * | 2011-12-28 | 2014-09-03 | 奥林巴斯映像株式会社 | Optical instrument, and imaging device |
CN105027167A (en) * | 2013-02-28 | 2015-11-04 | 诺基亚技术有限公司 | Method and apparatus for automatically rendering dolly zoom effect |
CN104104880A (en) * | 2014-07-28 | 2014-10-15 | 深圳市中兴移动通信有限公司 | Mobile terminal and shooting method thereof |
US20160225167A1 (en) * | 2015-02-03 | 2016-08-04 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
US20180288335A1 (en) * | 2017-04-03 | 2018-10-04 | Google Llc | Generating dolly zoom effect using light field image data |
CN107292872A (en) * | 2017-06-16 | 2017-10-24 | 艾松涛 | Image processing method/system, computer-readable recording medium and electronic equipment |
CN108234879A (en) * | 2018-02-02 | 2018-06-29 | 成都西纬科技有限公司 | It is a kind of to obtain the method and apparatus for sliding zoom video |
US20190265876A1 (en) * | 2018-02-28 | 2019-08-29 | Canon Kabushiki Kaisha | Information processing apparatus and control method thereof |
CN108389155A (en) * | 2018-03-20 | 2018-08-10 | 北京奇虎科技有限公司 | Image processing method, device and electronic equipment |
CN108377342A (en) * | 2018-05-22 | 2018-08-07 | Oppo广东移动通信有限公司 | double-camera photographing method, device, storage medium and terminal |
CN109600548A (en) * | 2018-11-30 | 2019-04-09 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
CN109656367A (en) * | 2018-12-24 | 2019-04-19 | 深圳超多维科技有限公司 | Image processing method, device and electronic equipment under a kind of scene applied to VR |
CN109379537A (en) * | 2018-12-30 | 2019-02-22 | 北京旷视科技有限公司 | Slide Zoom effect implementation method, device, electronic equipment and computer readable storage medium |
CN109801335A (en) * | 2019-01-08 | 2019-05-24 | 北京旷视科技有限公司 | Image processing method, device, electronic equipment and computer storage medium |
CN109922258A (en) * | 2019-02-27 | 2019-06-21 | 杭州飞步科技有限公司 | Electronic image stabilization method, device and the readable storage medium storing program for executing of in-vehicle camera |
Non-Patent Citations (2)
Title |
---|
JOACHIM VALENTE: "Perspective distortion modeling, learning and compensation", 《2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW)》 * |
陶学恺 et al.: "Research on shooting methods for wide-range motion time-lapse imaging", 《现代电影技术》 (Modern Film Technology) *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111083380A (en) * | 2019-12-31 | 2020-04-28 | 维沃移动通信有限公司 | Video processing method, electronic equipment and storage medium |
CN111083380B (en) * | 2019-12-31 | 2021-06-11 | 维沃移动通信有限公司 | Video processing method, electronic equipment and storage medium |
CN113808227A (en) * | 2020-06-12 | 2021-12-17 | 杭州普健医疗科技有限公司 | Medical image alignment method, medium and electronic device |
CN113808227B (en) * | 2020-06-12 | 2023-08-25 | 杭州普健医疗科技有限公司 | Medical image alignment method, medium and electronic equipment |
CN113973189A (en) * | 2020-07-24 | 2022-01-25 | 荣耀终端有限公司 | Display content switching method, device, terminal and storage medium |
CN114268731A (en) * | 2020-09-16 | 2022-04-01 | 北京小米移动软件有限公司 | Camera switching method, camera switching device and storage medium |
CN114268731B (en) * | 2020-09-16 | 2024-01-19 | 北京小米移动软件有限公司 | Camera switching method, camera switching device and storage medium |
CN115134505A (en) * | 2021-03-25 | 2022-09-30 | 北京小米移动软件有限公司 | Preview picture generation method and device, electronic equipment and storage medium |
CN113542625A (en) * | 2021-05-28 | 2021-10-22 | 北京迈格威科技有限公司 | Image processing method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110611767B (en) | 2021-08-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110611767B (en) | Image processing method and device and electronic equipment | |
CN109242961B (en) | Face modeling method and device, electronic equipment and computer readable medium | |
CN110730296B (en) | Image processing apparatus, image processing method, and computer readable medium | |
US11055826B2 (en) | Method and apparatus for image processing | |
US8345961B2 (en) | Image stitching method and apparatus | |
KR102010712B1 (en) | Distortion Correction Method and Terminal | |
CN105635588B (en) | A kind of digital image stabilization method and device | |
CN111062881A (en) | Image processing method and device, storage medium and electronic equipment | |
CN111340737B (en) | Image correction method, device and electronic system | |
CN111935398B (en) | Image processing method and device, electronic equipment and computer readable medium | |
CN111445537B (en) | Calibration method and system of camera | |
CN111866523B (en) | Panoramic video synthesis method and device, electronic equipment and computer storage medium | |
CN109690568A (en) | A kind of processing method and mobile device | |
WO2017128750A1 (en) | Image collection method and image collection device | |
WO2021031210A1 (en) | Video processing method and apparatus, storage medium, and electronic device | |
CN113793259B (en) | Image zooming method, computer device and storage medium | |
CN113286084B (en) | Terminal image acquisition method and device, storage medium and terminal | |
WO2023221969A1 (en) | Method for capturing 3d picture, and 3d photographic system | |
CN112288664A (en) | High dynamic range image fusion method and device and electronic equipment | |
JP2017021430A (en) | Panoramic video data processing device, processing method, and program | |
WO2022000176A1 (en) | Infrared image processing method, electronic device, and computer-readable storage medium | |
CN113542625B (en) | Image processing method, device, equipment and storage medium | |
JP2016119572A (en) | Image processing apparatus, imaging apparatus, image processing method, and program | |
CN115278071B (en) | Image processing method, device, electronic equipment and readable storage medium | |
CN115514895B (en) | Image anti-shake method, apparatus, electronic device, and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||