CN111091513A - Image processing method, image processing device, computer-readable storage medium and electronic equipment

Info

Publication number
CN111091513A
Authority
CN
China
Prior art keywords
image
feature points
frame image
reference frame
determining
Prior art date
Legal status
Granted
Application number
CN201911312392.4A
Other languages
Chinese (zh)
Other versions
CN111091513B (en)
Inventor
贾玉虎
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911312392.4A priority Critical patent/CN111091513B/en
Publication of CN111091513A publication Critical patent/CN111091513A/en
Application granted granted Critical
Publication of CN111091513B publication Critical patent/CN111091513B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 5/73 Deblurring; Sharpening
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing device, a computer-readable storage medium and electronic equipment. The method includes: acquiring multiple frames of images of a shooting scene; determining a reference frame image from the multi-frame images, and determining feature points in the reference frame image; judging, based on the feature points, whether the shooting scene satisfies a multi-frame noise reduction processing condition; and outputting the reference frame image when the judgment result is negative. By judging through the feature points in the image whether the current shooting scene satisfies the multi-frame noise reduction processing condition, and not using the multi-frame noise reduction function when that condition is not met, the method improves imaging quality and avoids the output of blurred images.

Description

Image processing method, image processing device, computer-readable storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a computer-readable storage medium, and an electronic device.
Background
At present, when a scene is shot, the camera shakes because its carrier vibrates. Whether the camera is fixed on a building or a vehicle, mounted on equipment with a motor, or handheld, vibration of the carrier moves the camera lens, so the captured video picture shakes and the image coordinates of the pixel points formed by the same scene point in the captured multi-frame images drift over time, degrading image quality.
In the related art, multi-frame noise reduction averages the pixel points corresponding to the same physical scene point across different image frames, which yields a good noise reduction effect and a higher-quality image. In actual use, however, multi-frame noise reduction can blur the output image, for example when feature points are matched incorrectly, so the imaging effect is poor.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, a computer readable storage medium and electronic equipment, which can improve imaging quality and avoid output of blurred images.
In a first aspect, an embodiment of the present application provides an image processing method, where the image processing method includes:
acquiring a multi-frame image of a shooting scene;
determining a reference frame image from the multi-frame images, and determining characteristic points in the reference frame image;
judging whether the shooting scene meets a multi-frame noise reduction processing condition or not based on the feature points;
and when the judgment result is negative, outputting the reference frame image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring multi-frame images of a shooting scene;
the determining module is used for determining a reference frame image from the multi-frame images and determining the characteristic points in the reference frame image;
the judging module is used for judging whether the shooting scene meets the multi-frame noise reduction processing condition or not based on the characteristic points;
and the output module is used for outputting the reference frame image when the judgment result is negative.
In a third aspect, the present application provides a storage medium having a computer program stored thereon which, when run on a computer, causes the computer to execute the image processing method provided by the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a plurality of cameras, where the memory stores a computer program, and the processor executes an image processing method according to an embodiment of the present application by calling the computer program.
In the embodiment of the application, whether the current shooting scene satisfies the multi-frame noise reduction processing condition is judged through the feature points in the image, and the multi-frame noise reduction function is not used when that condition is not met, thereby improving imaging quality and avoiding the output of blurred images.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 2 is another schematic flow chart of an image processing method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Fig. 5 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
The embodiment of the application firstly provides an image processing method which can be applied to electronic equipment. The main body of the image processing method may be the image processing apparatus provided in the embodiment of the present application, or an electronic device integrated with the image processing apparatus, where the image processing apparatus may be implemented in a hardware or software manner, and the electronic device may be a device with processing capability and configured with a processor, such as a smart phone, a tablet computer, a palmtop computer, a notebook computer, or a desktop computer.
In the following, the electronic device is exemplified by a smartphone. The electronic device may include one, two, or more cameras, for example a front camera and/or a rear camera, and each of the front and rear cameras may itself include multiple cameras. The camera used for acquiring images in the embodiments of the application may be a front camera or a rear camera.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present disclosure. The image processing method can be applied to the electronic device provided by the embodiment of the application, and the flow of the image processing method provided by the embodiment of the application can be as follows:
101, acquiring multi-frame images of a shooting scene.
When acquiring the multi-frame images of the shooting scene, the electronic equipment may call the camera to shoot the scene, that is, obtain the multi-frame images by shooting, or it may acquire multiple preview frames of the scene to be shot by using the camera's preview function.
The multi-frame images of the shooting scene acquired by the electronic device may be continuously acquired frames, or discontinuous frames selected from continuously acquired images, as long as they are multiple frames of the same shooting scene whose contents are substantially the same. For example, if the multi-frame images acquired by the electronic device are all shots of a distant mountain, then even if the shooting angle and shooting area differ slightly between frames because of hand shake or vibration of the camera carrier during shooting, the electronic device can be considered to have acquired multi-frame images of the same shooting scene.
And 102, determining a reference frame image in the multi-frame image, and determining the characteristic points in the reference frame image.
The reference frame image may be one frame randomly selected from the multi-frame images, or one frame selected according to a certain selection rule. For example, the definition of the multiple frames may be evaluated by a definition evaluation method and the clearest frame selected as the reference frame image, or the frame with the highest contrast may be selected as the reference frame, and so on; the selection rule may be determined according to the current scene and the actual requirements of the user. If the user needs a high-contrast photo, the image with the highest contrast among the multi-frame images may be determined as the reference frame image; if the user needs a high-definition photo, the image with the highest definition may be determined as the reference frame image, and so on.
It should be noted that the above examples of determining the reference frame image are only exemplary and do not limit the present application. The electronic device may use various methods to determine, among the multiple frames, a frame that meets the current actual requirement as the reference frame image, and in some cases it may even determine two or more reference frame images. For example, if the user has a high requirement on image definition, the electronic device may synthesize the two clearest frames among the multi-frame images and determine the synthesized image as the reference frame image, and so on.
In an embodiment, the electronic device may acquire a plurality of frames of images of a shooting scene, analyze and learn the images of the shooting scene by using a machine learning algorithm in advance, generate a machine learning algorithm model through a self-analysis and learning process, determine an image with the highest definition in the plurality of frames of images according to a result of the machine learning algorithm model processing, and use the image as a reference frame image.
After the electronic equipment determines the reference frame image, the characteristic points are determined in the reference frame image.
The feature point is a point having a feature in the image, and may be an extreme point, or a point with a certain attribute highlighted, for example, an intersection of two lines, or a vertex of one corner. The feature points in the image can reflect the position and contour of each object in the image.
In one embodiment, for the same shooting scene, the electronic device may determine the feature points in the reference frame by using a machine learning algorithm model trained in advance. For example, for multi-frame images whose shooting scene is a human face, the electronic device may train a face feature point recognition model in advance, and determine the feature points in the reference frame image with the trained model based on gray values or boundary features of the face in the image.
In one embodiment, the electronic device may determine the feature points in the reference frame image using a feature point extraction algorithm, such as Harris corner detection or SIFT (Scale-Invariant Feature Transform). Such feature point algorithms adapt well to different environments and, while meeting real-time requirements, allow the device to stabilize images quickly and accurately in various imaging environments.
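As an illustration only, the following is a minimal sketch of this step using OpenCV's SIFT detector; the function name, the frame format (a BGR array) and the feature budget are assumptions rather than details from the patent, and Harris corner detection would be an equally valid choice.

```python
import cv2

def detect_feature_points(reference_frame, max_features=500):
    """Detect feature points in the reference frame image (sketch).

    Assumes `reference_frame` is a BGR uint8 array; SIFT is one of the
    extraction algorithms named in the text (Harris is another).
    """
    gray = cv2.cvtColor(reference_frame, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create(nfeatures=max_features)  # OpenCV >= 4.4
    keypoints = sift.detect(gray, None)
    # Only the (x, y) coordinates are needed for the later distance checks.
    return [kp.pt for kp in keypoints]
```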
And 103, judging whether the shooting scene meets the multi-frame noise reduction processing condition or not based on the characteristic points.
While the electronic equipment shoots multiple frames, a person's arm inevitably shakes slightly because of breathing and muscle contraction, and the shooting carrier may also vibrate slightly, so the image coordinates of the pixel points formed by the same scene point shift over time across the multi-frame images. The electronic equipment can obtain a clear image of the shooting scene through multi-frame noise reduction processing: the motion between images is estimated, each image is registered and interpolated to the reference frame image so that the pixel points corresponding to the same physical scene point have the same image coordinates in every registered image, and the registered and interpolated images are then averaged to reduce noise. The output image is therefore the average obtained after registering and interpolating the multi-frame images to the reference frame image, and the imaging quality is improved through noise reduction.
However, in the multi-frame noise reduction process, registering each image to the reference frame image involves feature point matching: the feature points determined in the reference frame image are matched against the other images to find the corresponding feature points, and in some scenes the error rate of this matching is high. If the electronic equipment applies multi-frame noise reduction in such scenes, the definition of the output image actually decreases and the imaging quality is hard to guarantee. By judging whether the current scene satisfies the multi-frame noise reduction processing condition, scenes can be divided into those suitable for multi-frame noise reduction and those not suitable for it. In scenes suitable for multi-frame noise reduction the electronic equipment improves imaging quality with the multi-frame noise reduction technique; in scenes not suitable for it the electronic equipment avoids multi-frame noise reduction processing and improves imaging quality in other ways.
When judging whether the shooting scene satisfies the multi-frame noise reduction processing condition, the electronic equipment can judge based on the feature points determined in the reference frame image. It can be understood that if these feature points are very close to each other in the reference frame image and exhibit a regular distribution, such as on a plaid shirt, matching errors easily occur when the reference frame image is matched against the feature points of other images of the same shooting scene; in that case the shooting scene can be considered not to satisfy the multi-frame noise reduction processing condition.
In one embodiment, the electronic device determines some feature points with close distance in the reference frame image as target feature points, and then determines whether the shooting scene meets the multi-frame noise reduction condition based on the target feature points. For example, the electronic device inputs the target feature points into a machine learning algorithm model which is learned in advance, and judges whether the target feature points in the reference frame image are arranged according to a repeated geometric figure according to the processing result of the machine learning algorithm model. And if the target feature points in the reference frame image are arranged according to the repeated geometric figure, the shooting scene is considered not to meet the multi-frame noise reduction processing condition.
And 104, when the judgment result is negative, outputting the reference frame image.
If the electronic equipment judges that the shooting scene does not satisfy the multi-frame noise reduction processing condition, applying multi-frame noise reduction would actually reduce the definition of the output image, and the imaging definition would be hard to guarantee.
In an embodiment, when the electronic device determines that the shooting scene does not satisfy the multi-frame noise reduction processing condition, the reference frame image is output directly. The reference frame image may be the clearest frame determined by evaluating the definition of the multi-frame images, so the imaging definition is guaranteed.
In an embodiment, if the electronic device determines that the shooting scene meets the multi-frame noise reduction processing condition, all images except the reference frame image in the multi-frame images are determined as current frame images, a homography matrix corresponding to each current frame image is determined, the electronic device performs image registration on each current frame image to the reference frame image based on the homography matrices, performs multi-frame noise reduction processing on the registered images, and outputs a result image after the multi-frame noise reduction processing.
As can be seen from the above, in the embodiment of the present application, multiple frames of images of a shooting scene are obtained; a reference frame image is determined from the multi-frame images, and feature points are determined in the reference frame image; whether the shooting scene satisfies the multi-frame noise reduction processing condition is judged based on the feature points; and when the judgment result is negative, the reference frame image is output. By judging through the feature points in the image whether the current shooting scene satisfies the multi-frame noise reduction processing condition, and not using the multi-frame noise reduction function when that condition is not met, the imaging quality is improved and the output of blurred images is avoided.
Referring to fig. 2, fig. 2 is another schematic flow chart of an image processing method provided in an embodiment of the present application, where the image processing method is applicable to an electronic device provided in the embodiment of the present application, and the flow of the image processing method may include:
201. the electronic equipment acquires a multi-frame image of a shooting scene.
When acquiring the multi-frame images of the shooting scene, the electronic equipment may call the camera to shoot the scene, that is, obtain the multi-frame images by shooting, or it may acquire multiple preview frames of the scene to be shot by using the camera's preview function.
The multi-frame images of the shooting scene acquired by the electronic device may be continuously acquired frames, or discontinuous frames selected from continuously acquired images, as long as they are multiple frames of the same shooting scene whose contents are substantially the same. For example, if the multi-frame images acquired by the electronic device are all shots of a distant mountain, then even if the shooting angle and shooting area differ slightly between frames because of hand shake or vibration of the camera carrier during shooting, the electronic device can be considered to have acquired multi-frame images of the same shooting scene.
202. The electronic equipment evaluates the definition of the multi-frame images, and determines the image with the highest definition as a reference frame image.
The definition of the multi-frame images acquired by the electronic equipment differs because of hand shake during shooting or vibration of the camera carrier. By evaluating the definition of the acquired multi-frame images, the electronic equipment can select the clearest frame as the reference frame image.
When an image is relatively clear, its details are rich, the feature values (such as gray scale and color) of adjacent pixels change greatly, and in the frequency domain its spectrum contains large high-frequency components. Using these characteristics, the electronic equipment can adopt various definition evaluation methods, such as a focus evaluation function or a gray gradient algorithm, when evaluating the definition of the multi-frame images. The focus evaluation function may be any one or a combination of a spectrum function, a gradient function, and an entropy function, selected as needed.
Alternatively, the electronic device may determine the definition of each of the multi-frame images and then compare them. For example, the image definition may be identified and compared using relevant image identification methods, such as a contrast method, a phase method, a high-frequency component method, a smoothing method, a threshold integration method, and/or a gray difference method, and these algorithms may be combined and improved to increase identification accuracy.
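A minimal sketch of one possible definition evaluation is given below, scoring each frame by the variance of its Laplacian (a gray-gradient measure); the function names and the simple "pick the maximum" selection are illustrative assumptions, since the text allows several evaluation functions.

```python
import cv2

def select_reference_frame(frames):
    """Pick the frame with the highest definition as the reference frame (sketch)."""
    def definition_score(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # A clear image has strong local gray-level changes, i.e. a large
        # gradient / high-frequency response, so the Laplacian variance is high.
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    scores = [definition_score(f) for f in frames]
    best_index = max(range(len(frames)), key=lambda i: scores[i])
    return frames[best_index], best_index
```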
203. The electronic equipment performs corner detection on the reference frame image to obtain corner points.
Wherein a corner point is an extreme point, i.e. a point with a particularly prominent attribute in some respect. The corner point may be the intersection of two lines or a point located on two adjacent objects with different main directions. The corner detection is the detection of a defined or detectable point, which may be a corner, an isolated point with maximum or minimum intensity on some attributes, an end point of a line segment, or a point with maximum local curvature on a curve.
In one embodiment, the electronic device may perform corner detection using the Harris corner detection algorithm. For example, the electronic device defines a rectangular window of a certain size, moves the window across the image, and determines corner points by examining the average change of the image gray values within this window. If the gray values of the image region in the window are constant, the offsets in all directions hardly change; if the window crosses an edge in the image, the offset along the edge changes little while the offset perpendicular to the edge changes greatly; if the window contains an isolated point or a corner, the offsets in all directions change greatly.
It can be understood that if the small window is moved within the reference frame image and the circled region is always the surface of the same solid-color desk, without including the desk's edges, then the gray values of the pixel points in the region circled by the window are constant, the average change of the gray values is almost 0, and the offsets in all directions hardly change; it can then be determined that the region circled by the small window contains no corner point that could represent the outline of the desk.
By using corner point detection, the electronic equipment can determine some points with representative meanings in the reference frame image and determine the points with representative meanings as characteristic points, so as to perform subsequent operations according to the characteristic points.
204. The electronic equipment performs false-detection checking and elimination on the detected corner points, so as to screen the corner points and obtain the feature points.
In one embodiment, after the electronic device determines the corner points in the reference frame image through corner detection, it rejects corner points that are unrepresentative and/or falsely detected. For example, an index such as the Harris score can be used to characterize the quality of the feature points themselves and the distances between local feature points, so as to screen the feature points out of the determined corner points.
In an embodiment, the electronic device selects a small preset region around each corner point, re-examines the change of the gray values of the pixel points in that preset region, and determines the feature points from the corner points by observing the average change of the image gray values within the region; that is, the electronic device checks the detected corner points for false detections and eliminates the unrepresentative and/or falsely detected ones.
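The following is a minimal sketch of steps 203-204 under the assumption that the screening uses a Harris-score threshold and simple non-maximum suppression; the block size, aperture and quality fraction are illustrative values, not taken from the patent.

```python
import cv2
import numpy as np

def detect_and_screen_corners(reference_frame, quality=0.01):
    """Harris corner detection followed by screening of weak detections (sketch)."""
    gray = np.float32(cv2.cvtColor(reference_frame, cv2.COLOR_BGR2GRAY))
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

    # Screening 1: discard corners whose Harris score is below a fraction
    # of the strongest response (unrepresentative / falsely detected points).
    threshold = quality * response.max()

    # Screening 2: keep only local maxima of the response so that one
    # physical corner is not reported several times.
    local_max = response == cv2.dilate(response, None)
    ys, xs = np.where((response > threshold) & local_max)
    return np.stack([xs, ys], axis=1).astype(np.float32)  # (N, 2) feature points
```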
205. The electronic device obtains the distance between every two feature points.
206. The electronic device judges whether the distance is smaller than a preset distance. If so, go to 207, otherwise, return to continue 206.
207. The electronic equipment determines the two feature points as target feature points, and adds the distance between the two feature points to a preset distance set.
It can be understood that the feature points determined by the electronic device are distributed throughout the reference frame image, more densely in some regions and more sparsely in others. Densely distributed feature points are more prone to matching errors during feature point pair matching than sparsely distributed ones.
In one embodiment, the electronic device determines some feature points with close distance in the reference frame image as target feature points, and then determines whether the shooting scene meets the multi-frame noise reduction condition based on the target feature points. For example, the electronic device obtains the distance between every two feature points, and when the distance between two feature points is smaller than a preset distance, the electronic device determines the two feature points as target feature points at the same time, and adds the distance between the two feature points to a preset distance set.
It should be noted that the present application does not limit which two feature points are obtained above; they may be any two feature points in the reference frame image. Moreover, the two feature points are not obtained only once: two different feature points of the reference frame image are obtained repeatedly, pair by pair, until all the feature points in the reference frame image have been traversed. For example, two feature points in the reference frame image are obtained arbitrarily; if the distance between them is greater than or equal to the preset distance, the process returns to 206, and the electronic device obtains the distance between another two feature points and judges whether that distance is smaller than the preset distance. If the distance between the two newly obtained feature points is smaller than the preset distance, the two feature points are determined as target feature points and the distance between another two feature points is obtained for judgment; if it is greater than or equal to the preset distance, it is not processed and the distance between another two feature points is obtained for judgment. This is repeated until the electronic equipment has gone through all pairwise combinations of the feature points in the reference frame image.
In an embodiment, when obtaining the distance between every two feature points, the electronic device first selects one feature point in the reference frame image, then determines, among the other feature points, the nearest feature point to it, and obtains the distance between the feature point and that nearest feature point.
And if the distance between the two feature points is smaller than the preset distance, the electronic equipment determines the two feature points as target feature points and adds the distance between the two feature points to a preset distance set.
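A minimal sketch of steps 205-207, following the nearest-neighbour variant just described, is given below; the array-based representation of the feature points and the return format are assumptions.

```python
import numpy as np

def collect_target_points(points, preset_distance):
    """Find closely spaced feature points and build the preset distance set (sketch)."""
    points = np.asarray(points, dtype=np.float32)
    target_indices = set()
    distance_set = []
    for i, p in enumerate(points):
        # Distance from feature point i to every other feature point.
        d = np.linalg.norm(points - p, axis=1)
        d[i] = np.inf                  # ignore the point itself
        j = int(np.argmin(d))          # its nearest neighbour
        if d[j] < preset_distance:
            target_indices.update((i, j))   # both points become target feature points
            distance_set.append(float(d[j]))
    return points[sorted(target_indices)], distance_set
```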
208. The electronic equipment determines the proportion of the distances with equal values in the preset distance set.
209. The electronic equipment judges whether the proportion is larger than a preset proportion. If so, go to step 210; otherwise, go to step 213.
210. The electronic device determines that the target feature points are arranged according to a repeated geometric figure.
It can be understood that if the target feature points exhibit a characteristic distribution in the reference frame image, for example if they are arranged in a repeating geometric figure, then some of the distances in the preset distance set must be numerically equal. If the proportion of equal distances in the whole preset distance set is large, it can be determined that the target feature points are arranged according to a repeated geometric figure.
For example, for a shooting scene of a plaid shirt, taking square checks as an example, in the reference frame image of the plaid shirt the target feature points densely distributed in the checked area mostly lie at the intersections of the lines of the shirt and are regularly arranged in repeated squares.
Thus, in the preset distance set acquired by the electronic device, most of the distances are numerically equal to the side length of the repeated squares, so when their proportion in the whole preset distance set is greater than the preset proportion, the electronic device can judge that, in the plaid-shirt shooting scene, the target feature points are arranged according to a repeated geometric figure.
It should also be noted that equality here does not require exact numerical equality; any measurement is subject to error. Thus, even if the values of two distances deviate somewhat, the two distances can be considered equal as long as the deviation is within an allowable range.
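A minimal sketch of steps 208-210 follows, assuming that "equal" means equal within a pixel tolerance; both the tolerance and the preset proportion are illustrative values, not taken from the patent.

```python
import numpy as np

def arranged_in_repeating_figure(distance_set, preset_proportion=0.6, tolerance=1.5):
    """Decide whether the target feature points form a repeated geometric figure (sketch)."""
    if not distance_set:
        return False
    d = np.asarray(distance_set, dtype=np.float32)
    # Bin the distances to the tolerance, then measure the proportion of the
    # most common value in the whole preset distance set.
    bins = np.round(d / tolerance).astype(int)
    _, counts = np.unique(bins, return_counts=True)
    proportion = counts.max() / len(d)
    return proportion > preset_proportion
```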
211. The electronic equipment judges that the shooting scene does not meet the multi-frame noise reduction processing condition.
Among densely distributed target feature points, those whose surroundings are similar to one another are more likely to be matched incorrectly than target feature points whose surroundings are dissimilar.
In an embodiment, the electronic device may determine whether the target feature points are distributed characteristically by determining whether the target feature points are arranged according to a repeated geometric figure.
The repeating geometric figure may be a repeated rectangle, triangle, circle, and so on. For example, in a plaid-shirt scene the target feature points are arranged in repeated rectangles; such a shooting scene is prone to matching errors and belongs to the shooting scenes that do not satisfy the multi-frame noise reduction processing condition.
It will be understood by those skilled in the art that the foregoing examples are for illustrative purposes only and that other non-illustrated arrangements of repeating geometric figures, with the exception of those listed above, are within the scope of the present application.
212. The electronic device outputs a reference frame image.
If the electronic equipment judges that the shooting scene does not satisfy the multi-frame noise reduction processing condition, applying multi-frame noise reduction would actually reduce the definition of the output image, and the imaging quality would be hard to guarantee.
In an embodiment, when the electronic device determines that the shooting scene does not satisfy the multi-frame noise reduction processing condition, the reference frame image is output directly; since the reference frame image is the clearest frame determined by evaluating the definition of the multi-frame images, the imaging definition is guaranteed.
213. The electronic equipment determines all images except the reference frame image in the multi-frame image as the current frame image.
When acquiring the multiple frames of images of the shooting scene, the electronic equipment determines the clearest frame among them as the reference frame image, and all the other frames are current frame images.
214. The electronic device performs image registration of the current frame image to the reference frame image.
The shooting angle or shooting area of each frame differs slightly because of hand shake or vibration of the camera carrier during shooting. Therefore, before the images are pixel-averaged, all current frame images are registered and interpolated to the reference frame image. During registration, the electronic equipment determines the homography matrix for registering each current frame image to the reference frame image and uses it to align the coordinates of the pixel points in the current frame image with the coordinates of the pixel points in the reference frame image, thereby achieving registration.
In an embodiment, the determining, by the electronic device, the homography matrix corresponding to each current frame image includes:
(1) the electronic equipment matches current frame feature points corresponding to the reference frame feature points in the current frame image, wherein each reference frame feature point is matched with the corresponding current frame feature point to form a feature point pair, and a plurality of pairs of feature point pairs are obtained.
In one embodiment, based on the consistency of the pixel blocks around a correctly matched point pair as judged visually, the electronic device finds, in the current frame image, the matching feature point B1 whose surrounding pixel block matches the pixel block within a certain range around the reference frame feature point A1 in the reference frame image, and takes B1 as the feature point corresponding to A1 in the current frame image, so that A1 and B1 form a feature point pair. For example, the electronic device may establish a feature window around any feature point of the reference frame image and determine the matching window corresponding to this feature window in the current frame image; the center point of the matching window serves as the matching feature point, and the reference frame feature point and its corresponding current-frame matching feature point form a feature point pair.
In an embodiment, after matching multiple pairs of feature points, the electronic device screens them and removes mismatches to obtain the screened feature point pairs. For example, the electronic device computes, with the Euclidean distance formula, the horizontal and vertical translation between the two points of each feature point pair in the reference frame image and the current frame image, verifies the matched feature point pairs using the normal distribution of these distances, and eliminates the mismatched pairs to obtain correctly matched feature point pairs.
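A minimal sketch of sub-step (1) is given below, assuming the feature window is matched by normalised cross-correlation inside a local search region of the current frame; the window and search radii and the acceptance score are illustrative values, not taken from the patent.

```python
import cv2
import numpy as np

def match_feature_points(reference, current, ref_points, win=10, search=40):
    """Match reference-frame feature points into the current frame (sketch)."""
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(current, cv2.COLOR_BGR2GRAY)
    h, w = ref_gray.shape
    pairs = []
    for x, y in np.int32(ref_points):
        # Feature window around the reference frame feature point A1.
        x0, x1 = max(x - win, 0), min(x + win + 1, w)
        y0, y1 = max(y - win, 0), min(y + win + 1, h)
        template = ref_gray[y0:y1, x0:x1]
        # Search region around the same coordinates in the current frame.
        sx0, sx1 = max(x - search, 0), min(x + search + 1, w)
        sy0, sy1 = max(y - search, 0), min(y + search + 1, h)
        region = cur_gray[sy0:sy1, sx0:sx1]
        if (template.size == 0 or region.shape[0] < template.shape[0]
                or region.shape[1] < template.shape[1]):
            continue
        result = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(result)
        if score > 0.8:  # keep only confident matches
            # Centre of the matching window is the matched feature point B1.
            bx = sx0 + top_left[0] + (x1 - x0) // 2
            by = sy0 + top_left[1] + (y1 - y0) // 2
            pairs.append(((float(x), float(y)), (float(bx), float(by))))
    return pairs
```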
(2) And calculating a plurality of homography matrixes corresponding to the current frame image according to the plurality of pairs of characteristic points.
The electronic equipment randomly acquires three pairs of feature point pairs in the multiple pairs of feature point pairs, and calculates a homography matrix corresponding to the current frame image according to the three acquired pairs of feature point pairs.
(3) An optimal homography matrix is determined among the plurality of homography matrices.
After acquiring the multiple homography matrices, the electronic equipment uses the Random Sample Consensus (RANSAC) algorithm: for any calculated homography matrix, it scores the matrix using the feature point pairs other than the three pairs from which it was computed, so as to obtain the optimal homography matrix, and it then performs affine transformation on the current frame image with the optimal homography matrix. For example, each time a homography matrix is calculated from three feature point pairs, the electronic device matches it against the remaining feature point pairs to obtain the matching success rate of that homography matrix in the current frame image, and determines the homography matrix with the highest matching success rate as the optimal homography matrix.
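A minimal sketch of sub-steps (2)-(3) follows; instead of coding the sample-and-score loop by hand, it delegates it to OpenCV's RANSAC-based findHomography, which likewise estimates candidate matrices from sampled point pairs and scores them with the remaining pairs. The reprojection threshold is an illustrative value.

```python
import cv2
import numpy as np

def optimal_homography(feature_point_pairs, reproj_threshold=3.0):
    """Estimate the optimal homography from matched feature point pairs (sketch).

    `feature_point_pairs` is a list of ((xr, yr), (xc, yc)) tuples; at least
    four pairs are required for a full homography.
    """
    ref_pts = np.float32([p for p, _ in feature_point_pairs]).reshape(-1, 1, 2)
    cur_pts = np.float32([q for _, q in feature_point_pairs]).reshape(-1, 1, 2)
    # Homography that maps current-frame coordinates onto the reference frame.
    H, inlier_mask = cv2.findHomography(cur_pts, ref_pts, cv2.RANSAC,
                                        reproj_threshold)
    return H, inlier_mask
```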
After the electronic device determines the corresponding homography matrix of each current frame image relative to the reference frame image, affine transformation can be performed on each current frame image based on the corresponding homography matrix, so that image registration is performed on the current frame image to the reference frame image. For example, the electronic device multiplies the coordinates of the pixel points in the current frame image by the corresponding homography matrix to perform affine transformation, obtains the coordinates of the pixel points after the affine transformation, and synthesizes the coordinates after the affine transformation of each pixel point in the current frame image to obtain an image after the current frame image is registered to the reference frame image.
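A minimal sketch of the registration itself is given below: the current frame is warped onto the reference frame's pixel grid with the optimal homography. The text describes this as an affine transformation of the current frame; a full perspective warp is used here for generality.

```python
import cv2

def register_to_reference(current_frame, homography, reference_shape):
    """Register a current frame image to the reference frame image (sketch)."""
    h, w = reference_shape[:2]
    # Every pixel of the current frame is mapped to reference-frame coordinates.
    return cv2.warpPerspective(current_frame, homography, (w, h))
```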
215. And the electronic equipment performs multi-frame noise reduction processing on the reference frame image based on the registered current frame image and outputs the multi-frame noise-reduced reference frame image.
After the current frame image is registered to the reference frame image, the positions of the pixel points in the reference frame image and the positions of the pixel points in the registered current frame image are unified, the electronic equipment averages the pixel values of the reference frame image and the registered current frame image, and outputs an imaging result after the pixel values are averaged, namely, a result image after noise reduction processing.
When registering each current frame image to the reference frame image, the electronic equipment may average the pixel values of the reference frame image and the registered current frame image after each registration, and the imaging quality of the resulting noise-reduced image improves with every round of registration and averaging. Alternatively, after all current frame images have been registered, the electronic equipment may average the pixel values of the reference frame image and all registered current frame images, and output the averaged imaging result as the noise-reduced result image.
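A minimal sketch of the pixel-value averaging in step 215, using the all-at-once variant, is shown below; the incremental variant described above can be implemented analogously.

```python
import numpy as np

def multi_frame_denoise(reference_frame, registered_frames):
    """Average the reference frame with the registered current frames (sketch)."""
    stack = np.stack([reference_frame] + list(registered_frames), axis=0)
    # Pixel positions are aligned after registration, so a plain mean
    # suppresses the noise while keeping the scene content.
    averaged = stack.astype(np.float32).mean(axis=0)
    return np.clip(averaged, 0, 255).astype(np.uint8)
```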
As can be seen from the above, in the embodiment of the application, the electronic device obtains multiple frames of images of a shooting scene; determines a reference frame image from the multi-frame images and determines feature points in the reference frame image; judges, based on the feature points, whether the shooting scene satisfies the multi-frame noise reduction processing condition; and outputs the reference frame image when the judgment result is negative. By judging through the feature points in the image whether the current shooting scene satisfies the multi-frame noise reduction processing condition, and not using the multi-frame noise reduction function when that condition is not met, the imaging quality is improved and the output of blurred images is avoided.
The embodiment of the application also provides an image processing device. Referring to fig. 3, fig. 3 is a first structural schematic diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus 300 is applied to an electronic device, and the image processing apparatus 300 includes an obtaining module 301, a determining module 302, a determining module 303, and an outputting module 304, as follows:
an obtaining module 301, configured to obtain a multi-frame image of a shooting scene;
a determining module 302, configured to determine a reference frame image from the multiple frame images, and determine feature points in the reference frame image;
the judging module 303 is configured to judge whether the shooting scene meets a multi-frame noise reduction processing condition based on the feature points;
and an output module 304, configured to output the reference frame image when the determination result is negative.
In one embodiment, the judging module 303 is configured to:
acquiring the distance between every two feature points, and determining a plurality of target feature points from the feature points according to the distance;
judging whether the target feature points are arranged according to repeated geometric figures;
and if so, judging that the shooting scene does not meet the multi-frame noise reduction processing condition.
In an embodiment, when the distance between two feature points is smaller than the preset distance, the judging module 303 determines the two feature points as target feature points and adds the distance between them to a preset distance set.
In an embodiment, when judging whether the target feature points are arranged according to a repeated geometric figure, the judging module 303 determines the proportion of distances with equal values in the preset distance set, and when the proportion is greater than the preset proportion, it judges that the target feature points are arranged according to a repeated geometric figure.
In one embodiment, when judging whether the target feature points are arranged according to a repeated geometric figure, the judging module 303 inputs the target feature points into a machine learning algorithm model learned in advance, and judges, according to the result of the model's processing, whether the target feature points are arranged according to a repeated geometric figure.
In one embodiment, when determining the reference frame image from the plurality of frame images, the determining module 302 is configured to:
and evaluating the definition of the multiple frame images, and determining the frame image with the highest definition as a reference frame image.
In one embodiment, when determining the feature points in the reference frame image, the determining module 302 is configured to:
carrying out corner detection on the reference frame image to obtain corner points;
and carrying out false-detection checking and elimination on the detected corner points so as to screen the feature points from the corner points.
It should be noted that the image processing apparatus provided in the embodiment of the present application and the image processing method in the foregoing embodiment belong to the same concept, and any method provided in the embodiment of the image processing method may be executed on the image processing apparatus, and a specific implementation process thereof is described in detail in the embodiment of the image processing method, and is not described herein again.
As can be seen from the above, in the embodiment of the present application, the obtaining module 301 obtains multiple frames of images of a shooting scene; the determining module 302 determines a reference frame image from the multi-frame images and determines feature points in the reference frame image; the judging module 303 judges, based on the feature points, whether the shooting scene satisfies the multi-frame noise reduction processing condition; and the output module 304 outputs the reference frame image when the judgment result is negative. By judging through the feature points in the image whether the current shooting scene satisfies the multi-frame noise reduction processing condition, and not using the multi-frame noise reduction function when that condition is not met, the imaging quality is improved and the output of blurred images is avoided.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, which, when the stored computer program is executed on a computer, causes the computer to execute the steps in the image processing method as provided by the embodiment of the present application. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
An electronic device is further provided in the embodiments of the present application, please refer to fig. 4, and fig. 4 is a schematic structural diagram of the electronic device provided in the embodiments of the present application. The electronic device includes a processor 401, a memory 402, a camera 403 and a display 404, wherein the processor 401 is electrically connected to the memory 402, the camera 403 and the display 404.
The processor 401 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by running or loading a computer program stored in the memory 402 and calling data stored in the memory 402.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the computer programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, a computer program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The camera 403 may include a normal color camera, or a normal color camera with a viewing angle of about 45 degrees, or a color telephoto camera with a viewing angle of less than 40 degrees, to name a few examples. There may be one or two or more cameras 403.
The display 404 may be used to display information entered by or provided to the user as well as various graphical user interfaces, which may be comprised of graphics, text, icons, video, and any combination thereof. The display 404 includes a display screen for displaying preview images.
In this embodiment, the processor 401 in the electronic device loads instructions corresponding to one or more processes of the computer program into the memory 402 according to the following steps, and the processor 401 runs the computer program stored in the memory 402, so as to implement various functions, as follows:
acquiring a multi-frame image of a shooting scene;
determining a reference frame image from the multi-frame images, and determining characteristic points in the reference frame image;
judging whether the shooting scene meets a multi-frame noise reduction processing condition or not based on the feature points;
and when the judgment result is negative, outputting the reference frame image.
Referring to fig. 5, fig. 5 is another schematic structural diagram of the electronic device according to the embodiment of the present disclosure, and the difference from the electronic device shown in fig. 4 is that the electronic device further includes components such as an input unit 405 and an output unit 406.
The input unit 405 may be used to receive input numbers, character information, or user characteristic information (such as a fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The output unit 406 may be used to display information input by the user or information provided to the user, such as a screen.
In this embodiment, the processor 401 in the electronic device loads instructions corresponding to one or more processes of the computer program into the memory 402 according to the following steps, and the processor 401 runs the computer program stored in the memory 402, so as to implement various functions, as follows:
acquiring a multi-frame image of a shooting scene;
determining a reference frame image from the multi-frame images, and determining characteristic points in the reference frame image;
judging whether the shooting scene meets a multi-frame noise reduction processing condition or not based on the feature points;
and when the judgment result is negative, outputting the reference frame image.
In an embodiment, when determining whether the shooting scene satisfies the multi-frame noise reduction processing condition based on the feature points, the processor 401 further performs:
acquiring the distance between every two feature points, and determining a plurality of target feature points from the feature points according to the distance;
judging whether the target feature points are arranged according to repeated geometric figures;
and if so, judging that the shooting scene does not meet the multi-frame noise reduction processing condition.
In one embodiment, when determining a plurality of target feature points from the feature points according to the distance, the processor 401 further performs:
and when the distance between the two feature points is smaller than the preset distance, determining the two feature points as target feature points, and adding the distance between the two feature points to a preset distance set.
In one embodiment, in determining whether the target feature points are arranged according to the repeated geometric figure, the processor 401 further performs:
determining the proportion of intervals with equal values in a preset interval set;
and when the proportion is larger than the preset proportion, judging that the target characteristic points are arranged according to a repeated geometric figure.
In one embodiment, in determining whether the target feature points are arranged according to the repeated geometric figure, the processor 401 further performs:
and inputting the target feature points into a machine learning algorithm model which is learned in advance, and judging whether the target feature points are arranged according to repeated geometric figures according to the processing result of the machine learning algorithm model.
In one embodiment, when determining the reference frame image from the plurality of frame images, the processor 401 further performs:
and evaluating the definition of the multiple frame images, and determining the frame image with the highest definition as a reference frame image.
In one embodiment, when determining the feature points in the reference frame image, the processor 401 further performs:
carrying out corner detection on the reference frame image to obtain corner points;
and carrying out false-detection checking and elimination on the detected corner points so as to screen the feature points from the corner points.
It should be noted that the electronic device provided in the embodiment of the present application and the image processing method in the foregoing embodiments belong to the same concept; any method provided in the image processing method embodiments can be executed on the electronic device, and its specific implementation process is described in detail in the image processing method embodiments and is not repeated here.
As can be seen from the above, in the embodiment of the application, the electronic device obtains multiple frames of images of a shooting scene; determines a reference frame image from the multi-frame images and determines feature points in the reference frame image; judges, based on the feature points, whether the shooting scene satisfies the multi-frame noise reduction processing condition; and outputs the reference frame image when the judgment result is negative. By judging through the feature points in the image whether the current shooting scene satisfies the multi-frame noise reduction processing condition, and not using the multi-frame noise reduction function when that condition is not met, the imaging quality is improved and the output of blurred images is avoided.
It should be noted that, for the image processing method of the embodiments of the present application, a person skilled in the art can understand that all or part of the process of implementing the image processing method may be completed by controlling relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, such as a memory of an electronic device, and executed by at least one processor in the electronic device, and the execution process may include the process of the image processing method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
In the image processing apparatus of the embodiments of the present application, the functional modules may be integrated into one processing chip, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If the integrated module is implemented as a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The foregoing has described in detail an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device according to embodiments of the present application. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core ideas. Meanwhile, a person skilled in the art may, according to the ideas of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An image processing method, comprising:
acquiring a multi-frame image of a shooting scene;
determining a reference frame image from the multi-frame images, and determining feature points in the reference frame image;
determining, based on the feature points, whether the shooting scene satisfies a multi-frame noise reduction processing condition;
and when the result of the determination is negative, outputting the reference frame image.
2. The image processing method according to claim 1, wherein the determining, based on the feature points, whether the shooting scene satisfies a multi-frame noise reduction processing condition comprises:
acquiring the distance between every two of the feature points, and determining a plurality of target feature points from the feature points according to the distances;
determining whether the target feature points are arranged in a repeating geometric pattern;
and if so, determining that the shooting scene does not satisfy the multi-frame noise reduction processing condition.
3. The image processing method according to claim 2, wherein the determining a plurality of target feature points from the feature points according to the distances comprises:
when the distance between two feature points is smaller than a preset distance, determining the two feature points as target feature points, and adding the distance between the two feature points to a preset distance set.
4. The image processing method according to claim 3, wherein the determining whether the target feature points are arranged in a repeating geometric pattern comprises:
determining the proportion of equal-valued distances in the preset distance set;
and when the proportion is larger than a preset proportion, determining that the target feature points are arranged in a repeating geometric pattern.
5. The image processing method according to claim 3, wherein the determining whether the target feature points are arranged in a repeating geometric pattern comprises:
inputting the target feature points into a pre-trained machine learning model, and determining whether the target feature points are arranged in a repeating geometric pattern according to the output of the model.
6. The image processing method according to claim 1, wherein the determining a reference frame image from the multi-frame images comprises:
evaluating the sharpness of the multiple frame images, and determining the frame image with the highest sharpness as the reference frame image.
7. The image processing method according to any one of claims 1 to 6, wherein the determining feature points in the reference frame image comprises:
performing corner detection on the reference frame image to obtain corner points;
and eliminating falsely detected corner points so as to screen the feature points from the corner points.
8. An image processing apparatus, comprising:
an acquisition module, configured to acquire a multi-frame image of a shooting scene;
a determining module, configured to determine a reference frame image from the multi-frame images and to determine feature points in the reference frame image;
a judging module, configured to determine, based on the feature points, whether the shooting scene satisfies a multi-frame noise reduction processing condition;
and an output module, configured to output the reference frame image when the result of the determination is negative.
9. A computer-readable storage medium, having stored thereon a computer program which, when run on a computer, causes the computer to execute the image processing method according to any one of claims 1 to 7.
10. An electronic device, comprising a processor and a memory that are electrically connected, wherein the memory stores a computer program, and the processor executes the image processing method according to any one of claims 1 to 7 by invoking the computer program.
CN201911312392.4A 2019-12-18 2019-12-18 Image processing method, device, computer readable storage medium and electronic equipment Active CN111091513B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911312392.4A CN111091513B (en) 2019-12-18 2019-12-18 Image processing method, device, computer readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111091513A true CN111091513A (en) 2020-05-01
CN111091513B CN111091513B (en) 2023-07-25

Family

ID=70396002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911312392.4A Active CN111091513B (en) 2019-12-18 2019-12-18 Image processing method, device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111091513B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007151008A (en) * 2005-11-30 2007-06-14 Casio Comput Co Ltd Image processor and program
US20100245604A1 (en) * 2007-12-03 2010-09-30 Jun Ohmiya Image processing device, photographing device, reproducing device, integrated circuit, and image processing method
CN104604214A (en) * 2012-09-25 2015-05-06 三星电子株式会社 Method and apparatus for generating photograph image
US20160301840A1 (en) * 2013-12-06 2016-10-13 Huawei Device Co., Ltd. Photographing Method for Dual-Lens Device and Dual-Lens Device
US20170019616A1 (en) * 2014-05-15 2017-01-19 Huawei Technologies Co., Ltd. Multi-frame noise reduction method, and terminal
WO2017215501A1 (en) * 2016-06-15 2017-12-21 深圳市万普拉斯科技有限公司 Method and device for image noise reduction processing and computer storage medium
CN107230192A (en) * 2017-05-31 2017-10-03 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and mobile terminal
CN107205116A (en) * 2017-06-13 2017-09-26 广东欧珀移动通信有限公司 Image-selecting method and Related product
US20190130532A1 (en) * 2017-11-01 2019-05-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image-processing method, apparatus and device
CN108898567A (en) * 2018-09-20 2018-11-27 北京旷视科技有限公司 Image denoising method, apparatus and system
CN109348088A (en) * 2018-11-22 2019-02-15 Oppo广东移动通信有限公司 Image denoising method, device, electronic equipment and computer readable storage medium
CN110349163A (en) * 2019-07-19 2019-10-18 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112488027A (en) * 2020-12-10 2021-03-12 Oppo(重庆)智能科技有限公司 Noise reduction method, electronic equipment and computer storage medium
CN112488027B (en) * 2020-12-10 2024-04-30 Oppo(重庆)智能科技有限公司 Noise reduction method, electronic equipment and computer storage medium

Also Published As

Publication number Publication date
CN111091513B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN111091590B (en) Image processing method, device, storage medium and electronic equipment
US10887519B2 (en) Method, system and apparatus for stabilising frames of a captured video sequence
US8126206B2 (en) Image processing apparatus, image processing method, and program
CN110852997B (en) Dynamic image definition detection method and device, electronic equipment and storage medium
JP5500163B2 (en) Image processing system, image processing method, and image processing program
CN106934806B (en) It is a kind of based on text structure without with reference to figure fuzzy region dividing method out of focus
EP2639769A2 (en) Image synthesis device and computer program for image synthesis
US20180068451A1 (en) Systems and methods for creating a cinemagraph
EP3093822A1 (en) Displaying a target object imaged in a moving picture
JP4631973B2 (en) Image processing apparatus, image processing apparatus control method, and image processing apparatus control program
CN111028276A (en) Image alignment method and device, storage medium and electronic equipment
WO2019123554A1 (en) Image processing device, image processing method, and recording medium
JP2018137636A (en) Image processing device and image processing program
CN111091513B (en) Image processing method, device, computer readable storage medium and electronic equipment
Ha et al. Embedded panoramic mosaic system using auto-shot interface
JP6403207B2 (en) Information terminal equipment
Satiro et al. Super-resolution of facial images in forensics scenarios
JP6478282B2 (en) Information terminal device and program
CN107251089B (en) Image processing method for motion detection and compensation
CN115294493A (en) Visual angle path acquisition method and device, electronic equipment and medium
JP5051671B2 (en) Information processing apparatus, information processing method, and program
US11790483B2 (en) Method, apparatus, and device for identifying human body and computer readable storage medium
JP6717769B2 (en) Information processing device and program
Chen et al. Applying Image Processing Technology to Automatically Detect and Adjust Paper Benchmark for Printing Machine.
Kooi et al. Colour descriptors for tracking in spatial augmented reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant