CN113409375A - Image processing method, image processing apparatus, and non-volatile storage medium - Google Patents


Info

Publication number: CN113409375A (application CN202110874352.XA); granted as CN113409375B
Authority: CN (China)
Prior art keywords: images, image, determining, definition, calibration
Legal status: Active; granted (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 张鎏锟, 熊剑平
Current and original assignee: Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd; priority to CN202110874352.XA

Classifications

    • G06T 7/33 — Image analysis; determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06N 3/045 — Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods


Abstract

The application discloses an image processing method, an image processing apparatus, and a non-volatile storage medium. The method comprises the following steps: acquiring a plurality of images and aligning the same content of each of the plurality of images to obtain a plurality of position-transformed images; masking the position-transformed images to obtain a plurality of images containing the same area content; determining definition reference information for each of the plurality of images based on the plurality of images containing the same area content; and determining, based on the definition reference information of each image, an image whose definition meets a preset condition from the plurality of images. The application thereby solves the problems in the related art of high computational complexity and low computational speed when acquiring a clear picture from a plurality of continuously shot images.

Description

Image processing method, image processing apparatus, and non-volatile storage medium
Technical Field
The present application relates to the field of video image processing technologies, and in particular, to an image processing method and apparatus, and a non-volatile storage medium.
Background
When a dynamic object is shot continuously, a clear picture must be selected from the multiple images obtained. One method in the related art obtains image edges through traditional filtering and then aggregates the edges into an image sharpness measure; however, this method must run on a CPU, does not support multi-batch operation, and is slow. Methods that compute sharpness with a depth model have also appeared, but to guarantee accuracy the models are often very large, so the computation speed is unsatisfactory and the generalization is weak.
For the problems of high computational complexity and low computational speed when acquiring a clear picture from multiple continuously shot images, no effective solution has yet been proposed.
Disclosure of Invention
The application provides an image processing method, an image processing device and a nonvolatile storage medium, which are used for solving the problems of high calculation complexity and low calculation speed of acquiring a clear picture from a plurality of images obtained by continuous shooting in the related art.
According to one aspect of the present application, an image processing method is provided. The method comprises the following steps: acquiring a plurality of images, and aligning the same content of each image in the plurality of images to obtain a plurality of position-converted images; masking the images after the position transformation to obtain a plurality of images containing the same area content; determining definition reference information of each image in the plurality of images based on the plurality of images containing the same area content; and determining an image with the definition meeting a preset condition from the plurality of images based on the definition reference information of each image.
Optionally, the obtaining a plurality of images, and aligning the same content of each of the plurality of images to obtain a plurality of position-transformed images includes: acquiring a plurality of first images obtained by continuously shooting a target object; grouping a plurality of first images according to the shooting time sequence, and acquiring a group of images from the grouped images to obtain a plurality of images; and determining a calibration image from the plurality of images, and aligning the same content of each image in the plurality of images based on the calibration image to obtain a plurality of position-transformed images.
Optionally, determining the calibration image from the plurality of images comprises: acquiring a time section between the shooting time of the first image and the shooting time of the last image in the plurality of images to obtain a target time section; and acquiring a target image with the shooting time in the target time zone from the plurality of images, and determining the target image as a calibration image.
Optionally, based on the calibration image, performing position alignment on the same content of each of the multiple images, and obtaining multiple position-transformed images includes: performing feature extraction on each image in the plurality of images to obtain feature points of each image; respectively matching the characteristic points of each non-calibrated image in the plurality of images with the characteristic points of the calibrated image to obtain a plurality of groups of matched characteristic point pair sets, wherein the non-calibrated images are images except the calibrated image in the plurality of images, and the matched characteristic point pair sets are sets formed by the characteristic points with the same content of the non-calibrated image and the calibrated image; and performing position conversion on the feature points of each non-calibrated image based on each group of matched feature point pair set to obtain a plurality of images with converted positions.
Optionally, based on each group of matching feature point pair sets, performing position conversion on the feature points of each non-calibration image to obtain a plurality of position-converted images includes: calculating a homography matrix according to each group of matched characteristic point pair sets; and performing position conversion on the characteristic points of the non-calibration images corresponding to the matched characteristic point pairs according to the homography matrix to obtain the images after position conversion.
Optionally, before performing mask processing on a plurality of position-transformed images to obtain a plurality of images containing the same area content, the method further includes: acquiring a target area formed in the position change process of each image after the position conversion to obtain a plurality of target areas, wherein the target areas are areas with pixel values smaller than a preset value; and constructing an image mask according to the plurality of target areas, wherein the image mask is a mask used in the mask processing.
Optionally, determining the sharpness reference information of each of the plurality of images based on the plurality of images containing the same region content includes: inputting a plurality of images containing the same area content into an image processing model to obtain definition reference information of each image in the plurality of images; the image processing model is obtained by the following steps: determining a target convolution kernel according to a second derivative calculation formula of pixel values of the image; and determining the weight parameters of the convolution layer of the neural network model based on the target convolution kernel to obtain an image processing model.
Optionally, the definition reference information includes a definition reference value representing a definition of the corresponding image, and determining, based on the definition reference information of each image, an image whose definition meets a preset condition from the plurality of images includes: determining the variance of the definition reference value of each image, and determining the image corresponding to the definition reference value with the maximum variance as the image with the definition meeting the preset condition; or determining the image corresponding to the definition reference value which is greater than the definition reference value threshold value as the image with the definition meeting the preset condition; or based on the size of the definition reference value of each image, sorting the definition reference values of the plurality of images, and determining the image corresponding to the definition reference value sorted at the designated position as the image with the definition meeting the preset condition.
According to another aspect of the present application, there is provided an image processing apparatus. The device includes: the first acquisition unit is used for acquiring a plurality of images and aligning the same content of each image in the plurality of images to obtain a plurality of position-converted images; the first processing unit is used for carrying out mask processing on the images after the position transformation to obtain a plurality of images containing the same area content; the second processing unit is used for determining the definition reference information of each image in the plurality of images based on the plurality of images containing the same area content; and the first determining unit is used for determining the image with the definition meeting the preset condition from the plurality of images based on the definition reference information of each image.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein the program controls an apparatus in which the non-volatile storage medium is located to execute an image processing method when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a processor and a memory; the memory has stored therein computer readable instructions, and the processor is configured to execute the computer readable instructions, wherein the computer readable instructions when executed perform an image processing method.
Through the application, the following steps are adopted: acquiring a plurality of images and aligning the same content of each of the plurality of images to obtain a plurality of position-transformed images; masking the position-transformed images to obtain a plurality of images containing the same area content; determining definition reference information for each of the plurality of images based on the plurality of images containing the same area content; and determining, based on the definition reference information of each image, an image whose definition meets a preset condition from the plurality of images. This solves the problems in the related art of high computational complexity and low computational speed when acquiring a clear picture from a plurality of continuously shot images: the same content of each image is extracted to obtain a plurality of images containing the same area content, and the resulting images are processed with an image processing model, so the clear picture among the plurality of images is obtained efficiently.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
FIG. 1 is a flow chart of an image processing method provided according to an embodiment of the present application;
FIG. 2 is a flow chart of an optional image processing method provided according to an embodiment of the present application;
fig. 3 is a schematic diagram of an image processing apparatus provided according to an embodiment of the present application.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to solve the problems of high computational complexity and low computational speed of acquiring a clear picture from a plurality of images obtained by continuous shooting in the related art, the following methods appear in the related art:
the method comprises the following steps: the method has the main defects that the contents of the dynamic images with different time sequences have difference, and the definition of corresponding contents in the images cannot be completely reflected by directly using filtering.
The second method comprises the following steps: the method has the main defects that ResNet152 is adopted in the model, so that the memory is occupied too much, the processing speed is low, and the generalization capability of the model is not strong.
The third method comprises the following steps: the method has the main defects that the image blocks and the calculation sobel are filtered by using a CPU, the multi-batch parallel processing of the image is not supported, and the sobel operator is not processed based on the image gray level and cannot strictly separate the image subject from the background.
The method four comprises the following steps: the method includes the steps of calculating the definition of edge images of sub-blocks after the sub-blocks of an image are divided, then sequencing the sub-blocks from large to small, and determining the definition value of the whole image based on the definition of the sub-image blocks with the preset number in the plurality of sequenced sub-image blocks.
Based on this, the present application intends to provide a solution to the above technical problem, the details of which will be explained in the following embodiments.
According to an embodiment of the present application, there is provided an image processing method.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 1, the method comprises the steps of:
step S102, acquiring a plurality of images, and aligning the same content of each image in the plurality of images to obtain a plurality of position-converted images.
Specifically, the plurality of images may be images continuously captured by a medical endoscope. Because the scene is dynamic, the content of the continuously captured images differs from frame to frame; the same content of the images is therefore position-aligned so that it can be processed uniformly in subsequent steps.
In order to improve the effect of aligning the same content of images when a large number of images are obtained by continuous shooting, optionally, in the image processing method provided in this embodiment of the present application, the obtaining the images with the transformed positions by acquiring the multiple images and aligning the same content of each of the multiple images includes: acquiring a plurality of first images obtained by continuously shooting a target object; grouping a plurality of first images according to the shooting time sequence, and acquiring a group of images from the grouped images to obtain a plurality of images; and determining a calibration image from the plurality of images, and aligning the same content of each image in the plurality of images based on the calibration image to obtain a plurality of position-transformed images.
In an alternative embodiment, the same lesion part can be shot in a medical endoscope to obtain a section of video, and a plurality of images contained in the section of video are grouped from front to back according to the preset video frame number to obtain a plurality of groups of images, so that the content contained in each image in each group of images does not change greatly.
Further, in order to facilitate the alignment of the same content in each set of images, a calibration image may be selected from one set of images, and the content of other images may be aligned with the content of the calibration image, so as to achieve the alignment of the same content of each of the plurality of images.
Optionally, in the image processing method provided in the embodiment of the present application, the determining the calibration image from the multiple images includes: acquiring a time section between the shooting time of the first image and the shooting time of the last image in the plurality of images to obtain a target time section; and acquiring a target image with the shooting time in the target time zone from the plurality of images, and determining the target image as a calibration image.
Specifically, since the viewing angle of the medical endoscope is not fixed, the contents of a group of captured images differ. To reduce the influence of edge-region content on the sharpness comparison within the group, the image in the middle of the time sequence can be selected as the calibration image, so that the group of images position-transformed according to the calibration image shares a larger common region, which facilitates calculating and comparing the sharpness of the same content.
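As an illustration of the grouping and calibration-image selection just described, the following is a minimal Python sketch; the `group_size` parameter and the frame list are assumptions for illustration, since the description only requires grouping by shooting order and picking a frame whose shooting time lies between the group's first and last frames:

```python
# Minimal sketch (not the patent's reference code): split continuously shot
# frames into groups and pick the middle-timestamp frame as calibration image.
from typing import List, Sequence

def make_groups(frames: Sequence, group_size: int) -> List[Sequence]:
    """Group frames by shooting order into fixed-size chunks."""
    return [frames[i:i + group_size] for i in range(0, len(frames), group_size)]

def pick_calibration(group: Sequence) -> int:
    """Middle of the time sequence: its shooting time lies between the first
    and last frame, maximizing the common region after alignment."""
    return len(group) // 2
```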
Optionally, in the image processing method provided in the embodiment of the present application, based on the calibration image, the same content of each of the multiple images is aligned in position, and obtaining the multiple position-transformed images includes: performing feature extraction on each image in the plurality of images to obtain feature points of each image; respectively matching the characteristic points of each non-calibrated image in the plurality of images with the characteristic points of the calibrated image to obtain a plurality of groups of matched characteristic point pair sets, wherein the non-calibrated images are images except the calibrated image in the plurality of images, and the matched characteristic point pair sets are sets formed by the characteristic points with the same content of the non-calibrated image and the calibrated image; and performing position conversion on the feature points of each non-calibrated image based on each group of matched feature point pair set to obtain a plurality of images with converted positions.
Specifically, the ORB algorithm (Oriented FAST and Rotated BRIEF) may be used to extract features and obtain the feature points of each image, and the BruteForce (brute-force) algorithm may be used to match the feature points of the calibration image with those of each non-calibration image, yielding an optimal matching feature point pair set for the calibration image and each non-calibration image; the feature points of the non-calibration image are then position-transformed according to this set.
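As a hedged illustration of this step, the sketch below uses OpenCV's ORB detector and brute-force Hamming matcher, which the description names; parameter values such as `nfeatures` are assumptions, not values taken from the patent:

```python
import cv2

def match_features(calib_gray, other_gray, nfeatures=1000):
    """ORB feature extraction on both images, then brute-force matching of the
    calibration image's descriptors against a non-calibration image's."""
    orb = cv2.ORB_create(nfeatures=nfeatures)
    kp_c, des_c = orb.detectAndCompute(calib_gray, None)
    kp_o, des_o = orb.detectAndCompute(other_gray, None)
    # Hamming distance suits ORB's binary descriptors; crossCheck keeps only
    # mutual best matches, approximating the "optimal matching point pair set".
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_c, des_o), key=lambda m: m.distance)
    return kp_c, kp_o, matches
```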
Optionally, in the image processing method provided in the embodiment of the present application, performing position conversion on the feature points of each non-calibration image based on each group of matching feature point pair sets to obtain a plurality of position-converted images includes: calculating a homography matrix according to each group of matched characteristic point pair sets; and performing position conversion on the characteristic points of the non-calibration images corresponding to the matched characteristic point pairs according to the homography matrix to obtain the images after position conversion.
Specifically, a homography matrix is calculated for each optimal matching feature point pair set; each homography matrix represents the transformation relationship between the calibration image and one non-calibration image in the group, and is used to transform that image so that the content of the same area is consistent across the images.
Specifically, the homography matrix is defined as follows:

$$H = \begin{bmatrix} a_{11} & a_{12} & t_x \\ a_{21} & a_{22} & t_y \\ v_1 & v_2 & s \end{bmatrix} = \begin{bmatrix} A_{2\times2} & T_{2\times1} \\ V^{T} & s \end{bmatrix}$$

where $[A_{2\times2}, T_{2\times1}]$ represents the affine transformation, with $a_{11}, a_{12}, a_{21}, a_{22}, t_x, t_y$ the affine transformation parameters and $t_x, t_y$ the translation parameters among them; $V^{T} = [v_1, v_2]$ represents the relationship of the edge intersection points after the transformation, with $v_1, v_2$ the projective transformation parameters satisfying $v_1 x + v_2 y = 1$, so that $1/v_1$ and $1/v_2$ are respectively the intersections of this line with the x-axis and the y-axis; and $s$ is a scaling factor associated with $V^{T}$.
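A sketch of this transformation step, assuming the matched keypoints from the previous sketch; the RANSAC reprojection threshold is an added robustness choice, not something the patent specifies:

```python
import cv2
import numpy as np

def warp_to_calibration(kp_c, kp_o, matches, other_img, size):
    """Estimate the homography H mapping the non-calibration image onto the
    calibration image, then warp it so the same content lands at the same place."""
    src = np.float32([kp_o[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_c[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # H is the 3x3 matrix above
    return cv2.warpPerspective(other_img, H, size)        # size = (width, height)
```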
Step S104, masking the plurality of images after position conversion to obtain a plurality of images containing the same area content.
Specifically, since the same content of the images after position conversion is located at the same position, the same image mask can be used to perform mask processing on the images after position conversion, so as to cut the image content, and obtain a plurality of images only containing the same area content.
Optionally, in the image processing method provided in this embodiment of the present application, before performing mask processing on a plurality of position-transformed images to obtain a plurality of images including the same area content, the method further includes: acquiring a target area formed in the position change process of each image after the position conversion to obtain a plurality of target areas, wherein the target areas are areas with pixel values smaller than a preset value; and constructing an image mask according to the plurality of target areas, wherein the image mask is a mask used in the mask processing.
Specifically, the preset value may be 0: the target region is the region whose pixel values are 0, that is, the black region introduced by the transformation. The black regions produced during the transformation of all the images are first collected and an image mask is derived from them; the mask is then applied to each image, setting the pixel values of the redundant regions to 0, so that all the images share the same zero-valued edge region and the same content region.
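The following sketch illustrates one way to build and apply such a mask, assuming warped BGR or grayscale arrays; treating exactly-zero pixels as the warp border is a simplification of the "preset value" in the description:

```python
import cv2
import numpy as np

def common_mask(warped_images):
    """Keep only pixels that are non-black in every warped image, i.e. the
    intersection of all valid regions (the complement of all black borders)."""
    mask = np.ones(warped_images[0].shape[:2], dtype=np.uint8)
    for img in warped_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        mask &= (gray > 0).astype(np.uint8)
    return mask * 255          # OpenCV masks are 0/255

def apply_mask(img, mask):
    """Zero out everything outside the common content region."""
    return cv2.bitwise_and(img, img, mask=mask)
```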
Step S106, based on a plurality of images containing the same area content, determining definition reference information of each image in the plurality of images.
In order to ensure that the image processing effect reduces the complexity of the model, optionally, in the image processing method provided in the embodiment of the present application, determining the sharpness reference information of each of the plurality of images based on the plurality of images including the same region content includes: inputting a plurality of images containing the same area content into an image processing model to obtain definition reference information of each image in the plurality of images; the image processing model is obtained by the following steps: determining a target convolution kernel according to a second derivative calculation formula of pixel values of the image; and determining the weight parameters of the convolution layer of the neural network model based on the target convolution kernel to obtain an image processing model.
Specifically, the image processing model may be a model for determining an edge sharpness degree of the image, and the edge sharpness degree of the image may reflect the sharpness degree of the image, and a result of processing the image by the image processing model is sharpness reference information, and the sharpness reference information represents sharpness of the image, for example, the sharpness reference information may be a score between 0 and 1.
Specifically, the image processing model may be a neural network model including one convolutional layer, and a method of setting a weight parameter of a fixed convolutional layer may be used instead of a method of obtaining a weight parameter through multiple times of training, so as to achieve the purpose of simplifying the model training process.
The processing effect of the image processing model depends on the weight parameters of the convolutional layer. It should be noted that the first derivative of a function becomes large locally where the function changes sharply, and the same holds at the edge positions of an image: the first derivative reaches its maximum, and the second derivative is 0, at the edges. The edge positions of an image can therefore be located where its second derivative is 0, and the convolution kernel parameters, i.e. the weight parameters of the convolutional layer, can be determined from the second-derivative calculation formula of the image.
Specifically, the image second-derivative calculation formula is as follows:

$$\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$$

where $\Delta f$ is the second derivative of the image $f$, and $x$ and $y$ represent the horizontal and vertical directions of the image, respectively.
It should be noted that, since the image content is discrete, the differential can be approximated by a difference, and the second derivatives are obtained for the x direction and the y direction of the image respectively.
The first and second derivatives of the image in the x direction are, respectively:

$$\frac{\partial f}{\partial x} \approx f(x+1,\,y) - f(x,\,y)$$

$$\frac{\partial^2 f}{\partial x^2} \approx f(x+1,\,y) + f(x-1,\,y) - 2f(x,\,y)$$

and the first and second derivatives of the image in the y direction are, respectively:

$$\frac{\partial f}{\partial y} \approx f(x,\,y+1) - f(x,\,y)$$

$$\frac{\partial^2 f}{\partial y^2} \approx f(x,\,y+1) + f(x,\,y-1) - 2f(x,\,y)$$

Thus, combining the x direction with the y direction yields the second derivative of the entire image:

$$\Delta f \approx f(x+1,\,y) + f(x-1,\,y) + f(x,\,y+1) + f(x,\,y-1) - 4f(x,\,y)$$

Further, from the coefficients of each term in the above expression of the second derivative of the image, the following 3 × 3 convolution kernel can be obtained:

$$\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$
the convolution kernel is used as the weight parameter of the convolution layer of the neural network model, so that the second derivative of the image can be obtained after the neural network processes the image, and the second derivative of the image can represent the clear morning reading of the edge area of the image, namely, the edge definition degree of the image can be represented by the result of processing the image by the neural network.
It should be noted that the traditional filtering method is slow and does not support multi-batch parallel operation, while the depth-model method's models are too large and therefore too slow. In this embodiment, the convolution kernel is determined from the second-order partial derivatives of the image and used as the weight parameters of the convolutional layer of the neural network model, so the algorithm can process multiple batches of images in parallel on a GPU and obtain the sharpness values of the images quickly and efficiently.
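A minimal PyTorch sketch of such a model — one convolutional layer whose weights are fixed to the Laplacian kernel derived above, so it runs batched on a GPU without any training; the class and variable names are assumptions for illustration:

```python
import torch
import torch.nn as nn

class LaplacianSharpness(nn.Module):
    """One conv layer whose fixed weights are the Laplacian kernel derived
    above; its output approximates the image's second derivative."""
    def __init__(self):
        super().__init__()
        kernel = torch.tensor([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]]).view(1, 1, 3, 3)
        self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
        self.conv.weight = nn.Parameter(kernel, requires_grad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale images; returns edge-response maps.
        return self.conv(x)

# Batched GPU use (illustrative):
# model = LaplacianSharpness().cuda()
# edges = model(batch.cuda())   # batch: (N, 1, H, W) float tensor
```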
And step S108, determining an image with the definition meeting a preset condition from the plurality of images based on the definition reference information of each image.
Optionally, in the image processing method provided in this embodiment of the present application, the definition reference information includes a definition reference value representing a definition degree of a corresponding image, and determining, from the plurality of images, an image whose definition satisfies the preset condition based on the definition reference information of each image includes: determining the variance of the definition reference value of each image, and determining the image corresponding to the definition reference value with the maximum variance as the image with the definition meeting the preset condition; or determining the image corresponding to the definition reference value which is greater than the definition reference value threshold value as the image with the definition meeting the preset condition; or based on the size of the definition reference value of each image, sorting the definition reference values of the plurality of images, and determining the image corresponding to the definition reference value sorted at the designated position as the image with the definition meeting the preset condition.
Specifically, in an alternative embodiment, the variance is calculated over the processing result of the image processing model, that is, over the image's definition reference information. This value can be regarded as the intensity of variation at the image's edges, i.e. the image's sharpness detection result, so the image with the largest variance is determined as the image whose definition meets the preset condition.
The variance is calculated as follows:

$$s^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2$$

where $s^2$ is the variance, $\bar{x}$ is the mean of the image, $x_i$ is the value of the i-th pixel, and $n$ is the number of pixels in the image.
In another optional implementation, a definition reference value threshold used for measuring whether the definition meets the preset condition is determined, the definition reference values of the images obtained by processing the image processing model are respectively compared with the definition reference value threshold, and the image with the definition meeting the preset condition is determined to be the image with the definition greater than the definition reference value threshold.
In another optional implementation, the definition reference values are sorted to obtain a ranking; the images corresponding to the first N values, the last N values, or N values in the middle of the ranking may each be determined as images whose definition meets the preset condition. The specific designated position is not limited in this embodiment.
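The three selection rules can be sketched as follows; scoring each image by the variance of its edge response follows the variance formula above, and `threshold` and `top_n` are illustrative parameters, not values from the patent:

```python
import torch

def select_by_definition(edge_maps: torch.Tensor, rule: str = "variance",
                         threshold: float = 0.0, top_n: int = 1):
    """edge_maps: (N, 1, H, W) output of the fixed-kernel model. Each image's
    score is the variance of its edge response."""
    scores = edge_maps.flatten(1).var(dim=1)
    if rule == "variance":                       # single image with max variance
        return scores.argmax().item()
    if rule == "threshold":                      # all images above a threshold
        return (scores > threshold).nonzero(as_tuple=True)[0].tolist()
    return scores.argsort(descending=True)[:top_n].tolist()   # top-N ranked
```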
According to the image processing method provided by the embodiment of the application, a plurality of images are acquired and the same content of each image is position-aligned to obtain a plurality of position-transformed images; the position-transformed images are masked to obtain a plurality of images containing the same area content; definition reference information is determined for each image based on the plurality of images containing the same area content; and the image whose definition meets a preset condition is determined from the plurality of images based on the definition reference information of each image. This solves the problems in the related art of high computational complexity and low computational speed when acquiring a clear picture from a plurality of continuously shot images, and, by extracting the same content of each image and processing the resulting images with an image processing model, achieves the effect of efficiently obtaining the clear picture among the plurality of images.
Fig. 2 is a flowchart of an optional image processing method according to an embodiment of the application. As shown in fig. 2, the method proceeds as follows:
Firstly, a group of images is collected with a medical endoscope and the image in the middle of the time sequence is selected as the calibration image; feature extraction and feature matching are performed between the calibration image and the other images in the group to obtain the optimal matching point pair sets. Homography matrices are then calculated for all optimal matching feature point pair sets and used to transform the remaining images in the group other than the calibration image.
Further, a common mask is calculated for all the transformed images and applied to each image, resulting in cropped images having regions of the same content.
And finally, inputting the cut images into a definition detection model to obtain the definition score of each image, and taking the image corresponding to the maximum definition score in a group of images, namely the clearest image in the group of images.
According to this embodiment, an image registration algorithm is applied to the non-calibration pictures in a group to obtain the positions of the regions corresponding to the calibration picture, and a mask computation yields the group's common content region, so the compared content is identical across pictures. The corresponding region blocks are then fed into the sharpness detection model, which outputs a sharpness score for each block; comparing these scores yields the picture with the best sharpness quality, improving both the accuracy and the efficiency of obtaining the clear image from a group of pictures.
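Putting the steps together, the following end-to-end sketch mirrors this flow; it assumes the helper sketches above (match_features, warp_to_calibration, common_mask, apply_mask, LaplacianSharpness, select_by_definition) are in scope, and is illustrative rather than the patent's reference implementation:

```python
import cv2
import torch

def sharpest_in_group(group_bgr):
    """Return the sharpest frame of one group of continuously shot BGR frames."""
    mid = len(group_bgr) // 2                       # middle-timestamp calibration image
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in group_bgr]
    h, w = grays[mid].shape
    warped = []
    for i, img in enumerate(group_bgr):
        if i == mid:
            warped.append(img)
        else:
            kp_c, kp_o, matches = match_features(grays[mid], grays[i])
            warped.append(warp_to_calibration(kp_c, kp_o, matches, img, (w, h)))
    mask = common_mask(warped)                      # shared content region
    cropped = [apply_mask(im, mask) for im in warped]
    batch = torch.stack([
        torch.from_numpy(cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)).float().unsqueeze(0)
        for im in cropped])                         # (N, 1, H, W)
    edges = LaplacianSharpness()(batch)
    return group_bgr[select_by_definition(edges)]   # max-variance rule
```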
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
The embodiment of the present application further provides an image processing apparatus, and it should be noted that the image processing apparatus according to the embodiment of the present application may be configured to execute the method for image processing provided by the embodiment of the present application. The following describes an image processing apparatus according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 3, the apparatus includes: a first acquisition unit 10, a first processing unit 20, a second processing unit 30 and a first determination unit 40.
Specifically, the first obtaining unit 10 is configured to obtain a plurality of images, and align the same content of each of the plurality of images to obtain a plurality of position-transformed images.
The first processing unit 20 is configured to perform mask processing on the plurality of position-converted images to obtain a plurality of images including the same area content.
And the second processing unit 30 is configured to determine sharpness reference information of each of the plurality of images based on the plurality of images including the same region content.
And a first determining unit 40 for determining an image having a sharpness satisfying a preset condition from the plurality of images based on the sharpness reference information of each image.
Optionally, in the image processing apparatus provided in the embodiment of the present application, the first obtaining unit 10 includes: the device comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring a plurality of first images obtained by continuously shooting a target object; the grouping module is used for grouping the first images according to the shooting time sequence and acquiring a group of images from the grouped images to obtain a plurality of images; the first determining module is used for determining a calibration image from the plurality of images, and aligning the same content of each image in the plurality of images based on the calibration image to obtain a plurality of position-transformed images.
Optionally, in the image processing apparatus provided in the embodiment of the present application, the first determining module includes: the first acquisition submodule is used for acquiring a time section between the shooting time of the first image and the shooting time of the last image in the plurality of images to obtain a target time section; and the second acquisition sub-module is used for acquiring a target image with the shooting time in the target time section from the plurality of images and determining the target image as a calibration image.
Optionally, in the image processing apparatus provided in the embodiment of the present application, the first determining module further includes: the characteristic extraction submodule is used for extracting the characteristic of each image in the plurality of images to obtain the characteristic point of each image; the matching sub-module is used for matching the characteristic points of each non-calibrated image in the plurality of images with the characteristic points of the calibrated image respectively to obtain a plurality of groups of matching characteristic point pair sets, wherein the non-calibrated images are images except the calibrated images in the plurality of images, and the matching characteristic point pair sets are sets formed by the characteristic points of the non-calibrated images and the characteristic points of the calibrated images with the same content; and the conversion sub-module is used for carrying out position conversion on the characteristic points of each non-calibration image based on each group of matched characteristic point pair sets to obtain a plurality of images with converted positions.
Optionally, in the image processing apparatus provided in the embodiment of the present application, the converting sub-module includes: the calculation submodule is used for calculating a homography matrix according to each group of matched characteristic point pair sets; and the position conversion sub-module is used for carrying out position conversion on the characteristic points of the non-calibration images corresponding to the matched characteristic point pairs set according to the homography matrix to obtain the images after position conversion.
Optionally, in the image processing apparatus provided in the embodiment of the present application, the apparatus further includes: the second acquisition unit is used for acquiring a target area formed by each image after position conversion in the process of position change before masking the images after position conversion to obtain a plurality of images containing the same area content to obtain a plurality of target areas, wherein the target areas are areas with pixel values smaller than a preset value; and the construction unit is used for constructing an image mask according to the plurality of target areas, wherein the image mask is used in mask processing.
Optionally, in the image processing apparatus provided in the embodiment of the present application, the second processing unit 30 is further configured to input a plurality of images containing the same area content into the image processing model, so as to obtain the sharpness reference information of each of the plurality of images; the image processing model is obtained by the following steps: determining a target convolution kernel according to a second derivative calculation formula of pixel values of the image; and determining the weight parameters of the convolution layer of the neural network model based on the target convolution kernel to obtain an image processing model.
Optionally, in the image processing apparatus provided in the embodiment of the present application, the first determining unit 40 includes: the second determining module is used for determining the variance of the definition reference value of each image and determining the image corresponding to the definition reference value with the maximum variance as the image with the definition meeting the preset condition; or a third determining module, configured to determine an image corresponding to the sharpness reference value greater than the sharpness reference value threshold as an image whose sharpness meets a preset condition; or the third determining module is used for sequencing the definition reference values of the plurality of images based on the size of the definition reference value of each image, and determining the image corresponding to the definition reference value sequenced at the designated position as the image with the definition meeting the preset condition.
In the image processing apparatus provided in the embodiment of the present application, the first acquiring unit 10 acquires a plurality of images and aligns the same content of each of the plurality of images to obtain a plurality of position-transformed images; the first processing unit 20 masks the position-transformed images to obtain a plurality of images containing the same area content; the second processing unit 30 determines definition reference information for each image based on the plurality of images containing the same area content; and the first determining unit 40 determines, based on the definition reference information of each image, the image whose definition meets a preset condition. This solves the problems in the related art of high computational complexity and low computational speed when obtaining a clear picture from a plurality of continuously shot images: the same content of each image is extracted to obtain a plurality of images containing the same area content, and the obtained images are processed with an image processing model, efficiently yielding the clear picture among the plurality of images.
The image processing apparatus includes a processor and a memory, the first acquiring unit 10, the first processing unit 20, the second processing unit 30, the first determining unit 40, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. The kernel can be set to be one or more than one, and the problems of high calculation complexity and low calculation speed of acquiring a clear picture from a plurality of images obtained by continuous shooting in the related technology are solved by adjusting kernel parameters.
The memory may include forms of volatile memory in a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
The embodiment of the application also provides a nonvolatile storage medium, which comprises a stored program, wherein the program controls the equipment where the nonvolatile storage medium is located to execute an image processing method when running.
The embodiment of the application also provides an electronic device, which comprises a processor and a memory; the memory has stored therein computer readable instructions, and the processor is configured to execute the computer readable instructions, wherein the computer readable instructions when executed perform an image processing method. The electronic device herein may be a server, a PC, a PAD, a mobile phone, etc.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An image processing method, comprising:
acquiring a plurality of images, and aligning the same content of each image in the plurality of images to obtain a plurality of position-converted images;
masking the images after the position transformation to obtain a plurality of images containing the same area content;
determining definition reference information of each image in the plurality of images based on the plurality of images containing the same area content;
and determining an image with the definition meeting a preset condition from the plurality of images based on the definition reference information of each image.
2. The method of claim 1, wherein the obtaining the plurality of images and aligning the same content of each of the plurality of images to obtain the plurality of position-transformed images comprises:
acquiring a plurality of first images obtained by continuously shooting a target object;
grouping a plurality of first images according to the shooting time sequence, and acquiring a group of images from the grouped images to obtain a plurality of images;
and determining a calibration image from the plurality of images, and aligning the same content of each image in the plurality of images based on the calibration image to obtain the plurality of position-transformed images.
3. The method of claim 2, wherein the determining a calibration image from the plurality of images comprises:
acquiring a time section between the shooting time of the first image and the shooting time of the last image in the plurality of images to obtain a target time section;
and acquiring a target image with shooting time in the target time zone from the plurality of images, and determining the target image as the calibration image.
4. The method according to claim 2, wherein the aligning the same content of each of the plurality of images based on the calibration image to obtain the plurality of position-transformed images comprises:
performing feature extraction on each image in the plurality of images to obtain a feature point of each image;
respectively matching the feature points of each non-calibration image in the plurality of images with the feature points of the calibration image to obtain a plurality of groups of matched feature point pair sets, wherein the non-calibration images are images in the plurality of images except the calibration image, and the matched feature point pair sets are sets formed by feature points with the same content of the non-calibration images and the calibration image;
and performing position conversion on the feature points of each non-calibration image based on each group of the matched feature point pair set to obtain the images after position conversion.
5. The method according to claim 4, wherein performing position transformation on the feature points of each non-calibration image based on each group of matched feature point pair sets to obtain the position-transformed images comprises:
calculating a homography matrix according to each group of matched feature point pair sets; and
performing position transformation, according to the homography matrix, on the feature points of the non-calibration image corresponding to the matched feature point pair set to obtain the position-transformed image.
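Continuing the sketch for claim 5, still assuming OpenCV: a homography matrix is estimated from one matched feature point pair set and then used to transform the non-calibration image into the calibration image's coordinate frame. RANSAC and the 5-pixel reprojection threshold are assumptions; the claim only requires calculating the matrix from the matched pairs.

```python
import numpy as np
import cv2

def warp_to_calibration(image, kp_src, kp_dst, matches):
    src = np.float32([kp_src[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_dst[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Homography calculated from the matched feature point pair set.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = image.shape[:2]
    # Position transformation of the non-calibration image.
    return cv2.warpPerspective(image, H, (w, h))
```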
6. The method according to claim 1, wherein before performing the mask processing on the plurality of position-transformed images to obtain the plurality of images containing the same area content, the method further comprises:
acquiring a target area formed during the position transformation of each position-transformed image to obtain a plurality of target areas, wherein a target area is an area whose pixel values are smaller than a preset value; and
constructing an image mask according to the plurality of target areas, wherein the image mask is used in the mask processing.
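A sketch of the mask construction of claim 6: warping fills the uncovered border of each position-transformed image with (near-)zero pixels, and intersecting the valid regions of all images yields one shared image mask. Treating pixels below a preset value of 1 as invalid is an assumption.

```python
import numpy as np

def build_common_mask(warped_images, preset_value=1):
    """Intersect the valid regions of all position-transformed images;
    a target area is wherever pixel values fall below the preset value."""
    mask = np.ones(warped_images[0].shape[:2], dtype=bool)
    for img in warped_images:
        gray = img if img.ndim == 2 else img.mean(axis=2)
        mask &= gray >= preset_value
    return mask
```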
7. The method according to claim 1, wherein determining the definition reference information of each image in the plurality of images based on the plurality of images containing the same area content comprises:
inputting the plurality of images containing the same area content into an image processing model to obtain the definition reference information of each image in the plurality of images;
wherein the image processing model is obtained by:
determining a target convolution kernel according to a second-derivative calculation formula for pixel values of an image; and
determining weight parameters of a convolution layer of a neural network model based on the target convolution kernel to obtain the image processing model.
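A sketch of the model construction of claim 7, assuming PyTorch. The claim derives the target convolution kernel from a second-derivative calculation formula; the standard 3×3 discrete Laplacian below is one such kernel and is an assumption, since the claim does not spell out the formula.

```python
import torch
import torch.nn as nn

# Discrete second derivative in x and y:
# I(x+1,y) + I(x-1,y) + I(x,y+1) + I(x,y-1) - 4*I(x,y)
laplacian = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]])

# Single-channel convolution layer whose weights are set from the
# target kernel rather than learned.
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(laplacian.view(1, 1, 3, 3))
conv.weight.requires_grad_(False)
```

Feeding the masked grayscale images through `conv` and aggregating the absolute response over the shared mask (e.g., its mean) would yield one definition reference value per image, which is what claim 8 consumes.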
8. The method according to claim 1, wherein the definition reference information includes a definition reference value representing the degree of definition of the corresponding image, and determining, from the plurality of images based on the definition reference information of each image, an image whose definition meets the preset condition comprises:
determining the variance of the definition reference values of the plurality of images, and determining the image corresponding to the definition reference value with the largest variance as the image whose definition meets the preset condition; or
determining an image whose definition reference value is greater than a definition reference value threshold as the image whose definition meets the preset condition; or
sorting the definition reference values of the plurality of images by magnitude, and determining the image corresponding to the definition reference value ranked at a designated position as the image whose definition meets the preset condition.
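A sketch of the selection step of claim 8, covering the threshold and sorting criteria; the variance criterion is left out because its translated wording is ambiguous. The threshold value and the designated position (top-1) are assumptions.

```python
import numpy as np

def select_by_definition(images, scores, threshold=None, top_k=1):
    """scores holds one definition reference value per image."""
    scores = np.asarray(scores, dtype=float)
    if threshold is not None:
        # Keep images whose definition reference value exceeds the threshold.
        keep = np.flatnonzero(scores > threshold)
    else:
        # Sort by magnitude and keep the designated leading positions.
        keep = np.argsort(scores)[::-1][:top_k]
    return [images[i] for i in keep]
```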
9. An image processing apparatus characterized by comprising:
a first acquisition unit, configured to acquire a plurality of images and align the same content of each image in the plurality of images to obtain a plurality of position-transformed images;
a first processing unit, configured to perform mask processing on the plurality of position-transformed images to obtain a plurality of images containing the same area content;
a second processing unit, configured to determine definition reference information of each image in the plurality of images based on the plurality of images containing the same area content; and
a first determining unit, configured to determine, from the plurality of images based on the definition reference information of each image, an image whose definition meets a preset condition.
10. A non-volatile storage medium, comprising a stored program, wherein, when the program is executed, a device in which the non-volatile storage medium is located is controlled to perform the image processing method according to any one of claims 1 to 8.
CN202110874352.XA 2021-07-30 2021-07-30 Image processing method, image processing apparatus, and non-volatile storage medium Active CN113409375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110874352.XA CN113409375B (en) 2021-07-30 2021-07-30 Image processing method, image processing apparatus, and non-volatile storage medium

Publications (2)

Publication Number Publication Date
CN113409375A true CN113409375A (en) 2021-09-17
CN113409375B CN113409375B (en) 2023-02-24

Family

ID=77688332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110874352.XA Active CN113409375B (en) 2021-07-30 2021-07-30 Image processing method, image processing apparatus, and non-volatile storage medium

Country Status (1)

Country Link
CN (1) CN113409375B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2541539A1 (en) * 2010-02-26 2013-01-02 Sharp Kabushiki Kaisha Image display device and image display method
CN104574412A (en) * 2015-01-22 2015-04-29 浙江大学 Remote sensing image defogging method under inhomogeneous cloud and fog condition
CN108961183A (en) * 2018-06-27 2018-12-07 Oppo广东移动通信有限公司 Image processing method, terminal device and computer readable storage medium
US20190188870A1 (en) * 2017-12-20 2019-06-20 International Business Machines Corporation Medical image registration guided by target lesion
CN112802033A (en) * 2021-01-28 2021-05-14 Oppo广东移动通信有限公司 Image processing method and device, computer readable storage medium and electronic device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782300A (en) * 2022-04-29 2022-07-22 苏州华兴源创科技股份有限公司 Product image reconstruction method, product reconstruction model training method and device
WO2023206844A1 (en) * 2022-04-29 2023-11-02 苏州华兴源创科技股份有限公司 Product image reconstruction method and apparatus, and product reconstruction model training method and apparatus

Also Published As

Publication number Publication date
CN113409375B (en) 2023-02-24

Similar Documents

Publication Publication Date Title
CN110428366B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109978839B (en) Method for detecting wafer low-texture defects
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108921897B (en) Method and apparatus for locating card area
CN111199230B (en) Method, device, electronic equipment and computer readable storage medium for target detection
CN111612781A (en) Screen defect detection method and device and head-mounted display equipment
CN110852997B (en) Dynamic image definition detection method and device, electronic equipment and storage medium
CN114529459B (en) Method, system and medium for enhancing image edge
CN111680690B (en) Character recognition method and device
CN111028170B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN109190617B (en) Image rectangle detection method and device and storage medium
CN111192241B (en) Quality evaluation method and device for face image and computer storage medium
CN113674203A (en) Defect detection model training method and device and defect detection method and device
CN113409375B (en) Image processing method, image processing apparatus, and non-volatile storage medium
CN112991159B (en) Face illumination quality evaluation method, system, server and computer readable medium
CN112204957A (en) White balance processing method and device, movable platform and camera
Khalid et al. Image de-fencing using histograms of oriented gradients
CN111767752B (en) Two-dimensional code identification method and device
CN113822927B (en) Face detection method, device, medium and equipment suitable for weak quality image
CN113239738B (en) Image blurring detection method and blurring detection device
CN111627041B (en) Multi-frame data processing method and device and electronic equipment
Hwang et al. Source camera identification based on interpolation via lens distortion correction
CN112733565A (en) Two-dimensional code coarse positioning method, equipment and storage medium
CN112634298A (en) Image processing method and device, storage medium and terminal
CN111985535A (en) Method and device for optimizing human body depth map through neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant