CN110738625B - Image resampling method, device, terminal and computer readable storage medium - Google Patents


Info

Publication number
CN110738625B
CN110738625B (application CN201911003561.6A)
Authority
CN
China
Prior art keywords
image
sample
image processing
neighborhood
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911003561.6A
Other languages
Chinese (zh)
Other versions
CN110738625A (en)
Inventor
张建中
陈岩
蒋燚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911003561.6A priority Critical patent/CN110738625B/en
Publication of CN110738625A publication Critical patent/CN110738625A/en
Application granted granted Critical
Publication of CN110738625B publication Critical patent/CN110738625B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application belongs to the technical field of image processing, and in particular relates to an image resampling method, apparatus, terminal, and computer-readable storage medium. The method determines, according to the to-be-processed type of an original image, a preset image processing model for resampling the original image and the neighborhood pixel selection type corresponding to that model; it then resamples the original image according to the neighborhood pixel selection type and the preset image processing model to obtain the target image corresponding to the original image. This solves the problem of the poor universality of conventional image resampling methods.

Description

Image resampling method, device, terminal and computer readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image resampling method, an image resampling device, a terminal, and a computer-readable storage medium.
Background
Resampling is a fundamental task of image processing used to modify information in an image, for example by adding or deleting pixel points. At present, common image resampling methods include the nearest-neighbor method, bilinear interpolation, and cubic convolution interpolation.
However, each of these resampling methods is applicable only to a single type of image processing scenario. For example, a given method can be applied only to scenarios that require enhancing an image, or only to scenarios that require blurring an image, and therefore has poor universality.
Disclosure of Invention
The embodiment of the application provides an image resampling method, an image resampling device, a terminal and a computer readable storage medium, which can be suitable for various image processing application scenes.
A first aspect of an embodiment of the present application provides an image resampling method, including:
acquiring an original image and a type to be processed of the original image, and determining a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model according to the type to be processed of the original image;
sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image;
inputting pixel values of a plurality of neighborhood pixel points corresponding to each pixel point of a target image into the preset image processing model, and outputting a weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points and an image processing coefficient corresponding to the plurality of neighborhood pixel points by the preset image processing model;
and respectively calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points respectively corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points output by the preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
A second aspect of the embodiments of the present application provides an image resampling apparatus, including:
an acquisition unit, configured to acquire an original image and the to-be-processed type of the original image, and to determine, according to the to-be-processed type of the original image, a preset image processing model for resampling the original image and the neighborhood pixel selection type corresponding to the preset image processing model;
the sampling unit is used for sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image;
the output unit is used for inputting the pixel values of a plurality of neighborhood pixel points corresponding to each pixel point of the target image into the preset image processing model, and outputting the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points and the image processing coefficient corresponding to the plurality of neighborhood pixel points by the preset image processing model;
and the calculating unit is used for calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points output by the preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the above method.
In the embodiments of the application, a preset image processing model for resampling an original image, and the neighborhood pixel selection type corresponding to that model, are determined according to the to-be-processed type of the original image; the original image is then resampled according to the neighborhood pixel selection type and the preset image processing model to obtain the corresponding target image. When the original image requires image processing under different application scenarios (that is, different to-be-processed types), it can be resampled using the preset image processing model determined by its to-be-processed type and the neighborhood pixel selection type corresponding to that model. The method can therefore be applied to different image processing application scenarios: whatever image processing achievable through resampling needs to be performed on the original image, the original image can be resampled with the preset image processing model corresponding to that scenario and its associated neighborhood pixel selection type. This solves the problem of the poor universality of conventional image resampling methods.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of an implementation of an image resampling method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of pixel sampling performed on an original image according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a block averaging process provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an implementation process for training an image processing model to be trained according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an image resampling apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Resampling is a fundamental task of image processing used to modify information in an image, for example by adding or deleting pixel points; an image also needs to be resampled when it is rotated, translated, scaled, and so on. At present, common image resampling methods include the nearest-neighbor method, bilinear interpolation, and cubic convolution interpolation.
However, each of these resampling methods is applicable only to a single type of image processing scenario. For example, a given method can be applied only to scenarios that require enhancing an image, or only to scenarios that require blurring an image, and therefore has poor universality.
Based on this, embodiments of the present application provide an image resampling method, an image resampling apparatus, a terminal, and a computer-readable storage medium, which may be applicable to a variety of image processing application scenarios.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic diagram illustrating the implementation flow of an image resampling method provided in an embodiment of the present application. The method is applied to a terminal, can be executed by an image resampling apparatus configured on the terminal, and is suitable for situations where the universality of the image resampling method needs to be improved. The terminal may be an intelligent terminal capable of realizing a photographing function, such as a mobile phone, a computer, or a wearable device. The image resampling method may include steps 101 to 104.
Step 101, obtaining an original image and a type to be processed of the original image, and determining a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model according to the type to be processed of the original image.
The to-be-processed types of the original image may include machine-vision application types such as face recognition, object detection, and flaw detection, and may also include image processing types such as blurring, enhancement, sharpening, and brightness adjustment.
It should be noted that each to-be-processed type of the original image determines one preset image processing model, and each preset image processing model has a uniquely corresponding neighborhood pixel selection type, which may be the selection type adopted when sampling pixel points of the original sample images during the training that produced that model.
Specifically, please refer to the description of steps 401 to 405 for the training process of the preset image processing model, which is not described herein again.
It should be noted that different image processing needs to be performed on the original image depending on its to-be-processed type; that is, the preset image processing models determined by different to-be-processed types are different, and the image processing effects they achieve also differ.
For example, when the to-be-processed type of the original image is flaw detection, the preset image processing model enhances flawed regions and blurs non-flawed regions; if the to-be-processed type is brightness adjustment, the preset image processing model can reduce brightness where exposure is high and increase brightness where exposure is low.
Step 102, sampling pixel points of the original image according to the neighborhood pixel point selection type, and obtaining a plurality of neighborhood pixel points corresponding to each pixel point of the target image.
The plurality of neighborhood pixel points corresponding to each pixel point of the target image may be obtained by traversing each pixel point in the original image and determining, according to the neighborhood pixel selection type, the pixel points adjacent to each traversed pixel point; each traversed pixel point together with its adjacent pixel points forms one group of neighborhood pixel points, yielding the groups of neighborhood pixel points corresponding to each pixel point of the target image.
Specifically, the neighborhood pixel selection types may include a nearest-distance selection type, a square selection type, and a cross selection type, and the number of neighborhood pixel points can be adjusted according to the actual situation.
For example, as shown in fig. 2, if the neighborhood pixel selection type is the square selection type and the number of neighborhood pixel points is 9, the original pixel block 202 composed of pixel point 201 of the original image 21 and the pixel points adjacent to it corresponds to pixel point 204 of the target image 22, and by traversing each pixel point of the original image 21, each pixel point of the target image 22 can finally be obtained. As another example, if the neighborhood pixel selection type is the cross selection type and the number of neighborhood pixel points is 5, the original pixel block 203 composed of pixel point 201 and its adjacent pixel points corresponds to pixel point 205 of the target image 22, and by traversing each pixel point of the original image 21, each pixel point of the target image 22 can finally be obtained.
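To make the square and cross selection types concrete, the following Python sketch gathers one neighborhood per target pixel. This is an illustration only, not the patent's implementation; the exact offset shapes and the edge-clamping policy are assumptions.

```python
import numpy as np

def neighborhood_offsets(selection_type):
    """Return (dy, dx) offsets for a neighborhood selection type.

    'square' -> the 3x3 block around the centre (9 points);
    'cross'  -> the centre plus its 4-neighbours (5 points).
    These shapes are illustrative readings of the patent's
    "square" and "cross" selection types.
    """
    if selection_type == "square":
        return [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    if selection_type == "cross":
        return [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    raise ValueError(f"unknown selection type: {selection_type}")

def sample_neighborhood(image, y, x, selection_type):
    """Gather the neighborhood pixel values for one pixel, clamping at edges."""
    h, w = image.shape[:2]
    return np.array([image[min(max(y + dy, 0), h - 1),
                           min(max(x + dx, 0), w - 1)]
                     for dy, dx in neighborhood_offsets(selection_type)])
```

The number of neighborhood points (m = 9 or m = 5 here) is what the patent says "can be adjusted according to the actual situation".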
Specifically, when determining the coordinates of the target-image pixel point corresponding to a plurality of neighborhood pixel points, the coordinates in the original image of a certain pixel point within the original pixel block formed by those neighborhood pixel points are converted into the pixel coordinates, in the target image, of the target pixel point corresponding to that original pixel block.
For example, the coordinates of the target-image pixel point corresponding to a plurality of neighborhood pixel points can be determined by the formulas x_t = x_i * Ratio_x and y_t = y_i * Ratio_y, where (x_i, y_i) are the coordinates in the original image of any pixel point in the original pixel block formed by the neighborhood pixel points, for example the coordinates in the original image 21 of pixel point 201 or pixel point 206 in the original pixel block 202 of fig. 2, and (x_t, y_t) are the pixel coordinates in the target image of the pixel point corresponding to that original pixel block, for example the coordinates in the target image of pixel point 204 of the target image 22, which corresponds to the original pixel block 202 in fig. 2. Ratio_x is the abscissa conversion coefficient that converts the abscissa, in the original image, of a pixel point in the original pixel block into the abscissa, in the target image, of the corresponding target pixel point; Ratio_y is the analogous ordinate conversion coefficient.
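A minimal sketch of this coordinate conversion. The multiplications x_t = x_i * Ratio_x and y_t = y_i * Ratio_y follow the text; rounding to integer pixel indices is an added assumption, since the patent does not specify one.

```python
def map_to_target(x_i, y_i, ratio_x, ratio_y):
    """Map original-image coordinates to target-image pixel coordinates.

    Implements x_t = x_i * Ratio_x and y_t = y_i * Ratio_y; rounding to
    integer pixel indices is an assumption made for this sketch.
    """
    return round(x_i * ratio_x), round(y_i * ratio_y)
```

For instance, with Ratio_x = Ratio_y = 0.5 (downscaling by 2), the original pixel at (4, 6) maps to the target pixel at (2, 3).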
Step 103, inputting pixel values of a plurality of neighborhood pixels corresponding to each pixel of the target image into a preset image processing model, and outputting a weight value corresponding to each neighborhood pixel in the plurality of neighborhood pixels and an image processing coefficient corresponding to the plurality of neighborhood pixels by the preset image processing model.
It should be noted that different preset image processing models output different weight values for each of the plurality of neighborhood pixel points and different image processing coefficients for those points, and different weight values and image processing coefficients lead to different computed pixel values for the target image. Therefore, when the pixel values of the neighborhood pixel points corresponding to each pixel point of the target image are input into the preset image processing model determined by the to-be-processed type of the original image, the weight value of each neighborhood pixel point and the image processing coefficients needed to realize the image processing effect corresponding to that to-be-processed type can be obtained.
And 104, respectively calculating the pixel value of each pixel point of the target image according to the pixel values of a plurality of neighborhood pixel points respectively corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in a plurality of neighborhood pixel points output by a preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
Specifically, the pixel value t of each pixel point of the target image can be calculated by the formula

t = α * (w_0*s_0 + w_1*s_1 + … + w_(m-1)*s_(m-1)) + β

where α denotes the contrast coefficient, β denotes the luminance coefficient, s_0 to s_(m-1) denote the pixel values of the plurality of neighborhood pixel points corresponding to the target pixel point, w_0 to w_(m-1) denote the weight values output by the preset image processing model for each of those neighborhood pixel points, and m denotes the number of neighborhood pixel points, m being an integer greater than or equal to 2.
That is to say, calculating the pixel value of each pixel point of the target image according to the pixel values of its neighborhood pixel points, the weight value output by the preset image processing model for each neighborhood pixel point, and the image processing coefficients corresponding to those points may include: accumulating the products of each neighborhood pixel value and its corresponding weight value, multiplying the accumulated sum by the contrast coefficient, and then adding the luminance coefficient, to obtain the pixel value of each pixel point of the target image.
It should be noted that the pixel values of the neighborhood pixel points input into the preset image processing model may be of types such as RGB, YUV, RGB-D, Y, RGB-W, or RGB-IR. When the input pixel values are color values comprising multiple color channels (RGB, YUV, RGB-D, RGB-W, RGB-IR, and so on), the color values of each color channel may be input into the preset image processing model in turn to obtain the weight values and corresponding image processing coefficients for each channel.
For example, when the pixel values of the neighborhood pixel points are RGB pixel values, the color values of each neighborhood pixel point's R channel may be input into the preset image processing model, which outputs the weight value for each R-channel color value and the image processing coefficients corresponding to the R channel; the color values of the G channel and the B channel are then input in the same way to obtain their weight values and image processing coefficients, and finally step 104 is executed to obtain the target image corresponding to the original image.
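The per-channel procedure above can be sketched as a loop over the R, G, and B channels. The callable `model` here is a stand-in for the preset image processing model; its interface (one channel's color values in, weights plus coefficients out) is an assumption made for illustration.

```python
def resample_rgb_pixel(neigh_rgb, model):
    """Resample one target pixel channel by channel.

    neigh_rgb: the (r, g, b) tuples of the neighborhood pixel points.
    model: stand-in for the preset image processing model; given one
    channel's colour values it returns (weights, alpha, beta).
    """
    out = []
    for c in range(3):                      # R, G, B in turn
        s = [p[c] for p in neigh_rgb]       # this channel's colour values
        w, alpha, beta = model(s)           # per-channel weights and coefficients
        out.append(alpha * sum(wi * si for wi, si in zip(w, s)) + beta)
    return tuple(out)
```

A model that returns uniform weights with alpha = 1 and beta = 0 turns this into a per-channel box filter, which is a useful sanity check when wiring up a real learned model.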
In the embodiments of the application, a preset image processing model for resampling an original image, and the neighborhood pixel selection type corresponding to that model, are determined according to the to-be-processed type of the original image; the original image is then resampled according to the neighborhood pixel selection type and the preset image processing model to obtain the corresponding target image. When the original image requires image processing under different application scenarios (that is, different to-be-processed types), it can be resampled using the preset image processing model determined by its to-be-processed type and the neighborhood pixel selection type corresponding to that model. The method can therefore be applied to different image processing application scenarios: whatever image processing achievable through resampling needs to be performed on the original image, the original image can be resampled with the preset image processing model corresponding to that scenario and its associated neighborhood pixel selection type. This solves the problem of the poor universality of conventional resampling methods.
In order to increase the speed of image resampling, in some other embodiments of the present application, before the above sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image, the method may further include: and carrying out block averaging processing on the original image to obtain the original image subjected to block averaging processing.
In the embodiment of the application, the data size of the original image after the block averaging processing is smaller than that of the original image before the block averaging processing, so that the calculation amount for performing image resampling on the original image after the block averaging processing is smaller than that for directly performing image resampling on the original image before the block averaging processing, and the speed of performing image resampling on the original image is improved conveniently.
Specifically, when block averaging is performed on the original image, the block-averaged original image can be calculated using the formula

r_avg = (r_0 + r_1 + … + r_(n-1)) / n

where r_0 to r_(n-1) denote the pixel values of the n pixel points of one pixel block subjected to block averaging in the original image, r_avg denotes the pixel value of the pixel point corresponding to that block in the block-averaged original image, and n is an integer greater than or equal to 2.
That is to say, when performing block averaging, the original image may be divided into a plurality of pixel blocks; the pixel values of the pixel points in each block are accumulated, the accumulated value is divided by the number of pixel points in the block to obtain the block pixel value after averaging, and the block-averaged original image is generated from these block pixel values.
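The block averaging step can be sketched compactly with NumPy. Cropping to whole blocks when the image size does not divide evenly is an added assumption, as is the handling of multi-channel inputs; the patent only specifies the per-block mean.

```python
import numpy as np

def block_average(image, block):
    """Average each (block x block) tile of the image into one pixel.

    Works for grayscale (H, W) or multi-channel (H, W, C) arrays;
    channels are averaged independently, matching the patent's RGB
    example. Sizes that do not divide evenly are cropped (assumption).
    """
    h, w = image.shape[:2]
    h2, w2 = h - h % block, w - w % block          # crop to whole blocks
    img = np.asarray(image[:h2, :w2], dtype=float)
    if img.ndim == 2:
        img = img[:, :, None]
    # axes 1 and 3 index rows/cols inside each block; average over them
    tiles = img.reshape(h2 // block, block, w2 // block, block, -1)
    return tiles.mean(axis=(1, 3)).squeeze()
```

On a 4x4 image with 2x2 blocks this produces a 2x2 result whose entries are the four block means, shrinking the data fed into the resampling stage by a factor of four.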
It should be noted that, when performing block averaging processing on the original image, block averaging processing may be performed on color values of different color channels of each pixel point in the pixel block, respectively, to obtain the original image after the block averaging processing.
For example, as shown in fig. 3, a region of the original image 31 may be divided into 9 pixel blocks k_0 to k_8. For each of these blocks, the pixel values of the three RGB color channels of its pixel points are accumulated separately, and each accumulated value is divided by the number of pixel points in the block to obtain a block pixel value composed of the averaged color values of the three RGB channels; the block-averaged original image is then generated from these block pixel values.
It should be noted that other block averaging processing methods are also applicable to the present application, and the present application is not limited thereto.
Correspondingly, in step 102, the performing of pixel point sampling on the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image may include: sampling pixel points of the block-averaged original image according to the neighborhood pixel point selection type, to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image.
According to the embodiment of the application, block averaging processing is performed on the original image to obtain the block-averaged original image, and pixel point sampling is then performed on the block-averaged original image to obtain the plurality of neighborhood pixel points corresponding to each pixel point of the target image. That is, the input original image is first condensed by a preliminary extraction, which reduces the data volume of the original image subjected to image resampling and thereby reduces the calculation amount of image resampling.
In some embodiments of the present application, before the inputting the pixel values of the plurality of neighborhood pixels respectively corresponding to each pixel of the target image into the preset image processing model, the method may further include: and training the image processing model to be trained to obtain a preset image processing model corresponding to each type to be processed.
Specifically, as shown in fig. 4, the training of the image processing model to be trained may include: step 401 to step 405.
Step 401, acquiring a plurality of sets of original sample images and target sample images corresponding to target processing types marked in advance by the original sample images.
The target sample image is an image which is obtained by performing image processing on an original sample image according to a pre-marked target processing type and meets the requirement of the pre-marked target processing type. For example, when the pre-marked target processing type is defect detection, the target sample image may be an image obtained by performing enhancement processing on a defective region of the original sample image and performing blurring processing on a non-defective region of the original sample image.
It should be noted that, in the embodiment of the present application, in the multiple sets of original sample images, each set of original sample image corresponds to a type to be processed of an original image, types to be processed of different sets of original sample images are different, and each set of original sample image includes multiple original sample images. Through a plurality of groups of original sample images, image processing models respectively corresponding to a plurality of types to be processed can be obtained through training.
Step 402, sampling pixel points of the original sample image according to the neighborhood pixel point selection type to be confirmed, and obtaining a plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed.
Specifically, the selection type of the neighborhood pixel to be confirmed may include: a distance nearest selection type, a square selection type, and a cross selection type.
Correspondingly, the pixel point sampling of the original sample image according to the neighborhood pixel point selection type to be confirmed may include: sampling pixel points of the original sample image according to a neighborhood pixel point selection type to be confirmed, which is one of the nearest selection type, the square selection type, and the cross selection type.
It should be noted that, the above is only an example of the selection type of the neighborhood pixel to be confirmed, and in other embodiments of the present application, the pixel sampling may be performed on the original sample image according to the neighborhood pixel selection method of other neighborhood pixel selection types.
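As an illustrative sketch (the (dy, dx) offset convention and the radius parameter are assumptions made here, not part of the patent), the three neighborhood pixel point selection types can be expressed as sets of offsets around a target pixel's source position:

```python
def neighborhood_offsets(selection_type, radius=1):
    """Return (dy, dx) offsets for a neighborhood selection type.
    'nearest' = the single closest source pixel;
    'square'  = the full (2*radius+1)^2 window;
    'cross'   = the center plus horizontal and vertical arms."""
    if selection_type == "nearest":
        return [(0, 0)]
    if selection_type == "square":
        return [(dy, dx)
                for dy in range(-radius, radius + 1)
                for dx in range(-radius, radius + 1)]
    if selection_type == "cross":
        offsets = [(0, 0)]
        for d in range(1, radius + 1):
            offsets += [(-d, 0), (d, 0), (0, -d), (0, d)]
        return offsets
    raise ValueError(f"unknown selection type: {selection_type}")
```

With radius 1, the square type yields 9 neighborhood pixel points and the cross type yields 5, matching the intuition behind the names.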
Step 403, inputting a plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed into the image processing model to be trained, and outputting a weight value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points and an image processing coefficient corresponding to the plurality of sample neighborhood pixel points by the image processing model to be trained.
It should be noted that, in the embodiment of the present application, each pixel point of a large number of sample images to be confirmed is used to train an image processing model to be trained, so that the obtained preset image processing model can perform image processing corresponding to the image processing type of the preset image processing model on different images.
For example, in some embodiments of the present application, the image processing model to be trained may be a multi-layered perceptron model to be trained.
Correspondingly, the inputting of the plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed into the image processing model to be trained, and the outputting, by the image processing model to be trained, of the weight value corresponding to each of the plurality of sample neighborhood pixel points and the image processing coefficient corresponding to the plurality of sample neighborhood pixel points, may include: inputting the plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed into the multilayer perceptron model to be trained, and outputting, by the multilayer perceptron model to be trained, a weight value corresponding to each of the plurality of sample neighborhood pixel points, and a contrast coefficient and a brightness coefficient corresponding to the plurality of sample neighborhood pixel points.
It should be noted that the multilayer perceptron model to be trained may include a plurality of hidden layers, each hidden layer may include a plurality of nodes, and the calculation formula of each node may be p = f(Σ a_i · p_i + b).
Where f denotes the mapping function of the node, and p_i represents an input of the node; the inputs of a node may be the pixel values of the plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed, or the outputs of previous nodes. a_i represents a weight parameter of the node, b represents the bias parameter of the node, and p represents the output of the node.
It should be noted that the mapping function of the node may include a linear mapping function, an absolute value mapping function, a relu mapping function, a tanh mapping function, a sigmoid mapping function, or other mapping functions.
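The node formula p = f(Σ a_i · p_i + b) together with the mapping functions listed above can be sketched in Python as follows (a minimal illustration; the function name and signature are assumptions, not the patent's):

```python
import math

def node_output(inputs, weights, bias, activation="relu"):
    """One perceptron node: p = f(sum(a_i * p_i) + b)."""
    z = sum(a * p for a, p in zip(weights, inputs)) + bias
    if activation == "linear":
        return z
    if activation == "abs":
        return abs(z)
    if activation == "relu":
        return max(0.0, z)
    if activation == "tanh":
        return math.tanh(z)
    if activation == "sigmoid":
        return 1.0 / (1.0 + math.exp(-z))
    raise ValueError(f"unknown mapping function: {activation}")
```

In a full model, the outputs of one hidden layer's nodes would be fed as the inputs of the next layer's nodes.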
Based on the above description of the structure of the multi-layered perceptron model to be trained, in some embodiments of the present application, the adjusting the parameters of the image processing model to be trained may include: adjusting the number of layers of the hidden layer of the multilayer perceptron; and/or adjusting the number of nodes of each hidden layer of the multilayer perceptron; and/or adjusting node parameters and mapping functions of the multi-layer perceptron.
Step 404, respectively calculating a pixel value of each pixel point of the sample image to be confirmed according to the pixel values of the plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed, the weight value corresponding to each sample neighborhood pixel point of the plurality of sample neighborhood pixel points output by the image processing model to be trained, and the image processing coefficients corresponding to the plurality of sample neighborhood pixel points, so as to obtain the sample image to be confirmed corresponding to the original sample image.
Specifically, the method for calculating the pixel value of each pixel point of the sample image to be confirmed may refer to step 104, which is not described herein again.
Step 405, calculating the similarity between the sample image to be confirmed and the target sample image. If the similarity is smaller than a similarity threshold, the parameters of the image processing model to be trained are adjusted, or the neighborhood pixel point selection type to be confirmed is adjusted, and the original sample image is reused to train the image processing model to be trained, until the similarity is greater than or equal to the similarity threshold, or until the number of times the original sample image has been reused for training is greater than or equal to a first time threshold, in which case the next group of original sample images is used to train the image processing model to be trained. When the total number of training times of the image processing model to be trained is greater than or equal to a second time threshold, or the change rate of the similarity is smaller than a change rate threshold, the image processing model to be trained and the neighborhood pixel point selection type to be confirmed are determined as the preset image processing model and the neighborhood pixel point selection type corresponding to the target processing type pre-marked for the original sample image.
For example, pixel point sampling may be performed on 100 original sample images according to the neighborhood pixel point selection type to be confirmed, to obtain the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed. The plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed are input into the image processing model to be trained, which outputs the weight value corresponding to each of the plurality of sample neighborhood pixel points and the image processing coefficient corresponding to the plurality of sample neighborhood pixel points. Then, the pixel value of each pixel point of the sample image to be confirmed is calculated according to the pixel values of the plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed, the weight values output by the image processing model to be trained, and the image processing coefficients corresponding to the plurality of sample neighborhood pixel points, so as to obtain the 100 sample images to be confirmed corresponding to the original sample images. The similarity between each sample image to be confirmed and the corresponding target sample image is then calculated; when the similarity is smaller than the similarity threshold, the next sample image is used to train the image processing model to be trained, until the total number of training times of the image processing model to be trained is greater than or equal to the time threshold, or the change rate of the similarity is smaller than the change rate threshold, thereby obtaining the preset image processing model.
The similarity between the sample image to be confirmed and the target sample image may be calculated by a structural similarity measurement method, a cosine similarity calculation method, an image similarity calculation method based on a histogram, and the like, which is not limited in the present application.
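As one hedged illustration of the similarity options listed above, cosine similarity between two images (treated here as flat pixel vectors — a representation chosen for this sketch, not prescribed by the patent) can be written as:

```python
import math

def cosine_similarity(img_a, img_b):
    """Cosine similarity between two images given as flat pixel-value lists.
    Returns 1.0 for identical (nonzero) images, 0.0 for orthogonal vectors."""
    dot = sum(a * b for a, b in zip(img_a, img_b))
    norm_a = math.sqrt(sum(a * a for a in img_a))
    norm_b = math.sqrt(sum(b * b for b in img_b))
    return dot / (norm_a * norm_b)
```

A structural similarity (SSIM) or histogram-based measure would compare local statistics instead of raw vectors, but follows the same pattern of reducing two images to a single score checked against the similarity threshold.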
In the embodiment of the application, when the similarity is greater than or equal to the similarity threshold, or the number of times the original sample image has been reused to train the image processing model to be trained is greater than or equal to the first time threshold, it indicates that the image processing model can perform image processing on the original sample image relatively accurately; when the total number of training times of the image processing model to be trained is greater than or equal to the second time threshold, or the change rate of the similarity is smaller than the change rate threshold, it indicates that the image processing accuracy of the image processing model to be trained has stabilized. Either condition therefore indicates that the image processing model to be trained has been trained.
In the embodiment of the present application, the method for training the preset image processing model inputs the pixel values of the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed into the image processing model to be trained, and the image processing model to be trained outputs the weight value corresponding to each of the plurality of sample neighborhood pixel points and the image processing coefficient corresponding to the plurality of sample neighborhood pixel points, rather than directly outputting the pixel value of each pixel point. That is, the input data type and the output data type of the trained preset image processing model differ: the input is the pixel values of the plurality of neighborhood pixel points, while the output is the weight values and the image processing coefficients. This is a non-end-to-end parameter training method, which effectively reduces the number of calculation parameters, accelerates the parameter training process, and improves the training efficiency of the model.
It should be noted that, the above is only an example of the image processing model to be trained, and in other embodiments of the present application, the image processing model to be trained may be other non-end-to-end image processing models to be trained, besides the multi-layer perceptron model.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series or combination of acts, it will be appreciated by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may, in accordance with the present invention, occur in other orders.
Fig. 5 shows a schematic structural diagram of an image resampling apparatus 500 provided in an embodiment of the present application, and includes an obtaining unit 501, a sampling unit 502, an output unit 503, and a calculating unit 504.
The acquiring unit 501 is configured to acquire an original image and a type to be processed of the original image, and determine a preset image processing model for resampling the original image and a neighborhood pixel selection type corresponding to the preset image processing model according to the type to be processed of the original image.
The sampling unit 502 is configured to perform pixel sampling on the original image according to the neighborhood pixel selection type, and obtain a plurality of neighborhood pixels corresponding to each pixel of the target image.
An output unit 503, configured to input pixel values of the multiple neighborhood pixels corresponding to each pixel of the target image into the preset image processing model, and output, by the preset image processing model, a weight value corresponding to each neighborhood pixel in the multiple neighborhood pixels and an image processing coefficient corresponding to the multiple neighborhood pixels.
A calculating unit 504, configured to calculate, according to the pixel values of the multiple neighborhood pixels corresponding to each pixel of the target image, the weight value corresponding to each neighborhood pixel in the multiple neighborhood pixels output by the preset image processing model, and the image processing coefficients corresponding to the multiple neighborhood pixels, the pixel value of each pixel of the target image, respectively, so as to obtain a target image corresponding to the original image.
In some embodiments of the present application, the sampling unit 502 is further configured to perform block averaging on the original image to obtain an original image after block averaging; and sampling pixel points of the original image after the block average processing according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image.
In some embodiments of the present application, the image resampling apparatus may further include a training unit, configured to train an image processing model to be trained, so as to obtain a preset image processing model corresponding to each type to be processed.
Specifically, the training unit is configured to: obtain a plurality of groups of original sample images and target sample images corresponding to the target processing types pre-marked for the original sample images; sample pixel points of the original sample image according to the neighborhood pixel point selection type to be confirmed, to obtain a plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed; input the pixel values of the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed into the image processing model to be trained, and output, by the image processing model to be trained, a weight value corresponding to each of the plurality of sample neighborhood pixel points and an image processing coefficient corresponding to the plurality of sample neighborhood pixel points; calculate the pixel value of each pixel point of the sample image to be confirmed according to the pixel values of the plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed, the weight value corresponding to each sample neighborhood pixel point output by the image processing model to be trained, and the image processing coefficients corresponding to the plurality of sample neighborhood pixel points, so as to obtain the sample image to be confirmed corresponding to the original sample image; and calculate the similarity between the sample image to be confirmed and the target sample image. If the similarity is smaller than a similarity threshold, the parameters of the image processing model to be trained or the neighborhood pixel point selection type to be confirmed are adjusted, and the original sample image is reused for training, until the similarity is greater than or equal to the similarity threshold, or until the number of times the original sample image has been reused for training is greater than or equal to a first time threshold, in which case the next group of original sample images is used for training. When the total number of training times of the image processing model to be trained is greater than or equal to a second time threshold, or the change rate of the similarity is smaller than a change rate threshold, the image processing model to be trained and the neighborhood pixel point selection type to be confirmed are determined as the preset image processing model and the neighborhood pixel point selection type corresponding to the target processing type pre-marked for the original sample image.
In some embodiments of the application, when the image processing model to be trained is a multilayer perceptron model to be trained, the output unit 503 is further configured to input the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed into the multilayer perceptron model to be trained, and output, by the multilayer perceptron model to be trained, a weight value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points, and a contrast coefficient and a brightness coefficient corresponding to the plurality of sample neighborhood pixel points.
In some embodiments of the present application, the image resampling apparatus may further include an adjusting unit, configured to adjust the number of layers of the hidden layer of the multilayer perceptron; and/or adjusting the number of nodes of each hidden layer of the multilayer perceptron; and/or adjusting node parameters and mapping functions of the multi-layer perceptron.
In some embodiments of the application, the calculating unit 504 is further configured to accumulate the products of the pixel values of the plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed and the weight values corresponding to each sample neighborhood pixel point output by the image processing model to be trained, multiply the accumulated value by the contrast coefficient, and add the brightness coefficient to the product, so as to obtain the pixel value of each pixel point of the sample image to be confirmed.
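The per-pixel computation just described — accumulate weighted neighborhood values, scale by the contrast coefficient, then add the brightness coefficient — can be sketched in Python as follows (a minimal illustration; the function name and argument names are choices made here, not the patent's):

```python
def resampled_pixel(neighbor_values, weights, contrast, brightness):
    """Target pixel value from its neighborhood:
    contrast * sum(w_i * p_i) + brightness."""
    accumulated = sum(p * w for p, w in zip(neighbor_values, weights))
    return contrast * accumulated + brightness
```

For instance, with neighborhood values [10, 20], weights [0.5, 0.5], contrast 2.0, and brightness 1.0, the weighted sum is 15, so the output pixel value is 31.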
In some embodiments of the application, the sampling unit 502 is further configured to perform pixel point sampling on the original sample image according to a neighborhood pixel point selection type to be confirmed selected from the nearest selection type, the square selection type, and the cross selection type.
It should be noted that, for convenience and simplicity of description, the specific working process of the image resampling apparatus 500 described above may refer to the corresponding process of the method described in fig. 1 to fig. 4, and is not described herein again.
As shown in fig. 6, the present application provides a terminal for implementing the image resampling method, where the terminal may include: a processor 61, a memory 62, one or more input devices 63 (only one shown in fig. 6), and one or more output devices 64 (only one shown in fig. 6). The processor 61, memory 62, input device 63 and output device 64 are connected by a bus 65.
It should be understood that, in the embodiment of the present application, the processor 61 may be a Central Processing Unit (CPU); the processor may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 63 may include a virtual keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 64 may include a display, a speaker, etc.
The memory 62 stores a computer program that can be executed by the processor 61, and the computer program is, for example, a program of an image resampling method. The processor 61 implements the steps of the image resampling method embodiments, such as steps 101 to 104 shown in fig. 1, when executing the computer program. Alternatively, the processor 61 may implement the functions of the units in the device embodiment when executing the computer program, for example, the functions of the units 501 to 504 shown in fig. 5.
The computer program may be divided into one or more modules/units, which are stored in the memory 62 and executed by the processor 61 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the first terminal for image resampling. For example, the computer program may be divided into an acquisition unit, a sampling unit, an output unit, and a calculation unit, and each unit specifically functions as follows:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring an original image and a type to be processed of the original image, and determining a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model according to the type to be processed of the original image.
And the sampling unit is used for sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image.
And the output unit is used for inputting the pixel values of the plurality of neighborhood pixels corresponding to each pixel of the target image into the preset image processing model, and outputting the weight value corresponding to each neighborhood pixel in the plurality of neighborhood pixels and the image processing coefficient corresponding to the plurality of neighborhood pixels by the preset image processing model.
And the calculating unit is used for calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points output by the preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiment of the present application provides a computer program product, which when running on a terminal device, enables the terminal device to implement the steps of the image resampling method in the above embodiments when executed.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal are merely illustrative, and for example, the division of the above-described modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the method embodiments described above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content included in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunication signals in accordance with legislation and patent practice.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image resampling method, characterized by comprising:
acquiring an original image and a type to be processed of the original image, and determining a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model according to the type to be processed of the original image;
sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image;
inputting pixel values of a plurality of neighborhood pixel points corresponding to each pixel point of a target image into the preset image processing model, and outputting a weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points and an image processing coefficient corresponding to the plurality of neighborhood pixel points by the preset image processing model;
and respectively calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points respectively corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points output by the preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
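The four claimed steps can be sketched as follows. This is a minimal illustrative sketch in Python/NumPy, not the patented implementation: the 2x2 "square" neighborhood, the grayscale input, and the `uniform_model` stand-in (uniform weights with identity contrast/brightness coefficients, in place of the trained preset image processing model) are all assumptions.

```python
import numpy as np

def resample(original, out_h, out_w, model):
    """For each target pixel: gather neighborhood pixels from the original
    image, ask the model for per-neighbor weights plus contrast and
    brightness coefficients, and combine them into the target pixel value."""
    in_h, in_w = original.shape
    target = np.zeros((out_h, out_w), dtype=np.float32)
    for ty in range(out_h):
        for tx in range(out_w):
            # Map the target pixel back to source coordinates.
            y0 = int(ty * in_h / out_h)
            x0 = int(tx * in_w / out_w)
            # Assumed 2x2 "square" neighborhood, clamped at the border.
            neigh = np.array(
                [original[min(y0 + dy, in_h - 1), min(x0 + dx, in_w - 1)]
                 for dy in (0, 1) for dx in (0, 1)],
                dtype=np.float32)
            weights, contrast, brightness = model(neigh)
            target[ty, tx] = np.dot(weights, neigh) * contrast + brightness
    return target

def uniform_model(neigh):
    # Hypothetical stand-in for the trained model: uniform weights,
    # identity contrast (1.0) and brightness (0.0) coefficients.
    return np.full(len(neigh), 1.0 / len(neigh)), 1.0, 0.0

img = np.arange(16, dtype=np.float32).reshape(4, 4)
out = resample(img, 2, 2, uniform_model)  # 4x4 -> 2x2 downsample
```

With the uniform stand-in, each target pixel is simply the mean of its 2x2 source block; a trained model would instead emit content-dependent weights and coefficients.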
2. The image resampling method as claimed in claim 1, wherein before the performing pixel sampling on the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image, the method further comprises:
carrying out block average processing on the original image to obtain an original image subjected to block average processing;
and the performing pixel sampling on the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image comprises:
sampling pixel points of the original image subjected to block average processing according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image.
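The block average pre-processing of claim 2 can be illustrated as below; the block size (2) and the edge handling (ragged edges dropped) are assumptions, as the claim fixes neither.

```python
import numpy as np

def block_average(img, block=2):
    """Replace each block x block tile of the image with the tile's mean.
    Block size and edge handling are assumptions not fixed by the claim."""
    h, w = img.shape
    h2, w2 = h - h % block, w - w % block  # drop ragged edges for simplicity
    tiles = img[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    means = tiles.mean(axis=(1, 3))
    # Broadcast each tile mean back over the tile it came from.
    return np.repeat(np.repeat(means, block, axis=0), block, axis=1)

img = np.arange(16, dtype=np.float32).reshape(4, 4)
smoothed = block_average(img)
```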
3. The image resampling method as claimed in claim 1 or 2, wherein before inputting the pixel values of the plurality of neighborhood pixels respectively corresponding to each pixel of the target image into the preset image processing model, the method comprises: training the image processing model to be trained to obtain a preset image processing model corresponding to each type to be processed;
the training of the image processing model to be trained comprises:
acquiring a plurality of groups of original sample images and target sample images corresponding to target processing types marked in advance by the original sample images;
sampling pixel points of the original sample image according to the neighborhood pixel point selection type to be confirmed to obtain a plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed;
inputting pixel values of a plurality of sample neighborhood pixel points corresponding to each pixel point of a sample image to be confirmed into the image processing model to be trained, and outputting a weight value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points and an image processing coefficient corresponding to the plurality of sample neighborhood pixel points by the image processing model to be trained;
calculating the pixel value of each pixel point of the sample image to be confirmed respectively according to the pixel values of the sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed respectively, the weight value corresponding to each sample neighborhood pixel point in the sample neighborhood pixel points output by the image processing model to be trained and the image processing coefficients corresponding to the sample neighborhood pixel points, so as to obtain the sample image to be confirmed corresponding to the original sample image;
calculating the similarity between the sample image to be confirmed and the target sample image, if the similarity is smaller than a similarity threshold, adjusting parameters of the image processing model to be trained or adjusting the selection type of the neighborhood pixel points to be confirmed, training the image processing model to be trained by using the original sample image again until the similarity is larger than or equal to the similarity threshold or the change rate of the similarity is smaller than a change rate threshold, determining the image processing model to be trained as a preset image processing model corresponding to the target processing type marked in advance by the original sample image, and determining the selection type of the neighborhood pixel points to be confirmed as the selection type of the neighborhood pixel points corresponding to the preset image processing model.
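The claim-3 training loop condenses to the sketch below. The similarity metric, the `fit_step` update, and both thresholds are placeholders: the claim requires a similarity comparison and a stop condition but leaves the metric and the update rule open.

```python
import numpy as np

def similarity(a, b):
    # Placeholder metric mapping mean absolute error into (0, 1];
    # the claim does not specify how similarity is measured.
    return 1.0 / (1.0 + float(np.abs(a - b).mean()))

def train(model, fit_step, pairs, sim_threshold=0.95,
          rate_threshold=1e-4, max_rounds=100):
    """Resample each original sample with the current model, compare with
    the pre-labelled target sample, and keep adjusting (parameters or
    neighborhood selection type) until the similarity threshold is met
    or the similarity change rate falls below the rate threshold."""
    prev_sim = 0.0
    for _ in range(max_rounds):
        sims = [similarity(model(orig), tgt) for orig, tgt in pairs]
        sim = float(np.mean(sims))
        if sim >= sim_threshold or abs(sim - prev_sim) < rate_threshold:
            return model, sim  # converged: freeze as the preset model
        prev_sim = sim
        fit_step(model, pairs)  # stand-in for the actual parameter update
    return model, prev_sim

# Toy run with an identity "model" that already matches its targets.
pairs = [(np.ones((2, 2)), np.ones((2, 2)))]
trained, sim = train(lambda x: x, lambda m, p: None, pairs)
```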
4. The image resampling method as claimed in claim 3, wherein the image processing model to be trained is a multi-layered perceptron model to be trained;
the inputting of the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed into the image processing model to be trained, and the outputting of the weighted value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points and the image processing coefficient corresponding to the plurality of sample neighborhood pixel points by the image processing model to be trained include:
inputting the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed into the multilayer perceptron model to be trained, and outputting the weighted value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points, and the contrast coefficient and the brightness coefficient corresponding to the plurality of sample neighborhood pixel points by the multilayer perceptron model to be trained.
5. The image resampling method as recited in claim 4, wherein the adjusting the parameters of the image processing model to be trained comprises:
adjusting the number of layers of the hidden layer of the multilayer perceptron; and/or
Adjusting the number of nodes of each hidden layer of the multilayer perceptron; and/or
And adjusting the node parameters and the mapping function of the multilayer perceptron.
6. The image resampling method as claimed in claim 4, wherein said calculating the pixel value of each pixel point of the sample image to be confirmed according to the pixel values of the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed, the weight value corresponding to each sample neighborhood pixel point of the plurality of sample neighborhood pixel points output by the image processing model to be trained, and the image processing coefficients corresponding to the plurality of sample neighborhood pixel points respectively comprises:
and accumulating products of pixel values of a plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed and a weighted value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points output by the image processing model to be trained, multiplying the accumulated value by the contrast coefficient, and then summing the multiplied value and the brightness coefficient to obtain the pixel value of each pixel point of the sample image to be confirmed.
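Claim 6's combination rule reduces to: weighted sum of neighborhood pixel values, scaled by the contrast coefficient, then offset by the brightness coefficient. A sketch with made-up values (a trained model would supply the weights and coefficients):

```python
import numpy as np

def target_pixel(neighbors, weights, contrast, brightness):
    """Claim 6: accumulate the products of neighborhood pixel values and
    their weights, multiply by the contrast coefficient, then add the
    brightness coefficient."""
    return float(np.dot(weights, neighbors) * contrast + brightness)

# Illustrative values only.
val = target_pixel(neighbors=np.array([10.0, 20.0, 30.0, 40.0]),
                   weights=np.array([0.25, 0.25, 0.25, 0.25]),
                   contrast=2.0, brightness=5.0)
# weighted sum 25.0, times contrast 2.0, plus brightness 5.0
```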
7. The image resampling method as claimed in claim 3, wherein the neighborhood pixel point selection type to be confirmed comprises: a nearest selection type, a square selection type and a cross selection type;
and the performing pixel sampling on the original sample image according to the neighborhood pixel point selection type to be confirmed comprises: sampling pixel points of the original sample image according to the neighborhood pixel point selection type to be confirmed, which is one of the nearest selection type, the square selection type and the cross selection type.
8. An image resampling apparatus, characterized by comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an original image and a type to be processed of the original image, and determining a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model according to the type to be processed of the original image;
the sampling unit is used for sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image;
the output unit is used for inputting the pixel values of a plurality of neighborhood pixel points corresponding to each pixel point of the target image into the preset image processing model, and outputting the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points and the image processing coefficient corresponding to the plurality of neighborhood pixel points by the preset image processing model;
and the calculating unit is used for calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points output by the preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201911003561.6A 2019-10-21 2019-10-21 Image resampling method, device, terminal and computer readable storage medium Active CN110738625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911003561.6A CN110738625B (en) 2019-10-21 2019-10-21 Image resampling method, device, terminal and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN110738625A CN110738625A (en) 2020-01-31
CN110738625B true CN110738625B (en) 2022-03-11

Family

ID=69270834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911003561.6A Active CN110738625B (en) 2019-10-21 2019-10-21 Image resampling method, device, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110738625B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021254381A1 (en) * 2020-06-17 2021-12-23 京东方科技集团股份有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN112989911A (en) * 2020-12-10 2021-06-18 奥比中光科技集团股份有限公司 Pedestrian re-identification method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831579A (en) * 2011-06-16 2012-12-19 富士通株式会社 Text enhancement method and device, text extraction method and device
CN104103082A (en) * 2014-06-06 2014-10-15 华南理工大学 Image saliency detection method based on region description and priori knowledge
CN105701773A (en) * 2014-11-28 2016-06-22 联芯科技有限公司 Method and device for processing image rapidly
CN105975912A (en) * 2016-04-27 2016-09-28 天津大学 Hyperspectral image nonlinearity solution blending method based on neural network
CN106373095A (en) * 2016-08-29 2017-02-01 广东欧珀移动通信有限公司 Image processing method and terminal
CN107292828A (en) * 2016-03-31 2017-10-24 展讯通信(上海)有限公司 The treating method and apparatus of image border
CN109903224A (en) * 2019-01-25 2019-06-18 珠海市杰理科技股份有限公司 Image-scaling method, device, computer equipment and storage medium
CN110245607A (en) * 2019-06-13 2019-09-17 Oppo广东移动通信有限公司 Eyeball tracking method and Related product


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Improved moving object detection method based on local joint features; Wang Shunfei; Chinese Journal of Scientific Instrument; 2015-10-15; full text *
Image smoke detection using pyramid texture and edge features; Li Hongdi; Journal of Image and Graphics; 2015-06-16; full text *

Also Published As

Publication number Publication date
CN110738625A (en) 2020-01-31

Similar Documents

Publication Publication Date Title
CN108921806B (en) Image processing method, image processing device and terminal equipment
CN107403421B (en) Image defogging method, storage medium and terminal equipment
CN108664981B (en) Salient image extraction method and device
US7149355B2 (en) Image processing apparatus, image processing method, image processing program, and computer-readable record medium storing image processing program
CN112862681B (en) Super-resolution method, device, terminal equipment and storage medium
CN111079764B (en) Low-illumination license plate image recognition method and device based on deep learning
CN110738625B (en) Image resampling method, device, terminal and computer readable storage medium
CN111079669A (en) Image processing method, device and storage medium
CN113298728B (en) Video optimization method and device, terminal equipment and storage medium
WO2023005818A1 (en) Noise image generation method and apparatus, electronic device, and storage medium
CN111598869A (en) Method, equipment and storage medium for detecting Mura of display screen
CN108960012B (en) Feature point detection method and device and electronic equipment
CN111429371A (en) Image processing method and device and terminal equipment
KR20210018508A (en) Directional scaling systems and methods
WO2023138540A1 (en) Edge extraction method and apparatus, and electronic device and storage medium
CN111222446B (en) Face recognition method, face recognition device and mobile terminal
CN116645365B (en) Quartz glass detection method, device, equipment and medium based on frequency spectrum
CN116645513A (en) Watermark extraction method, model training method, device, electronic equipment and medium
CN115619678A (en) Image deformation correction method and device, computer equipment and storage medium
CN115471413A (en) Image processing method and device, computer readable storage medium and electronic device
CN111754411B (en) Image noise reduction method, image noise reduction device and terminal equipment
Zhiwei et al. An image zooming technique based on the relative color difference of pixels
CN114764839A (en) Dynamic video generation method and device, readable storage medium and terminal equipment
CN114596210A (en) Noise estimation method, device, terminal equipment and computer readable storage medium
JP2020191030A (en) Image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant