CN110738625A - Image resampling method, device, terminal and computer readable storage medium - Google Patents


Info

Publication number
CN110738625A
Authority
CN
China
Prior art keywords
image
sample
image processing
neighborhood
pixel
Prior art date
Legal status
Granted
Application number
CN201911003561.6A
Other languages
Chinese (zh)
Other versions
CN110738625B (en)
Inventor
张建中
陈岩
蒋燚
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911003561.6A
Publication of CN110738625A
Application granted
Publication of CN110738625B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application belongs to the technical field of image processing, and in particular relates to an image resampling method, apparatus, terminal, and computer-readable storage medium. The method includes: determining, according to a to-be-processed type of an original image, a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model; and then resampling the original image according to the neighborhood pixel point selection type and the preset image processing model to obtain a target image corresponding to the original image, thereby solving the problem of poor universality of traditional image resampling methods.

Description

Image resampling method, device, terminal and computer readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image resampling method, apparatus, terminal, and computer-readable storage medium.
Background
Resampling is a basic task of image processing and is used to modify information in an image, such as adding or deleting pixel points.
However, existing resampling methods can each be applied only to a single type of image processing application scenario, for example, only to image processing application scenarios requiring image enhancement, or only to image processing application scenarios requiring image blurring, which has the disadvantage of poor universality.
Disclosure of Invention
The embodiments of the present application provide an image resampling method, apparatus, terminal, and computer-readable storage medium, which can be applied to a variety of image processing application scenarios.
A first aspect of the embodiments of the present application provides an image resampling method, including:
acquiring an original image and a type to be processed of the original image, and determining a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model according to the type to be processed of the original image;
sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image;
inputting pixel values of a plurality of neighborhood pixel points corresponding to each pixel point of a target image into the preset image processing model, and outputting a weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points and an image processing coefficient corresponding to the plurality of neighborhood pixel points by the preset image processing model;
and respectively calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points respectively corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points output by the preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
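The four steps above can be summarized as: sample neighborhoods, query the model for weights and coefficients, then compute a weighted sum per target pixel. Below is a hypothetical end-to-end sketch under the assumption that the preset image processing model is a callable returning per-neighborhood weights plus a contrast coefficient alpha and a brightness coefficient beta; all names are illustrative, since the application does not specify a concrete API.

```python
import numpy as np

def resample(neighborhoods, model):
    """neighborhoods: dict mapping a target pixel coordinate to the array of
    pixel values of its neighborhood pixel points (steps 1-2 of the method)."""
    target = {}
    for coord, s in neighborhoods.items():
        # Step 3: the model outputs a weight per neighborhood pixel point
        # and the image processing coefficients (alpha, beta).
        w, alpha, beta = model(s)
        # Step 4: weighted sum of neighborhood values, scaled and shifted.
        target[coord] = alpha * float(np.dot(w, s)) + beta
    return target
```

A trivial averaging "model" with alpha=2 and beta=1 illustrates the data flow; a trained network would replace it in practice.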
A second aspect of the embodiments of the present application provides an image resampling apparatus, including:
the acquisition unit is used for acquiring an original image and the to-be-processed type of the original image, and for determining, according to the to-be-processed type of the original image, a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model;
the sampling unit is used for sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image;
the output unit is used for inputting the pixel values of a plurality of neighborhood pixel points corresponding to each pixel point of the target image into the preset image processing model, and outputting the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points and the image processing coefficient corresponding to the plurality of neighborhood pixel points by the preset image processing model;
and the calculating unit is used for calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points output by the preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
In the embodiments of the present application, a preset image processing model for resampling an original image and a neighborhood pixel point selection type corresponding to the preset image processing model are determined according to the to-be-processed type of the original image; the original image is then resampled according to the neighborhood pixel point selection type and the preset image processing model to obtain a target image corresponding to the original image. When the original image requires image processing under different application scenarios (that is, different to-be-processed types of the original image), image resampling can be performed on the original image according to the preset image processing model determined by its to-be-processed type and the neighborhood pixel point selection type corresponding to that model, so the image resampling method can be applied to different image processing application scenarios. In other words, whatever image processing realizable through image resampling the original image requires, the original image can be resampled using the preset image processing model corresponding to that image processing application scenario and the neighborhood pixel point selection type corresponding to that model, which solves the problem of poor universality of traditional image resampling methods.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of an image resampling method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of pixel sampling performed on an original image according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a block averaging process provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an implementation process for training an image processing model to be trained according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an image resampling apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
For purposes of clarity and understanding of the objects, aspects, and advantages of the present application, reference is made to the following detailed description in conjunction with the accompanying drawings and examples. It should be understood that the specific examples described herein are intended to be illustrative only and are not intended to be limiting.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "upon", "in response to a determination", or "in response to a detection".
Resampling is a basic task of image processing and is used to change information in an image, for example, by adding or deleting pixel points; resampling of the image is also required when the image is rotated, translated, scaled, and the like.
However, existing resampling methods can each be applied only to a single type of image processing application scenario, for example, only to image processing application scenarios requiring image enhancement, or only to image processing application scenarios requiring image blurring, which has the disadvantage of poor universality.
Based on this, the present application provides an image resampling method, apparatus, terminal, and computer-readable storage medium, which may be applicable to a variety of image processing application scenarios.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
Fig. 1 shows a schematic flow chart of an implementation of an image resampling method provided in this embodiment. The method is applied to a terminal, can be executed by an image resampling apparatus configured on the terminal, and is suitable for situations where the universality of image resampling needs to be improved.
Step 101, obtaining an original image and a type to be processed of the original image, and determining a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model according to the type to be processed of the original image.
The to-be-processed types of the original image may include machine vision application types such as face recognition, object detection, and flaw detection, and may also include image processing types for the image such as blurring, enhancement, sharpening, and brightness adjustment.
It should be noted that a preset image processing model can be determined for each to-be-processed type of the original image, and each preset image processing model has a uniquely corresponding neighborhood pixel point selection type, which may be the neighborhood pixel point selection type adopted for sampling pixel points of the original sample images when obtaining the preset image processing model corresponding to that to-be-processed type.
Specifically, please refer to the description of steps 401 to 405 for the training process of the preset image processing model, which is not described herein again.
It should be noted that different image processing needs to be performed on the original image for different to-be-processed types; that is, the preset image processing models determined according to different to-be-processed types differ, and the image processing effects achieved by different preset image processing models also differ.
For example, when the to-be-processed type of the original image is defect detection, the preset image processing model achieves an image processing effect in which defect areas are enhanced and non-defect areas are blurred; if the to-be-processed type is brightness adjustment, the preset image processing model can achieve an image processing effect in which brightness is reduced in highly exposed areas and enhanced in underexposed areas.
Step 102, sampling pixel points of the original image according to the neighborhood pixel point selection type, and obtaining a plurality of neighborhood pixel points corresponding to each pixel point of the target image.
The plurality of neighborhood pixel points corresponding to each pixel point of the target image may be obtained by traversing each pixel point in the original image and, for each traversed pixel point, determining its adjacent pixel points according to the neighborhood pixel point selection type; each resulting group of neighborhood pixel points, composed of a traversed pixel point and its adjacent pixel points, corresponds to one pixel point of the target image.
Specifically, the neighborhood pixel point selection types may include a nearest-distance selection type, a square selection type, and a cross selection type, and the number of neighborhood pixel points can be adjusted according to the actual situation.
For example, as shown in fig. 2, if the neighborhood pixel point selection type is the square selection type and the number of neighborhood pixel points is 9, the original pixel block 202 composed of pixel point 201 of the original image 21 and the pixel points adjacent to pixel point 201 corresponds to pixel point 204 of the target image 22, and by traversing each pixel point of the original image 21, each pixel point of the target image 22 can finally be obtained. For another example, if the neighborhood pixel point selection type is the cross selection type and the number of neighborhood pixel points is 5, the original pixel block 203 composed of pixel point 201 of the original image 21 and the pixel points adjacent to pixel point 201 corresponds to pixel point 205 of the target image 22, and by traversing each pixel point of the original image 21, each pixel point of the target image 22 can likewise be obtained.
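The square and cross selection types described above can be sketched as sets of coordinate offsets around a center pixel. This is a minimal illustration, not the patent's implementation; border handling by clamping is an assumption, since the text does not specify it.

```python
import numpy as np

# 3x3 "square" neighborhood (9 points) and "cross" neighborhood (5 points),
# as offsets (row delta, col delta) relative to the center pixel.
SQUARE_OFFSETS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
CROSS_OFFSETS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def sample_neighborhood(img, r, c, offsets):
    """Collect the neighborhood pixel values around (r, c), clamping
    out-of-range coordinates to the image border."""
    h, w = img.shape[:2]
    return np.array([img[min(max(r + dr, 0), h - 1),
                         min(max(c + dc, 0), w - 1)]
                     for dr, dc in offsets])
```

Traversing every pixel of the original image with one of these offset sets yields the groups of neighborhood pixel points described in step 102.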
Specifically, when determining the coordinates of the pixel point in the target image corresponding to a plurality of neighborhood pixel points, the coordinates in the original image of a certain pixel point in the original pixel block composed of those neighborhood pixel points are converted into the coordinates in the target image of the pixel point corresponding to that original pixel block.
For example, the coordinates in the target image of the pixel point corresponding to the plurality of neighborhood pixel points may be determined by the formulas x_t = x_i × Ratio_x and y_t = y_i × Ratio_y. Here, (x_i, y_i) denotes the coordinates in the original image of a certain pixel point in the original pixel block composed of the plurality of neighborhood pixel points, for example the coordinates in the original image 21 of pixel point 201 or pixel point 206 in the original pixel block 202 in fig. 2; (x_t, y_t) denotes the coordinates in the target image of the pixel point corresponding to that original pixel block, for example the coordinates in the target image 22 of pixel point 204 corresponding to the original pixel block 202 in fig. 2; Ratio_x is the abscissa conversion coefficient that converts the abscissa in the original image of a pixel point of the original pixel block into the abscissa in the target image of the corresponding pixel point; and Ratio_y is the ordinate conversion coefficient that converts the ordinate in the original image of a pixel point of the original pixel block into the ordinate in the target image of the corresponding pixel point.
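The coordinate conversion above amounts to a per-axis scaling. A minimal sketch follows; rounding fractional results to the nearest grid position is an assumption, since the text does not state how non-integer products are handled.

```python
def to_target_coords(x_i, y_i, ratio_x, ratio_y):
    """Map original-image coordinates (x_i, y_i) to target-image
    coordinates via x_t = x_i * Ratio_x and y_t = y_i * Ratio_y."""
    x_t = round(x_i * ratio_x)
    y_t = round(y_i * ratio_y)
    return x_t, y_t
```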
Step 103, inputting pixel values of a plurality of neighborhood pixels corresponding to each pixel of the target image into a preset image processing model, and outputting a weight value corresponding to each neighborhood pixel in the plurality of neighborhood pixels and an image processing coefficient corresponding to the plurality of neighborhood pixels by the preset image processing model.
It should be noted that different preset image processing models output different weight values for each of the plurality of neighborhood pixel points and different image processing coefficients for those neighborhood pixel points, and the pixel values calculated for the pixel points of the target image differ accordingly. Therefore, when the pixel values of the plurality of neighborhood pixel points corresponding to each pixel point of the target image are input into the preset image processing model determined according to the to-be-processed type of the original image, the weight value of each neighborhood pixel point and the image processing coefficients of the plurality of neighborhood pixel points needed to realize the image processing effect corresponding to that to-be-processed type can be obtained.
And 104, respectively calculating the pixel value of each pixel point of the target image according to the pixel values of a plurality of neighborhood pixel points respectively corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in a plurality of neighborhood pixel points output by a preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
Specifically, the pixel value of each pixel point of the target image can be calculated by the formula

pixel value = α × (s_0·w_0 + s_1·w_1 + … + s_{m-1}·w_{m-1}) + β

where α denotes the contrast coefficient, β denotes the brightness coefficient, s_0 to s_{m-1} denote the pixel values of the plurality of neighborhood pixel points corresponding to a pixel point of the target image, w_0 to w_{m-1} denote the weight values output by the preset image processing model for each of those neighborhood pixel points, and m denotes the number of neighborhood pixel points corresponding to the pixel point of the target image, m being an integer greater than or equal to 2.
That is to say, calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points corresponding to each pixel point of the target image, the weight value output by the preset image processing model for each of those neighborhood pixel points, and the image processing coefficients corresponding to those neighborhood pixel points may include: accumulating the products of the pixel values of the neighborhood pixel points and their corresponding weight values, multiplying the accumulated value by the contrast coefficient, and then summing the result with the brightness coefficient to obtain the pixel value of each pixel point of the target image.
It should be noted that the pixel values of the plurality of neighborhood pixel points input into the preset image processing model may be of types such as RGB, YUV, RGB-D, Y, RGB-W, and RGB-IR. When the input pixel values are color values containing a plurality of color channels, such as RGB, YUV, RGB-D, RGB-W, or RGB-IR, the color values of each color channel may be input into the preset image processing model in turn to obtain the weight values and the corresponding image processing coefficients for the color values of each channel.
For example, when the pixel values of the plurality of neighborhood pixel points are RGB pixel values, the color values of the R channel of each neighborhood pixel point may first be input into the preset image processing model, which outputs the weight value for each R-channel color value and the image processing coefficients for the R channel; the color values of the G channel and the B channel of each neighborhood pixel point are then input in the same way to obtain their weight values and image processing coefficients; finally, step 104 is executed to obtain the target image corresponding to the original image.
In the embodiments of the present application, a preset image processing model for resampling an original image and a neighborhood pixel point selection type corresponding to the preset image processing model are determined according to the to-be-processed type of the original image; the original image is then resampled according to the neighborhood pixel point selection type and the preset image processing model to obtain a target image corresponding to the original image. When the original image requires image processing under different application scenarios (that is, different to-be-processed types of the original image), image resampling can be performed on the original image according to the preset image processing model determined by its to-be-processed type and the neighborhood pixel point selection type corresponding to that model, so the image resampling method can be applied to different image processing application scenarios. In other words, whatever image processing realizable through image resampling the original image requires, the original image can be resampled using the preset image processing model corresponding to that image processing application scenario and the neighborhood pixel point selection type corresponding to that model, which solves the problem of poor universality of traditional resampling methods.
In order to increase the speed of image resampling, in other embodiments of the present application, before sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain the plurality of neighborhood pixel points corresponding to each pixel point of the target image, the method may further include performing block averaging on the original image to obtain a block-averaged original image.
In the embodiment of the application, the data size of the original image after the block averaging processing is smaller than that of the original image before the block averaging processing, so that the calculation amount for performing image resampling on the original image after the block averaging processing is smaller than that for directly performing image resampling on the original image before the block averaging processing, and the speed of performing image resampling on the original image is improved conveniently.
Specifically, when performing block averaging on the original image, the block-averaged original image can be calculated by the formula

r_avg = (r_0 + r_1 + … + r_{n-1}) / n

where r_0 to r_{n-1} denote the pixel values of the n pixel points of a pixel block subjected to block averaging in the original image, r_avg denotes the pixel value in the block-averaged original image of the pixel point corresponding to the pixel block formed by r_0 to r_{n-1}, and n is an integer greater than or equal to 2.
That is to say, when performing block averaging, the original picture may be divided into a plurality of pixel blocks, the pixel values of each pixel point in the pixel blocks are accumulated, the accumulated value is divided by the number of pixel points in the block to obtain block pixel values after the block averaging, and the original picture after the block averaging is generated according to the block pixel values after the block averaging.
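The divide-accumulate-average procedure above can be sketched with an array reshape. A minimal illustration, assuming square blocks and trimming any rows or columns that do not fill a whole block (the text does not specify how ragged edges are handled):

```python
import numpy as np

def block_average(img, block):
    """Split the image into block x block pixel blocks, accumulate the
    pixel values inside each block, and divide by the number of pixel
    points per block to get the block-averaged image."""
    h, w = img.shape[:2]
    # Trim edges so dimensions are exact multiples of the block size.
    out = img[:h - h % block, :w - w % block].astype(float)
    # Group pixels into blocks and average within each block; the trailing
    # -1 axis keeps any color channels intact.
    out = out.reshape(h // block, block, w // block, block, -1)
    return out.mean(axis=(1, 3)).squeeze()
```

On a 4x4 grayscale image averaged in 2x2 blocks, each output pixel is the mean of four input pixels, quartering the data volume to be resampled.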
It should be noted that, when performing block averaging processing on the original image, block averaging processing may be performed on color values of different color channels of each pixel point in the pixel block, respectively, to obtain the original image after the block averaging processing.
For example, as shown in fig. 3, a region in the original picture 31 may be divided into nine pixel blocks k_0 to k_8. For each of these nine blocks, the pixel values of the three RGB color channels of its pixel points are accumulated separately, and each accumulated value is divided by the number of pixel points in the block, yielding a block pixel value formed by the block-averaged color values of the three RGB channels; the block-averaged original image is then generated from these block pixel values.
It should be noted that other block averaging processing methods are also applicable to the present application, and the present application is not limited thereto.
Correspondingly, in the step 102, performing pixel sampling on the original image according to the neighborhood pixel selection type to obtain a plurality of neighborhood pixels corresponding to each pixel of the target image, which may include: and sampling pixel points of the original image subjected to block averaging according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image.
In the embodiment of the present application, block averaging is performed on the original image to obtain a block-averaged original image, and pixel point sampling is then performed on the block-averaged original image to obtain the plurality of neighborhood pixel points corresponding to each pixel point of the target image; that is, the input original image is preliminarily extracted, which reduces the data volume of the original image subjected to image resampling and thus the computation amount of image resampling.
In some embodiments of the present application, before inputting the pixel values of the plurality of neighborhood pixel points corresponding to each pixel point of the target image into the preset image processing model, the method may further include training an image processing model to be trained to obtain the preset image processing model corresponding to each to-be-processed type.
Specifically, as shown in fig. 4, the training of the image processing model to be trained may include: step 401 to step 405.
Step 401, acquiring a plurality of sets of original sample images and target sample images corresponding to target processing types marked in advance by the original sample images.
The target sample image is an image which is obtained by performing image processing on an original sample image according to a pre-marked target processing type and meets the requirement of the pre-marked target processing type. For example, when the pre-marked target processing type is defect detection, the target sample image may be an image obtained by performing enhancement processing on a defective region of the original sample image and performing blurring processing on a non-defective region of the original sample image.
It should be noted that, in the embodiment of the present application, each group of original sample images in the multiple groups corresponds to one type of original image to be processed, the types to be processed of different groups of original sample images are different, and each group includes multiple original sample images.
Step 402, sampling pixel points of the original sample image according to the neighborhood pixel point selection type to be confirmed, and obtaining a plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed.
Specifically, the selection type of the neighborhood pixel to be confirmed may include: a distance nearest selection type, a square selection type, and a cross selection type.
Correspondingly, performing pixel point sampling on the original sample image according to the to-be-confirmed neighborhood pixel point selection type may include performing pixel point sampling on the original sample image according to one of the to-be-confirmed neighborhood pixel point selection types among the distance-nearest selection type, the square selection type, and the cross selection type.
It should be noted that the above is only an example of the to-be-confirmed neighborhood pixel point selection types; in other embodiments of the present application, pixel point sampling may also be performed on the original sample image according to other neighborhood pixel point selection types.
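The three selection types named above can be sketched as offset patterns around a target pixel. The exact offset layouts and the `radius` parameter are illustrative assumptions, since the text does not fix them.

```python
def neighborhood_offsets(selection, radius=1):
    """Return (dy, dx) offsets for a neighborhood pixel selection type.

    The layouts below are one plausible reading of the three types named
    in the text, not the patent's normative definition.
    """
    if selection == "square":        # full (2r+1) x (2r+1) window
        return [(dy, dx) for dy in range(-radius, radius + 1)
                          for dx in range(-radius, radius + 1)]
    if selection == "cross":         # center plus horizontal/vertical arms
        offs = [(0, 0)]
        for d in range(1, radius + 1):
            offs += [(-d, 0), (d, 0), (0, -d), (0, d)]
        return offs
    if selection == "nearest":       # the pixel itself plus its 4-neighbours
        return [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    raise ValueError(f"unknown selection type: {selection}")
```

Sampling then reads the original-image pixels at `(y + dy, x + dx)` for each offset to form the neighborhood of a target pixel.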
Step 403, inputting a plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed into the image processing model to be trained, and outputting a weight value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points and an image processing coefficient corresponding to the plurality of sample neighborhood pixel points by the image processing model to be trained.
It should be noted that, in the embodiment of the present application, each pixel point of a large number of sample images to be confirmed is used to train an image processing model to be trained, so that the obtained preset image processing model can perform image processing corresponding to the image processing type of the preset image processing model on different images.
For example, in the embodiments of the present application, the image processing model to be trained may be a multi-layered perceptron model to be trained.
Correspondingly, the above-mentioned a plurality of sample neighborhood pixel points that will correspond respectively with every pixel point of waiting to confirm the sample image input the image processing model that waits to train, by waiting the corresponding weight value of every sample neighborhood pixel point in a plurality of sample neighborhood pixel points of image processing model output that trains, and a plurality of sample neighborhood pixel point correspond the image processing coefficient, can include: inputting a plurality of sample neighborhood pixel points corresponding to each pixel point of a sample image to be confirmed into a multilayer perceptron model to be trained, and outputting a weighted value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points, and a contrast coefficient and a brightness coefficient corresponding to the plurality of sample neighborhood pixel points by the multilayer perceptron model to be trained.
It should be noted that the multi-layer perceptron model to be trained may include a plurality of hidden layers, each hidden layer may include a plurality of nodes, and the calculation formula of each node may be p = f(∑ aᵢ·pᵢ + b).
Where f denotes the mapping function of the node; pᵢ represents the inputs of the node, which may be the pixel values of the plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed, or the outputs of the nodes in the previous layer; aᵢ represents the weight parameters of the node; b represents the bias parameter of the node; and p represents the output of the node.
It should be noted that the mapping function of the node may be a linear mapping function, an absolute value mapping function, a ReLU mapping function, a tanh mapping function, a sigmoid mapping function, or another mapping function.
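A single node following the formula p = f(∑ aᵢ·pᵢ + b) might be sketched as below; the function name and default mapping are illustrative choices, not from the patent.

```python
import math

def node_output(inputs, weights, bias, mapping=math.tanh):
    """One perceptron node: p = f(sum_i(a_i * p_i) + b).

    `inputs` are the p_i (pixel values or previous-layer outputs),
    `weights` the a_i, `bias` the b, and `mapping` the function f.
    """
    return mapping(sum(a * p for a, p in zip(weights, inputs)) + bias)
```

With a linear mapping the node reduces to a plain weighted sum plus bias; swapping in `lambda x: max(0.0, x)` gives the ReLU variant mentioned in the text.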
Based on the above description of the structure of the multi-layer perceptron model to be trained, in some embodiments of the present application, adjusting the parameters of the image processing model to be trained may include adjusting the number of hidden layers of the multi-layer perceptron, and/or adjusting the number of nodes in each hidden layer of the multi-layer perceptron, and/or adjusting the node parameters and mapping functions of the multi-layer perceptron.
Step 404, respectively calculating a pixel value of each pixel point of the sample image to be confirmed according to the pixel values of the plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed, the weight value corresponding to each sample neighborhood pixel point of the plurality of sample neighborhood pixel points output by the image processing model to be trained, and the image processing coefficients corresponding to the plurality of sample neighborhood pixel points, so as to obtain the sample image to be confirmed corresponding to the original sample image.
Specifically, the method for calculating the pixel value of each pixel point of the sample image to be confirmed may refer to step 104, which is not described herein again.
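The per-pixel calculation of step 404 (mirroring step 104) might be sketched as follows. Treating the contrast coefficient as a multiplicative factor and the brightness coefficient as an additive offset is an assumption made for illustration; the text only states that the coefficients are combined with the accumulated weighted sum.

```python
def resample_pixel(neigh_values, weights, contrast, brightness):
    """Weighted sum of neighborhood pixel values, then apply coefficients.

    `neigh_values` and `weights` pair each sample neighborhood pixel
    with the weight the model output for it; `contrast`/`brightness`
    are the image processing coefficients. The affine combination
    below is one plausible interpretation, not the patent's spec.
    """
    acc = sum(w * v for w, v in zip(weights, neigh_values))
    return contrast * acc + brightness
```

Running this once per target pixel, over that pixel's own neighborhood and model outputs, assembles the full sample image to be confirmed.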
Step 405, calculating the similarity between the sample image to be confirmed and the target sample image. If the similarity is smaller than a similarity threshold, the parameters of the image processing model to be trained are adjusted, or the to-be-confirmed neighborhood pixel point selection type is adjusted, and the image processing model to be trained is trained again by reusing the original sample images. When the similarity is greater than or equal to the similarity threshold, or the number of times the original sample images have been reused for training is greater than or equal to a first time threshold, the model is trained with the next group of original sample images. When the total number of training times of the image processing model to be trained is greater than or equal to a second time threshold, or the change rate of the similarity is smaller than a change rate threshold, the image processing model to be trained and the to-be-confirmed neighborhood pixel point selection type are determined as the preset image processing model and the neighborhood pixel point selection type corresponding to the target processing type marked in advance for the original sample images.
For example, pixel point sampling is performed on 100 original sample images according to a to-be-confirmed neighborhood pixel point selection type to obtain a plurality of sample neighborhood pixel points corresponding to each pixel point of the sample images to be confirmed. These sample neighborhood pixel points are input into the image processing model to be trained, which outputs a weight value corresponding to each sample neighborhood pixel point and the image processing coefficients corresponding to the plurality of sample neighborhood pixel points. The pixel value of each pixel point of the sample images to be confirmed is then calculated from the pixel values of the sample neighborhood pixel points, the output weight values, and the image processing coefficients, yielding 100 sample images to be confirmed corresponding to the 100 original sample images. The similarity between each sample image to be confirmed and its target sample image is calculated; when the similarity is smaller than the similarity threshold, the model parameters or the selection type are adjusted and the 100 original sample images are reused for training, until the stopping conditions of step 405 are satisfied.
The similarity between the sample image to be confirmed and the target sample image may be calculated by a structural similarity measurement method, a cosine similarity calculation method, an image similarity calculation method based on a histogram, and the like, which is not limited in the present application.
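Of the metrics listed, a histogram-based similarity is easy to sketch without extra dependencies; the bin count, value range, and cosine normalization used here are illustrative choices.

```python
import numpy as np

def histogram_similarity(img_a, img_b, bins=16):
    """Cosine similarity between grey-level histograms of two images.

    One of several metrics the text permits (SSIM and direct cosine
    similarity being alternatives); 8-bit value range is assumed.
    """
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    ha, hb = ha.astype(float), hb.astype(float)
    denom = np.linalg.norm(ha) * np.linalg.norm(hb)
    return float(ha @ hb / denom) if denom else 0.0
```

Identical images score 1.0, while images whose intensities occupy disjoint histogram bins score 0.0, giving a bounded value to compare against the similarity threshold.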
In the embodiment of the application, when the similarity is greater than or equal to the similarity threshold, or the number of times the original sample images have been reused for training is greater than or equal to the first time threshold, it indicates that the image processing model can already process the original sample images fairly accurately. When the total number of training times of the image processing model to be trained is greater than or equal to the second time threshold, or the change rate of the similarity is smaller than the change rate threshold, it indicates that the image processing accuracy of the model tends to be stable. In either case, the image processing model to be trained can be considered trained.
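The stopping logic of step 405 might be sketched as a nested loop over sample-image groups; the threshold values and the `train_step` interface are illustrative assumptions.

```python
def train_until_converged(groups, train_step, sim_threshold=0.95,
                          per_group_limit=50, total_limit=500,
                          rate_threshold=1e-4):
    """Sketch of the stopping criteria described above.

    `train_step(group)` is assumed to run one training pass on a group
    of original sample images and return the similarity between the
    produced and target sample images. Returns total training count.
    """
    total, prev_sim = 0, 0.0
    for group in groups:
        for _ in range(per_group_limit):          # first time threshold
            sim = train_step(group)
            total += 1
            if total >= total_limit:              # second time threshold
                return total
            if abs(sim - prev_sim) < rate_threshold:  # similarity stabilised
                return total
            prev_sim = sim
            if sim >= sim_threshold:              # this group is learned
                break                             # move to the next group
    return total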
In the embodiment of the present application, in the method for training the preset image processing model, the pixel values of the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed are input into the image processing model to be trained, and the model outputs the weight value corresponding to each sample neighborhood pixel point and the image processing coefficients corresponding to the plurality of sample neighborhood pixel points, rather than directly outputting pixel values. That is, the input data type and the output data type of the trained preset image processing model are different: the input is the pixel values of the plurality of neighborhood pixel points, while the output is the weight values and the image processing coefficients. This is a non-end-to-end parameter training method, which effectively reduces the number of calculation parameters, accelerates the parameter training process, and improves the training efficiency of the model.
It should be noted that the above is only an example of the image processing model to be trained; in other embodiments of the present application, the image processing model to be trained may also be a non-end-to-end image processing model other than the multi-layer perceptron model.
For simplicity of description, the foregoing method embodiments are described as a series of combinations of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may occur in other orders according to the present invention.
Fig. 5 shows a schematic structural diagram of an image resampling apparatus 500 provided in this embodiment of the present application, which includes an acquisition unit 501, a sampling unit 502, an output unit 503, and a calculation unit 504.
The acquiring unit 501 is configured to acquire an original image and a type to be processed of the original image, and determine a preset image processing model for resampling the original image and a neighborhood pixel selection type corresponding to the preset image processing model according to the type to be processed of the original image.
The sampling unit 502 is configured to perform pixel sampling on the original image according to the neighborhood pixel selection type, and obtain a plurality of neighborhood pixels corresponding to each pixel of the target image.
An output unit 503, configured to input pixel values of the multiple neighborhood pixels corresponding to each pixel of the target image into the preset image processing model, and output, by the preset image processing model, a weight value corresponding to each neighborhood pixel in the multiple neighborhood pixels and an image processing coefficient corresponding to the multiple neighborhood pixels.
A calculating unit 504, configured to calculate, according to the pixel values of the multiple neighborhood pixels corresponding to each pixel of the target image, the weight value corresponding to each neighborhood pixel in the multiple neighborhood pixels output by the preset image processing model, and the image processing coefficients corresponding to the multiple neighborhood pixels, the pixel value of each pixel of the target image, respectively, so as to obtain a target image corresponding to the original image.
In some embodiments of the present application, the sampling unit 502 is further configured to perform block averaging on the original image to obtain a block-averaged original image, and perform pixel point sampling on the block-averaged original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image.
In some embodiments of the present application, the image resampling apparatus may further include a training unit, configured to train an image processing model to be trained, so as to obtain the preset image processing model corresponding to each type to be processed.
The training unit is configured to: acquire a plurality of groups of original sample images and the target sample images corresponding to the target processing types marked in advance for the original sample images; perform pixel point sampling on the original sample images according to a to-be-confirmed neighborhood pixel point selection type to obtain a plurality of sample neighborhood pixel points corresponding to each pixel point of the sample images to be confirmed; input the pixel values of the plurality of sample neighborhood pixel points into the image processing model to be trained, and output, by the image processing model to be trained, the weight value corresponding to each sample neighborhood pixel point and the image processing coefficients corresponding to the plurality of sample neighborhood pixel points; calculate the pixel value of each pixel point of the sample image to be confirmed according to the pixel values of the sample neighborhood pixel points, the output weight values, and the image processing coefficients, so as to obtain the sample image to be confirmed corresponding to the original sample image; and calculate the similarity between the sample image to be confirmed and the target sample image. If the similarity is smaller than the similarity threshold, the parameters of the image processing model to be trained or the to-be-confirmed neighborhood pixel point selection type are adjusted and training is repeated with the original sample images, until the similarity is greater than or equal to the similarity threshold or the number of reuse training times reaches the first time threshold, after which the next group of original sample images is used. When the total number of training times is greater than or equal to the second time threshold, or the change rate of the similarity is smaller than the change rate threshold, the image processing model to be trained and the to-be-confirmed neighborhood pixel point selection type are determined as the preset image processing model and the corresponding neighborhood pixel point selection type.
In some embodiments of the present application, when the image processing model to be trained is a multilayer perceptron model to be trained, the output unit 503 is further configured to input the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed into the multilayer perceptron model to be trained, and output, by the multilayer perceptron model to be trained, a weight value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points, and a contrast coefficient and a brightness coefficient corresponding to the plurality of sample neighborhood pixel points.
In some embodiments of the present application, the image resampling apparatus may further include an adjusting unit, configured to adjust the number of hidden layers of the multi-layer perceptron, and/or adjust the number of nodes in each hidden layer of the multi-layer perceptron, and/or adjust the node parameters and mapping functions of the multi-layer perceptron.
In some embodiments of the present application, the calculating unit 504 is further configured to accumulate the products of the pixel values of the plurality of sample neighborhood pixel points respectively corresponding to each pixel point of the sample image to be confirmed and the weight values corresponding to each sample neighborhood pixel point output by the image processing model to be trained, and sum the accumulated values with the contrast coefficient and the brightness coefficient to obtain the pixel value of each pixel point of the sample image to be confirmed.
In some embodiments of the present application, the sampling unit 502 is further configured to perform pixel point sampling on the original sample image according to one of the to-be-confirmed neighborhood pixel point selection types among the distance-nearest selection type, the square selection type, and the cross selection type.
It should be noted that, for convenience and simplicity of description, the specific working process of the image resampling apparatus 500 described above may refer to the corresponding process of the method described in fig. 1 to fig. 4, and is not described herein again.
As shown in fig. 6, the present application provides a terminal for implementing the above-described image resampling method, which may include a processor 61, a memory 62, one or more input devices 63 (only one is shown in fig. 6), and one or more output devices 64 (only one is shown in fig. 6). The processor 61, the memory 62, the input devices 63 and the output devices 64 are connected by a bus 65.
It should be understood that in the embodiments of the present Application, the Processor 61 may be a Central Processing Unit (CPU), and the Processor may also be another general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc.
The input device 63 may include a virtual keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 64 may include a display, a speaker, etc.
The memory 62 stores a computer program that can be executed by the processor 61, and the computer program is, for example, a program of an image resampling method. The processor 61 implements the steps of the image resampling method embodiments, such as steps 101 to 104 shown in fig. 1, when executing the computer program. Alternatively, the processor 61 may implement the functions of the units in the device embodiment when executing the computer program, for example, the functions of the units 501 to 504 shown in fig. 5.
The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used for describing the execution process of the computer program in the terminal performing image resampling. For example, the computer program may be divided into an acquisition unit, a sampling unit, an output unit and a calculation unit, and the specific functions of each unit are as follows:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring an original image and a type to be processed of the original image, and determining a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model according to the type to be processed of the original image.
And the sampling unit is used for sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image.
And the output unit is used for inputting the pixel values of the plurality of neighborhood pixels corresponding to each pixel of the target image into the preset image processing model, and outputting the weight value corresponding to each neighborhood pixel in the plurality of neighborhood pixels and the image processing coefficient corresponding to the plurality of neighborhood pixels by the preset image processing model.
And the calculating unit is used for calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points output by the preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
It is obvious to those skilled in the art that, for convenience and simplicity of description, only the division of the above functional units and modules is illustrated, and in practical applications, the above functions may be distributed by different functional units and modules as needed, that is, the internal structure of the apparatus is divided into different functional units or modules to complete all or part of the above described functions.
The embodiment of the present application provides a computer program product which, when run on a terminal device, enables the terminal device to implement the steps of the image resampling method in the above embodiments.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
For example, the above-described embodiments of the apparatus/terminal are merely illustrative; the division into the above modules or units is merely a logical functional division, and other divisions may be adopted in practice. For instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network units.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
The above-described integrated modules/units, if implemented as software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes of the above method embodiments of the present application may also be accomplished by a computer program instructing associated hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program may implement the steps of the above method embodiments.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image resampling method, comprising:
acquiring an original image and a type to be processed of the original image, and determining a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model according to the type to be processed of the original image;
sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image;
inputting pixel values of a plurality of neighborhood pixel points corresponding to each pixel point of a target image into the preset image processing model, and outputting a weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points and an image processing coefficient corresponding to the plurality of neighborhood pixel points by the preset image processing model;
and respectively calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points respectively corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points output by the preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
2. The image resampling method as claimed in claim 1, wherein before said performing pixel sampling on the original image according to the neighborhood pixel selection type to obtain a plurality of neighborhood pixels corresponding to each pixel of the target image, the method comprises:
carrying out block average processing on the original image to obtain an original image subjected to block average processing;
the pixel sampling is carried out on the original image according to the neighborhood pixel selection type to obtain a plurality of neighborhood pixels corresponding to each pixel of the target image respectively, and the method comprises the following steps:
and sampling pixel points of the original image after the block average processing according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image.
3. The image resampling method as claimed in claim 1 or 2, wherein before inputting the pixel values of the plurality of neighborhood pixels respectively corresponding to each pixel of the target image into the preset image processing model, the method comprises: training the image processing model to be trained to obtain a preset image processing model corresponding to each type to be processed;
the training of the image processing model to be trained comprises:
acquiring a plurality of groups of original sample images and target sample images corresponding to target processing types marked in advance by the original sample images;
sampling pixel points of the original sample image according to the neighborhood pixel point selection type to be confirmed to obtain a plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed;
inputting pixel values of a plurality of sample neighborhood pixel points corresponding to each pixel point of a sample image to be confirmed into the image processing model to be trained, and outputting a weight value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points and an image processing coefficient corresponding to the plurality of sample neighborhood pixel points by the image processing model to be trained;
calculating the pixel value of each pixel point of the sample image to be confirmed respectively according to the pixel values of the sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed respectively, the weight value corresponding to each sample neighborhood pixel point in the sample neighborhood pixel points output by the image processing model to be trained and the image processing coefficients corresponding to the sample neighborhood pixel points, so as to obtain the sample image to be confirmed corresponding to the original sample image;
calculating the similarity between the sample image to be confirmed and the target sample image; if the similarity is smaller than a similarity threshold, adjusting parameters of the image processing model to be trained or adjusting the to-be-confirmed neighborhood pixel point selection type, and training the image processing model to be trained again by reusing the original sample images, until the similarity is greater than or equal to the similarity threshold, or the number of times the original sample images have been reused for training is greater than or equal to a first time threshold, and then training the image processing model to be trained by using the next group of original sample images; and when the total number of training times of the image processing model to be trained is greater than or equal to a second time threshold, or the change rate of the similarity is smaller than a change rate threshold, determining the image processing model to be trained as the preset image processing model corresponding to the target processing type marked in advance for the original sample images, and determining the to-be-confirmed neighborhood pixel point selection type as the neighborhood pixel point selection type corresponding to the preset image processing model.
4. The image resampling method as claimed in claim 3, wherein the image processing model to be trained is a multilayer perceptron model to be trained;
the inputting of the pixel values of the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed into the image processing model to be trained, and the outputting, by the image processing model to be trained, of the weight value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points and the image processing coefficients corresponding to the plurality of sample neighborhood pixel points comprise:
inputting the pixel values of the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed into the multilayer perceptron model to be trained, and outputting, by the multilayer perceptron model to be trained, the weight value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points, and the contrast coefficient and the brightness coefficient corresponding to the plurality of sample neighborhood pixel points.
5. The image resampling method as recited in claim 4, wherein the adjusting the parameters of the image processing model to be trained comprises:
adjusting the number of hidden layers of the multilayer perceptron; and/or
adjusting the number of nodes of each hidden layer of the multilayer perceptron; and/or
adjusting the node parameters and the mapping function of the multilayer perceptron.
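The three adjustable items in claim 5 can be made concrete with a minimal sketch (an assumption for illustration; the patent does not give an implementation): the hidden-layer count and per-layer node counts are just the shape of the network, and the node parameters and mapping function are the weights, biases, and activation.

```python
import numpy as np

def build_mlp(layer_sizes, seed=0):
    """Build affine layers for a toy multilayer perceptron.

    Adjustables per claim 5 (illustrative mapping, not from the patent):
      - number of hidden layers: len(layer_sizes) - 2,
      - nodes per hidden layer: the middle entries of layer_sizes,
      - node parameters / mapping function: the weights, biases, and
        the activation used in forward() below."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out))
            for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

def forward(layers, x, activation=np.tanh):
    # Apply each affine layer; the mapping function acts on hidden layers only.
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = activation(x)
    return x

# e.g. 4 neighborhood pixel values in; 4 weights + contrast + brightness out
mlp = build_mlp([4, 16, 16, 6])   # two hidden layers of 16 nodes each
out = forward(mlp, np.ones((1, 4)))
```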
6. The image resampling method as claimed in claim 3, wherein the calculating the pixel value of each pixel point of the sample image to be confirmed according to the pixel values of the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed, the weight value corresponding to each sample neighborhood pixel point in the plurality of sample neighborhood pixel points output by the image processing model to be trained, and the image processing coefficients corresponding to the plurality of sample neighborhood pixel points comprises:
accumulating the products of the pixel values of the plurality of sample neighborhood pixel points corresponding to each pixel point of the sample image to be confirmed and the weight value corresponding to each sample neighborhood pixel point output by the image processing model to be trained, multiplying the accumulated value by the contrast coefficient, and then adding the brightness coefficient, so as to obtain the pixel value of each pixel point of the sample image to be confirmed.
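The per-pixel computation in claim 6 is a weighted sum of neighborhood pixel values, scaled by the contrast coefficient and offset by the brightness coefficient. A minimal sketch (function name is an assumption for illustration):

```python
import numpy as np

def resample_pixel(neighbor_values, weights, contrast, brightness):
    """Compute one output pixel as described in claim 6:
    accumulate the products of neighborhood pixel values and their
    weights, multiply the accumulated value by the contrast
    coefficient, then add the brightness coefficient."""
    weighted_sum = float(np.dot(neighbor_values, weights))
    return contrast * weighted_sum + brightness

# Example: four neighborhood pixels with equal weights
pixels = np.array([100.0, 110.0, 120.0, 130.0])
weights = np.array([0.25, 0.25, 0.25, 0.25])
value = resample_pixel(pixels, weights, contrast=1.1, brightness=5.0)
```

With equal weights this reduces to `contrast * mean(pixels) + brightness`; the trained model makes the weights and coefficients content-dependent per pixel.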
7. The image resampling method as claimed in claim 3, wherein the selection type of the neighborhood pixel points to be confirmed comprises: a nearest-distance selection type, a square selection type, and a cross selection type;
the sampling of the pixel points of the original sample image according to the selection type of the neighborhood pixel points to be confirmed comprises: sampling the pixel points of the original sample image according to one neighborhood pixel point selection type to be confirmed among the nearest-distance selection type, the square selection type, and the cross selection type.
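The three selection types in claim 7 name neighborhood geometries without fixing their exact shape. A sketch under assumed geometries (the nearest source pixel; a (2r+1)×(2r+1) window; a plus-shaped set along the axes):

```python
def neighborhood_offsets(selection_type, radius=1):
    """Return (dy, dx) offsets for the three selection types in claim 7.
    The exact geometries here are assumptions for illustration; the
    patent text does not define them."""
    if selection_type == "nearest":
        # the single closest source pixel
        return [(0, 0)]
    if selection_type == "square":
        # all pixels in a (2r+1) x (2r+1) window around the source point
        return [(dy, dx)
                for dy in range(-radius, radius + 1)
                for dx in range(-radius, radius + 1)]
    if selection_type == "cross":
        # pixels along the horizontal and vertical axes only
        offsets = {(0, 0)}
        for d in range(1, radius + 1):
            offsets |= {(d, 0), (-d, 0), (0, d), (0, -d)}
        return sorted(offsets)
    raise ValueError(f"unknown selection type: {selection_type}")
```

With `radius=1` the square type yields 9 neighbors and the cross type 5, which is why the two types trade detail against cost.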
8. An image resampling apparatus, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an original image and a type to be processed of the original image, and determining a preset image processing model for resampling the original image and a neighborhood pixel point selection type corresponding to the preset image processing model according to the type to be processed of the original image;
the sampling unit is used for sampling pixel points of the original image according to the neighborhood pixel point selection type to obtain a plurality of neighborhood pixel points corresponding to each pixel point of the target image;
the output unit is used for inputting the pixel values of a plurality of neighborhood pixel points corresponding to each pixel point of the target image into the preset image processing model, and outputting the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points and the image processing coefficient corresponding to the plurality of neighborhood pixel points by the preset image processing model;
and the calculating unit is used for calculating the pixel value of each pixel point of the target image according to the pixel values of the plurality of neighborhood pixel points corresponding to each pixel point of the target image, the weight value corresponding to each neighborhood pixel point in the plurality of neighborhood pixel points output by the preset image processing model and the image processing coefficients corresponding to the plurality of neighborhood pixel points, so as to obtain the target image corresponding to the original image.
9. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201911003561.6A 2019-10-21 2019-10-21 Image resampling method, device, terminal and computer readable storage medium Active CN110738625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911003561.6A CN110738625B (en) 2019-10-21 2019-10-21 Image resampling method, device, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110738625A true CN110738625A (en) 2020-01-31
CN110738625B CN110738625B (en) 2022-03-11

Family

ID=69270834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911003561.6A Active CN110738625B (en) 2019-10-21 2019-10-21 Image resampling method, device, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110738625B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989911A (en) * 2020-12-10 2021-06-18 奥比中光科技集团股份有限公司 Pedestrian re-identification method and system
WO2021254381A1 (en) * 2020-06-17 2021-12-23 京东方科技集团股份有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831579A (en) * 2011-06-16 2012-12-19 富士通株式会社 Text enhancement method and device, text extraction method and device
CN104103082A (en) * 2014-06-06 2014-10-15 华南理工大学 Image saliency detection method based on region description and priori knowledge
CN105701773A (en) * 2014-11-28 2016-06-22 联芯科技有限公司 Method and device for processing image rapidly
CN105975912A (en) * 2016-04-27 2016-09-28 天津大学 Hyperspectral image nonlinearity solution blending method based on neural network
CN106373095A (en) * 2016-08-29 2017-02-01 广东欧珀移动通信有限公司 Image processing method and terminal
CN107292828A (en) * 2016-03-31 2017-10-24 展讯通信(上海)有限公司 The treating method and apparatus of image border
CN109903224A (en) * 2019-01-25 2019-06-18 珠海市杰理科技股份有限公司 Image-scaling method, device, computer equipment and storage medium
CN110245607A (en) * 2019-06-13 2019-09-17 Oppo广东移动通信有限公司 Eyeball tracking method and Related product

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Hongdi: "Image smoke detection using pyramid texture and edge features", Journal of Image and Graphics *
Wang Shunfei: "Improved moving object detection method based on local joint features", Chinese Journal of Scientific Instrument *

Also Published As

Publication number Publication date
CN110738625B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
US11610082B2 (en) Method and apparatus for training neural network model used for image processing, and storage medium
JP4148041B2 (en) Signal processing apparatus, signal processing method, program, and recording medium
CN104284126B (en) Image interpolation method, image interpolation device and image device using same
EP2898473A1 (en) Systems and methods for reducing noise in video streams
CN111369550A (en) Image registration and defect detection method, model, training method, device and equipment
JP2005018535A (en) Signal processor, signal processing method, program, and recording medium
CN110738625A (en) Image resampling method, device, terminal and computer readable storage medium
CN116824070B (en) Real-time three-dimensional reconstruction method and system based on depth image
JP2023003763A (en) Learning apparatus, image processing apparatus, learning processing method, and program
JP4392583B2 (en) Signal processing apparatus, signal processing method, program, and recording medium
WO2023138540A1 (en) Edge extraction method and apparatus, and electronic device and storage medium
CN115471413A (en) Image processing method and device, computer readable storage medium and electronic device
CN115619678A (en) Image deformation correction method and device, computer equipment and storage medium
Zhiwei et al. An image zooming technique based on the relative color difference of pixels
JP5169926B2 (en) Image processing device
WO2020115866A1 (en) Depth processing system, depth processing program, and depth processing method
JP2005018537A (en) Signal processor, signal processing method, program, and recording medium
JP7512150B2 (en) Information processing device, information processing method, and program
JP4423535B2 (en) Signal processing apparatus, signal processing method, program, and recording medium
CN116433674B (en) Semiconductor silicon wafer detection method, device, computer equipment and medium
JP4423536B2 (en) Signal processing apparatus, signal processing method, program, and recording medium
JP4419453B2 (en) Signal processing apparatus, signal processing method, program, and recording medium
Chen et al. Image Restoration Algorithm Research on Local Motion-blur
CN116503252A (en) Method for generating image superdivision data set, image superdivision model and training method
JP4182827B2 (en) Signal processing apparatus, signal processing method, program, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant