CN117409211A - Quality feature extraction method, quality feature extraction device, computer equipment and storage medium - Google Patents

Info

Publication number: CN117409211A
Application number: CN202311419589.4A
Authority: CN (China)
Prior art keywords: image, target, feature extraction, sample, fidelity
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 黄雨 (Huang Yu), 楼轶维 (Lou Yiwei)
Original and current assignee: Peking University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Peking University; priority to CN202311419589.4A

Classifications

    • G06V 10/454 — Local feature extraction; biologically inspired filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/761 — Image or video pattern matching; proximity, similarity or dissimilarity measures
    • G06V 10/82 — Image or video recognition or understanding using neural networks
    • G06N 3/0455 — Auto-encoder networks; encoder-decoder networks
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N 3/08 — Learning methods (neural networks)

Abstract

The application relates to a quality feature extraction method and device, computer equipment, and a storage medium. The method comprises: inputting a target image into a target fidelity feature extraction model to obtain the target fidelity feature of the target image, and inputting the target image into a target structural feature extraction model to obtain the target structural feature of the target image; and splicing the target fidelity feature and the target structural feature to obtain the quality feature of the target image. The target fidelity feature extraction model is trained on a plurality of sample distorted images, and the target structural feature extraction model is trained on the plurality of sample distorted images together with the similarity measure corresponding to each sample distorted image. The method makes the extracted target fidelity feature and target structural feature more accurate, and hence the quality feature of the target image more accurate, so that quality evaluation of the target image based on the quality feature is more accurate.

Description

Quality feature extraction method, quality feature extraction device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a quality feature extraction method, apparatus, computer device, and storage medium.
Background
With the development of artificial intelligence technology, neural network models are commonly used to extract the quality features of images.
In the prior art, a distorted image is input into a pre-trained neural network model, which analyzes the distorted image to determine its quality features.
Although this method can determine the quality features of a distorted image coarsely, the extracted quality features are not accurate.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a quality feature extraction method, apparatus, computer device, and storage medium capable of improving accuracy of image quality feature extraction.
In a first aspect, the present application provides a quality feature extraction method, including:
inputting the target image into a target fidelity feature extraction model to obtain target fidelity features of the target image, and inputting the target image into a target structure feature extraction model to obtain target structure features of the target image;
splicing the target fidelity feature and the target structural feature to obtain the quality feature of the target image;
The target fidelity feature extraction model is obtained by training based on a plurality of sample distortion images, and the target structure feature extraction model is obtained by training based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images.
In one embodiment, the method further comprises:
respectively carrying out distortion treatment on each sample original image to obtain a sample distortion image corresponding to each sample original image;
based on the distorted images of the samples, training an initial fidelity feature extraction model to obtain a target fidelity feature extraction model.
In one embodiment, training the initial fidelity feature extraction model based on each sample distortion image to obtain the target fidelity feature extraction model comprises:
respectively carrying out enhancement processing on each sample distortion image to obtain a sample enhancement image corresponding to each sample distortion image;
and carrying out iterative training on the initial fidelity feature extraction model based on the sample distortion image and the sample enhancement image to obtain the fidelity feature extraction model.
In one embodiment, based on the sample distorted image and the sample enhanced image, performing iterative training on the initial fidelity feature extraction model to obtain the fidelity feature extraction model, including:
for one iteration, inputting the sample distorted image and the sample enhanced image into the intermediate fidelity feature extraction model to obtain a first fidelity feature corresponding to the sample distorted image and a second fidelity feature corresponding to the sample enhanced image;
inputting the first fidelity feature and the second fidelity feature into a multi-layer perceptron to obtain a first fidelity vector corresponding to the sample distorted image and a second fidelity vector corresponding to the sample enhanced image;
determining a loss value corresponding to the iterative process according to the first fidelity vector, the second fidelity vector and the preset temperature parameter;
and according to the loss value, performing model parameter adjustment on the intermediate fidelity feature extraction model.
In one embodiment, the method further comprises:
for each sample original image, determining a similarity measure between the sample original image and the sample distortion image according to the pixel values of all pixel points in the sample original image and the pixel values of the pixel points of the sample distortion image corresponding to the sample original image;
and carrying out iterative training on the initial structural feature extraction model based on the sample distortion image and the similarity measure to obtain a target structural feature extraction model.
In one embodiment, based on the sample distortion image and the similarity measure, performing iterative training on the initial structural feature extraction model to obtain a target structural feature extraction model, including:
for one iteration, inputting the sample distorted image into the intermediate structural feature extraction model, which processes the image through a multi-head attention mechanism and outputs the image structural feature corresponding to the sample distorted image;
projecting the image structural feature through a multi-layer perceptron to obtain the image structural vector corresponding to the image structural feature;
and adjusting the model parameters of the intermediate structural feature extraction model based on the image structural vector and the similarity measure.
In a second aspect, the present application further provides a quality feature extraction apparatus, including:
the feature extraction module is used for inputting the target image into the target fidelity feature extraction model to obtain the target fidelity feature of the target image, and inputting the target image into the target structure feature extraction model to obtain the target structure feature of the target image;
the feature stitching module is used for stitching the target fidelity feature and the target structural feature to obtain the quality feature of the target image; the target fidelity feature extraction model is obtained by training based on a plurality of sample distortion images, and the target structure feature extraction model is obtained by training based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images.
In a third aspect, the present application also provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
inputting the target image into a target fidelity feature extraction model to obtain target fidelity features of the target image, and inputting the target image into a target structure feature extraction model to obtain target structure features of the target image;
splicing the target fidelity feature and the target structural feature to obtain the quality feature of the target image;
the target fidelity feature extraction model is obtained by training based on a plurality of sample distortion images, and the target structure feature extraction model is obtained by training based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
inputting the target image into a target fidelity feature extraction model to obtain target fidelity features of the target image, and inputting the target image into a target structure feature extraction model to obtain target structure features of the target image;
Splicing the target fidelity feature and the target structural feature to obtain the quality feature of the target image;
the target fidelity feature extraction model is obtained by training based on a plurality of sample distortion images, and the target structure feature extraction model is obtained by training based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of:
inputting the target image into a target fidelity feature extraction model to obtain target fidelity features of the target image, and inputting the target image into a target structure feature extraction model to obtain target structure features of the target image;
splicing the target fidelity feature and the target structural feature to obtain the quality feature of the target image;
the target fidelity feature extraction model is obtained by training based on a plurality of sample distortion images, and the target structure feature extraction model is obtained by training based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images.
According to the quality feature extraction method and apparatus, computer device, and storage medium above, the target image is input into the target fidelity feature extraction model to obtain the target fidelity feature of the target image, and into the target structural feature extraction model to obtain the target structural feature of the target image; the target fidelity feature and the target structural feature are spliced to obtain the quality feature of the target image. The target fidelity feature extraction model is trained on a plurality of sample distorted images, and the target structural feature extraction model is trained on the plurality of sample distorted images and the similarity measure corresponding to each sample distorted image. The target fidelity feature is extracted by a model trained on the sample distorted images; the target structural feature is extracted by a model obtained through self-supervised learning on the sample distorted images and their similarity measures; and the two features are spliced to determine the quality feature. The extracted target fidelity feature and target structural feature are therefore more accurate, the quality feature of the target image is more accurate, and the quality evaluation of the target image based on the quality feature is more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings required by the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
Fig. 1 is an application environment diagram of a quality feature extraction method provided in this embodiment;
fig. 2 is a flow chart of a quality feature extraction method provided in the present embodiment;
fig. 3 is a schematic flow chart of a training target fidelity feature extraction model according to the present embodiment;
FIG. 4 is a flowchart of training a target structural feature extraction model according to the present embodiment;
fig. 5 is a flow chart of another quality feature extraction method provided in the present embodiment;
fig. 6 is a block diagram of a quality feature extraction device according to the present embodiment;
fig. 7 is a block diagram of another quality feature extraction device according to the present embodiment;
fig. 8 is a block diagram of another quality feature extraction device according to the present embodiment;
Fig. 9 is an internal structure diagram of a computer device according to the present embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The quality feature extraction method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The server 104 inputs the target image into the target fidelity feature extraction model to obtain the target fidelity feature of the target image, and inputs the target image into the target structure feature extraction model to obtain the target structure feature of the target image; splicing the target fidelity feature and the target structural feature to obtain the quality feature of the target image; the target fidelity feature extraction model is obtained by training based on a plurality of sample distortion images, and the target structure feature extraction model is obtained by training based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In an exemplary embodiment, as shown in fig. 2, a quality feature extraction method is provided, and an example of application of the method to the server 104 in fig. 1 is described, including the following steps S201 to S202. Wherein:
s201, inputting the target image into the target fidelity feature extraction model to obtain the target fidelity feature of the target image, and inputting the target image into the target structure feature extraction model to obtain the target structure feature of the target image.
Wherein the target image may be an image for which quality assessment is required; optionally, the target image may be a distorted image. The target fidelity feature may be a feature characterizing the degree of distortion of the target image. The target structural feature may be a feature characterizing image details of the target image, such as its texture. The target fidelity feature extraction model is trained on a plurality of sample distorted images and may be, for example, a ResNet-101 convolutional neural network encoder. The target structural feature extraction model is trained on the plurality of sample distorted images and the similarity measure corresponding to each sample distorted image and may be, for example, a Vision Transformer model.
Optionally, the target image is input into the target fidelity feature extraction model, which analyzes the degree of distortion of the target image and extracts the target fidelity feature from it; and the target image is input into the target structural feature extraction model, which analyzes the image details of the target image and extracts the target structural feature from it.
S202, splicing the target fidelity feature and the target structural feature to obtain the quality feature of the target image.
The quality feature may be a feature for evaluating the image quality of the target image, among others.
Optionally, the target fidelity feature and the target structural feature are stitched together, and the stitched result is taken as the quality feature of the target image.
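As a minimal sketch of the stitching step, the two features can be concatenated as 1-D vectors; the 2048- and 768-dimensional sizes below are illustrative assumptions (typical of a ResNet-101 encoder and a Vision Transformer respectively), not dimensions stated in the patent:

```python
import numpy as np

def extract_quality_feature(fidelity_feat: np.ndarray, structure_feat: np.ndarray) -> np.ndarray:
    """Concatenate ('splice') the fidelity and structural feature vectors into one quality feature."""
    return np.concatenate([fidelity_feat, structure_feat])

# hypothetical feature dimensions for illustration
fidelity = np.random.rand(2048)   # e.g. output of the fidelity feature extraction model
structure = np.random.rand(768)   # e.g. output of the structural feature extraction model
quality = extract_quality_feature(fidelity, structure)
print(quality.shape)
```

The spliced vector simply preserves both feature blocks side by side, so a downstream quality-assessment head can weigh distortion-level and detail-level information jointly.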
According to the quality feature extraction method above, the target image is input into the target fidelity feature extraction model to obtain the target fidelity feature, and into the target structural feature extraction model to obtain the target structural feature; the two features are spliced to obtain the quality feature of the target image. The target fidelity feature is extracted by a model trained through unsupervised learning on a plurality of sample distorted images; the target structural feature is extracted by a model trained through self-supervised learning on the sample distorted images and the similarity measure corresponding to each; and the spliced result is determined as the quality feature. The extracted target fidelity feature and target structural feature are therefore more accurate, the quality feature of the target image is more accurate, and the quality evaluation of the target image based on the quality feature is more accurate.
FIG. 3 is a flow diagram of training a target fidelity feature extraction model in one embodiment. In order to make the extracted target fidelity feature more accurate, the embodiment provides an optional way of training the target fidelity feature extraction model, which comprises the following steps:
and S301, respectively carrying out distortion processing on each sample original image to obtain a sample distortion image corresponding to each sample original image.
Wherein the sample original image may be an undistorted original image. The sample distorted image may be a distorted image used to train the initial fidelity feature extraction model.
Optionally, the sample original images are obtained, and distortion processing (such as Gaussian blur, Gaussian noise, JPEG2000 compression, JPEG compression, etc.) is performed on each to obtain the sample distorted image corresponding to each sample original image.
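A minimal sketch of one such distortion, additive Gaussian noise, applied to a toy 8×8 grayscale image; the function name, noise level, and seed are illustrative assumptions:

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float = 10.0, seed: int = 0) -> np.ndarray:
    """One example distortion: additive Gaussian noise, clipped back to the valid 8-bit range."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

original = np.full((8, 8), 128, dtype=np.uint8)   # toy "sample original image"
distorted = add_gaussian_noise(original)           # corresponding "sample distorted image"
print(distorted.shape, distorted.dtype)
```

Other distortion types listed above (Gaussian blur, JPEG/JPEG2000 compression) would follow the same pattern: a deterministic or parameterized transform applied per sample original image.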
S302, training an initial fidelity feature extraction model based on each sample distortion image to obtain a target fidelity feature extraction model.
Optionally, respectively performing enhancement processing on each sample distortion image to obtain a sample enhancement image corresponding to each sample distortion image; and carrying out iterative training on the initial fidelity feature extraction model based on the sample distortion image and the sample enhancement image to obtain the fidelity feature extraction model.
Specifically, to keep training of the initial fidelity feature extraction model stable and avoid instability caused by overly large pixel values, each sample distorted image may be normalized so that its pixel values lie within [0, 1]. To simplify training, the normalized sample distorted images are also resized so that all sample distorted images have the same size. Each resized sample distorted image then undergoes enhancement processing (such as sampling, random rotation, and flipping) to obtain its corresponding sample enhanced image. To further strengthen the model's ability to extract fidelity features, color space conversion (such as converting the RGB color space to LAB, HSV, or grayscale) may also be applied to the sample distorted image and the sample enhanced image before they are input into the model, followed by scaling again. The sample distorted images and sample enhanced images are then input into the initial fidelity feature extraction model, which is iteratively trained on them to obtain the target fidelity feature extraction model.
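The normalization, resizing, and enhancement steps above can be sketched as follows; nearest-neighbour resizing, the 224-pixel target size, and the flip-plus-rotation enhancement are illustrative choices, not specifics from the patent:

```python
import numpy as np

def preprocess(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Normalize pixel values to [0, 1] and resize (nearest-neighbour) to a fixed square size."""
    img = img.astype(np.float64) / 255.0                 # keep values within [0, 1]
    rows = np.arange(size) * img.shape[0] // size        # nearest-neighbour source rows
    cols = np.arange(size) * img.shape[1] // size        # nearest-neighbour source cols
    return img[np.ix_(rows, cols)]

def augment(img: np.ndarray, seed: int = 0) -> np.ndarray:
    """One example enhancement: random horizontal flip followed by a 90-degree rotation."""
    rng = np.random.default_rng(seed)
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return np.rot90(img)

x = preprocess(np.random.randint(0, 256, (300, 400)).astype(np.uint8))
y = augment(x)   # "sample enhanced image" for the distorted image x
print(x.shape, y.shape)
```

Every sample distorted image passes through `preprocess` so sizes match across the batch; `augment` then produces the positive view used in the contrastive training below.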
Optionally, the initial fidelity feature extraction model is iteratively trained on the sample distorted image and the sample enhanced image to obtain the target fidelity feature extraction model. For one iteration, the sample distorted image and the sample enhanced image are input into the intermediate fidelity feature extraction model to obtain a first fidelity feature corresponding to the sample distorted image and a second fidelity feature corresponding to the sample enhanced image. The first and second fidelity features are input into a multi-layer perceptron to obtain a first fidelity vector and a second fidelity vector; a loss value for the iteration is determined from the first fidelity vector, the second fidelity vector, and a preset temperature parameter; and the model parameters of the intermediate fidelity feature extraction model are adjusted according to the loss value.
Specifically, for one iteration, the sample distorted image and the sample enhanced image are input into the intermediate fidelity feature extraction model, which analyzes the degree of distortion of each, yielding the first fidelity feature for the sample distorted image and the second fidelity feature for the sample enhanced image. The two features are input into a multi-layer perceptron (such as a three-layer linear perceptron), which projects them into the first fidelity vector and the second fidelity vector respectively. The loss value for the iteration is then determined by the normalized temperature-scaled cross-entropy loss function (formula (1-1) below), and the model parameters of the intermediate fidelity feature extraction model are adjusted according to the loss value. Contrasting the sample distorted image with the sample enhanced image through this loss makes the learned fidelity features close for images of the same kind and well separated for images of different kinds; the training process is unsupervised.
$$\ell_i = -\log\frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k \neq i}\exp(\mathrm{sim}(z_i, z_k)/\tau)} \qquad (1\text{-}1)$$

wherein ℓ_i is the loss value corresponding to the iteration, z_i is the first fidelity vector corresponding to the i-th sample distorted image, z_j is the second fidelity vector corresponding to the i-th sample enhanced image, z_k is the first fidelity vector corresponding to the k-th sample distorted image, sim(·, ·) denotes a similarity function between vectors, and τ is the preset temperature parameter.
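A minimal NumPy sketch of the normalized temperature-scaled cross-entropy loss of formula (1-1), assuming cosine similarity and a batch layout in which rows 2k and 2k+1 of the projection matrix are a positive pair (a distorted image and its enhanced view); both assumptions are illustrative:

```python
import numpy as np

def nt_xent_loss(z: np.ndarray, tau: float = 0.5) -> float:
    """NT-Xent over 2N projected vectors; rows 2k and 2k+1 form a positive pair,
    all other rows in the batch act as negatives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-norm rows: dot product = cosine sim
    sim = (z @ z.T) / tau                              # pairwise similarities scaled by temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity (the k != i condition)
    n = z.shape[0]
    pos = np.arange(n) ^ 1                             # index of each row's positive partner
    log_prob = sim[np.arange(n), pos] - np.log(np.exp(sim).sum(axis=1))
    return float(-log_prob.mean())

rng = np.random.default_rng(0)
loss = nt_xent_loss(rng.normal(size=(8, 16)))          # toy batch: 4 pairs of 16-d projections
print(loss)
```

In training, this scalar would drive parameter updates of the intermediate fidelity feature extraction model; the sketch only evaluates the loss itself.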
The above process trains the target fidelity feature extraction model: each sample original image is distorted to obtain a corresponding sample distorted image, and the initial fidelity feature extraction model is trained on the sample distorted images to obtain the target fidelity feature extraction model. The prior art trains such a model in a supervised manner, using fidelity feature labels for each sample distorted image together with the fidelity features extracted by the initial model; in this embodiment, by contrast, an accurate target fidelity feature extraction model is obtained from the sample distorted images alone, through unsupervised training of the initial fidelity feature extraction model.
FIG. 4 is a flow diagram of training a target structural feature extraction model in one embodiment. In order to make the extracted target structural features more accurate, this embodiment provides an alternative way of training the target structural feature extraction model, including the following steps:
S401, for each sample original image, determining a similarity measure between the sample original image and the sample distortion image according to pixel values of all pixel points in the sample original image and pixel values of pixel points of the sample distortion image corresponding to the sample original image.
Wherein the similarity measure may be an index characterizing the similarity between the sample distorted image and the sample original image.
Optionally, for each sample original image, distortion processing is performed on the sample original image to obtain the sample distorted image corresponding to the sample original image, and the similarity measure between the sample original image and the sample distorted image is determined by the following formula (1-2) according to the pixel value of each pixel point in the sample original image and the pixel value of each pixel point in the sample distorted image.
where SSIM(x, y) characterizes the similarity measure between the sample original image and the sample distorted image, μ_x characterizes the weighted average of the pixel values of the pixel points in the sample original image, μ_y characterizes the weighted average of the pixel values of the pixel points in the sample distorted image, σ_x characterizes the weighted autocovariance of the pixel values of the pixel points in the sample original image, σ_y characterizes the weighted autocovariance of the pixel values of the pixel points in the sample distorted image, σ_xy characterizes the covariance of the pixel values of the pixel points in the sample original image and the sample distorted image, and C_1 and C_2 are preset constant terms.
Exemplarily, the weighted average μ_x can be determined based on the following formula (1-3).
where μ_x represents the weighted average of the pixel values of the pixel points in the sample original image, w_i represents the weight value corresponding to the i-th pixel point in the sample image (sample original image or sample distorted image), and x_i represents the pixel value of the i-th pixel point in the sample original image.
Exemplarily, the weighted autocovariance σ_x can be determined based on the following formula (1-4).
where σ_x characterizes the weighted autocovariance of the pixel values of the pixel points in the sample original image, μ_x represents the weighted average of the pixel values of the pixel points in the sample original image, w_i represents the weight value corresponding to the i-th pixel point in the sample image, and x_i represents the pixel value of the i-th pixel point in the sample original image.
Exemplarily, the covariance σ_xy can be determined based on the following formula (1-5).
where σ_xy characterizes the covariance of the pixel values of the pixel points in the sample original image and the sample distorted image, μ_x represents the weighted average of the pixel values in the sample original image, w_i represents the weight value corresponding to the i-th pixel point in the sample image, x_i represents the pixel value of the i-th pixel point in the sample original image, μ_y characterizes the weighted average of the pixel values in the sample distorted image, and y_i represents the pixel value of the i-th pixel point in the sample distorted image.
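Putting formulas (1-2) through (1-5) together, the similarity measure can be sketched as below. The constants c1 and c2 are illustrative defaults standing in for the preset constant terms C_1 and C_2, which the patent does not fix.

```python
import numpy as np

def weighted_ssim(x, y, w, c1=1e-4, c2=9e-4):
    """Similarity measure of formula (1-2) between a sample original image x
    and its sample distorted image y (flattened pixel arrays), weights w."""
    w = w / w.sum()
    mu_x = np.sum(w * x)                               # weighted average, formula (1-3)
    mu_y = np.sum(w * y)
    var_x = np.sum(w * (x - mu_x) ** 2)                # weighted autocovariance, formula (1-4)
    var_y = np.sum(w * (y - mu_y) ** 2)
    cov_xy = np.sum(w * (x - mu_x) * (y - mu_y))       # covariance, formula (1-5)
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

An undistorted image scores 1, and the score decreases as the distortion departs from the original, which is what makes the measure usable as a pseudo label in step S402.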
S402, based on the sample distortion image and the similarity measure, performing iterative training on the initial structural feature extraction model to obtain a target structural feature extraction model.
Optionally, for one iteration process, the sample distorted image is input into the intermediate structural feature extraction model, and the intermediate structural feature extraction model processes the sample distorted image through a multi-head attention mechanism and outputs the image structural features corresponding to the sample distorted image; the image structural features are projected through a multi-layer perceptron to obtain the image structure vector corresponding to the image structural features; and model parameter adjustment is performed on the intermediate structural feature extraction model based on the image structure vector and the similarity measure.
Specifically, the sample distorted image is input into the intermediate structural feature extraction model. The intermediate structural feature extraction model divides the sample distorted image into a plurality of sample distorted sub-images, projects each sample distorted sub-image into a one-dimensional tensor, and adds a position code to each sample distorted sub-image. The one-dimensional tensors corresponding to the position-coded sample distorted sub-images are then processed through a multi-head attention mechanism to determine the structural relation features among the sample distorted sub-images, and these structural relation features are output as the image structural features corresponding to the sample distorted image. The image structural features are normalized and input into a multi-layer perceptron, which projects them into the image structure vector corresponding to the image structural features. Finally, the similarity measure is taken as a pseudo label, the loss value of the iteration is determined based on the image structure vector and the pseudo label, and the loss value is used to adjust the model parameters of the intermediate structural feature extraction model.
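The patch-splitting and projection step reads like a standard vision-transformer front end; a rough sketch under that reading follows. The patch size, projection matrix, and position codes here are illustrative placeholders, not values from the patent.

```python
import numpy as np

def patch_embed(image, patch, w_proj, pos_code):
    """Divide a (H, W) sample distorted image into non-overlapping patches
    (sub-images), flatten each patch into a one-dimensional tensor, project
    it with w_proj, and add a per-patch position code, ready for attention."""
    h, w = image.shape
    patches = (image.reshape(h // patch, patch, w // patch, patch)
                    .transpose(0, 2, 1, 3)
                    .reshape(-1, patch * patch))   # (num_patches, patch*patch)
    return patches @ w_proj + pos_code             # (num_patches, embed_dim)
```

The resulting token sequence is what the multi-head attention mechanism consumes to relate the sub-images to one another.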
Illustratively, the principle of the multi-head attention mechanism is shown in the following formula (1-6).
where M_{l+1} is the output value of the multi-head attention at layer l, Q_l is the query matrix of the multi-head attention at layer l, K_l is the key matrix of the multi-head attention at layer l, V_l is the value matrix of the multi-head attention at layer l, and d_k is the dimension of M_l.
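Formula (1-6) is the scaled dot-product attention at the heart of the multi-head mechanism; a single-head sketch is below (the multi-head version runs this in parallel over several slices of the embedding and concatenates the results).

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """M_{l+1} = softmax(Q_l K_l^T / sqrt(d_k)) V_l for one attention head."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the keys
    return weights @ V
```

Because each output row is a convex combination of the value rows, the mechanism aggregates structural relations across all sub-images rather than looking at each patch in isolation.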
According to the above method for training the target structural feature extraction model, for each sample original image, the similarity measure between the sample original image and the sample distorted image is determined according to the pixel values of the pixel points in the sample original image and in the corresponding sample distorted image; the initial structural feature extraction model is then iteratively trained based on the sample distorted images and the similarity measures to obtain the target structural feature extraction model. Compared with the prior art, in which the initial structural feature extraction model is trained using structural feature labels corresponding to the sample distorted images together with the structural features extracted by the initial model, this embodiment takes the similarity measure between the sample original image and the sample distorted image as the training label, so that the initial structural feature extraction model is trained in an unsupervised manner from the sample original images and the similarity measures, yielding a more accurate target structural feature extraction model.
In one embodiment, an alternative way of extracting quality features is provided; the method is described as applied to a server for illustration. As shown in fig. 5, the method comprises the following steps:
S501, respectively carrying out distortion processing on each sample original image to obtain a sample distorted image corresponding to each sample original image.
S502, respectively carrying out enhancement processing on each sample distortion image to obtain a sample enhancement image corresponding to each sample distortion image.
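Steps S501 and S502 do not fix the particular distortion and enhancement operators. As one plausible instance (an assumption for illustration, not the patent's specified operators), additive Gaussian noise could serve as the distortion and a horizontal flip as the enhancement:

```python
import numpy as np

def distort(image, sigma=0.1, seed=0):
    """One possible distortion for S501: additive Gaussian noise,
    clipped back to the valid [0, 1] pixel range."""
    rng = np.random.default_rng(seed)
    return np.clip(image + rng.normal(0.0, sigma, image.shape), 0.0, 1.0)

def enhance(image):
    """One possible enhancement for S502: a horizontal flip, which
    preserves image content while changing the view."""
    return image[:, ::-1]
```

Each (distorted, enhanced) pair then forms a positive pair for the contrastive training of step S503.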
S503, based on the sample distortion image and the sample enhancement image, performing iterative training on the initial fidelity feature extraction model to obtain the fidelity feature extraction model.
Optionally, for one iteration process, inputting the sample distorted image and the sample enhanced image into the intermediate fidelity feature extraction model to obtain a first fidelity feature corresponding to the sample distorted image and a second fidelity feature corresponding to the sample enhanced image; inputting the first fidelity feature and the second fidelity feature into a multi-layer perceptron to obtain a first fidelity vector corresponding to the sample distorted image and a second fidelity vector corresponding to the sample enhanced image; determining a loss value corresponding to the iteration according to the first fidelity vector, the second fidelity vector and a preset temperature parameter; and performing model parameter adjustment on the intermediate fidelity feature extraction model according to the loss value to obtain the target fidelity feature extraction model.
S504, for each sample original image, determining a similarity measure between the sample original image and the sample distorted image according to the pixel values of the pixel points in the sample original image and the pixel values of the pixel points of the sample distorted image corresponding to the sample original image.
S505, based on the sample distortion image and the similarity measure, performing iterative training on the initial structural feature extraction model to obtain a target structural feature extraction model.
Optionally, for one iteration process, inputting the sample distorted image into the intermediate structural feature extraction model, and processing the sample distorted image by the intermediate structural feature extraction model through a multi-head attention mechanism to output the image structural features corresponding to the sample distorted image; projecting the image structural features through a multi-layer perceptron to obtain the image structure vector corresponding to the image structural features; and performing model parameter adjustment on the intermediate structural feature extraction model based on the image structure vector and the similarity measure to obtain the target structural feature extraction model.
S506, inputting the target image into the target fidelity feature extraction model to obtain the target fidelity feature of the target image, and inputting the target image into the target structure feature extraction model to obtain the target structure feature of the target image.
And S507, performing splicing processing on the target fidelity characteristic and the target structural characteristic to obtain the quality characteristic of the target image.
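The splicing in S507 is a plain feature concatenation. For illustration, with hypothetical feature vectors standing in for the two model outputs (the dimensions are made up):

```python
import numpy as np

# Hypothetical outputs of the two extraction models for one target image.
target_fidelity_feature = np.array([0.2, 0.7, 0.1])
target_structural_feature = np.array([0.5, 0.4])

# S507: splice the two features into the quality feature of the target image.
quality_feature = np.concatenate([target_fidelity_feature,
                                  target_structural_feature])
```

The resulting vector carries both fidelity and structural information and can be fed to a downstream quality-assessment head.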
It should be noted that, in this embodiment, steps S501 to S503 are the process of training the target fidelity feature extraction model based on a plurality of sample distorted images, steps S504 to S505 are the process of training the target structural feature extraction model based on the plurality of sample distorted images and the similarity measure corresponding to each sample distorted image, and steps S506 to S507 are the process of determining the quality features corresponding to the target image.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the execution of these steps is not strictly limited to that order, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps or with sub-steps or stages of the other steps.
Based on the same inventive concept, the embodiment of the application also provides a quality feature extraction device for implementing the quality feature extraction method mentioned above. The implementation of the solution provided by the device is similar to the implementation described in the above method, so for the specific limitations in the one or more quality feature extraction device embodiments provided below, reference may be made to the limitations of the quality feature extraction method above, and details are not repeated here.
In an exemplary embodiment, as shown in fig. 6, there is provided a quality feature extraction apparatus 1 including: a feature extraction module 10 and a feature stitching module 11, wherein:
the feature extraction module 10 is configured to input a target image into the target fidelity feature extraction model to obtain a target fidelity feature of the target image, and input the target image into the target structural feature extraction model to obtain a target structural feature of the target image;
the feature stitching module 11 is configured to stitch the target fidelity feature and the target structural feature to obtain a quality feature of the target image; the target fidelity feature extraction model is obtained by training based on a plurality of sample distortion images, and the target structure feature extraction model is obtained by training based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images.
In one embodiment, as shown in fig. 7, the quality feature extraction apparatus 1 in fig. 6 includes:
the fidelity model training module 12 is configured to perform distortion processing on each sample original image, so as to obtain a sample distortion image corresponding to each sample original image; based on the distorted images of the samples, training an initial fidelity feature extraction model to obtain a target fidelity feature extraction model.
In one embodiment, the fidelity model training module 12 of fig. 7 comprises:
the enhancement processing unit is used for respectively carrying out enhancement processing on each sample distortion image to obtain a sample enhancement image corresponding to each sample distortion image;
and the fidelity model determining unit is used for carrying out iterative training on the initial fidelity feature extraction model based on the sample distorted image and the sample enhanced image to obtain the fidelity feature extraction model.
In one embodiment, the fidelity model determining unit is configured to, for one iteration process, input the sample distorted image and the sample enhanced image into the intermediate fidelity feature extraction model to obtain a first fidelity feature corresponding to the sample distorted image and a second fidelity feature corresponding to the sample enhanced image; input the first fidelity feature and the second fidelity feature into a multi-layer perceptron to obtain a first fidelity vector corresponding to the sample distorted image and a second fidelity vector corresponding to the sample enhanced image; determine a loss value corresponding to the iteration according to the first fidelity vector, the second fidelity vector and a preset temperature parameter; and perform model parameter adjustment on the intermediate fidelity feature extraction model according to the loss value.
In one embodiment, as shown in fig. 8, the quality feature extraction apparatus 1 in fig. 6 includes:
the structure model training module 13 is configured to determine, for each sample original image, a similarity measure between the sample original image and the sample distorted image according to a pixel value of each pixel point in the sample original image and a pixel value of a pixel point of the sample distorted image corresponding to the sample original image; and carrying out iterative training on the initial structural feature extraction model based on the sample distortion image and the similarity measure to obtain a target structural feature extraction model.
In one embodiment, the structural model training module 13 of fig. 8 includes:
the feature acquisition unit is used for, in one iteration process, inputting the sample distorted image into the intermediate structural feature extraction model, where the intermediate structural feature extraction model processes the sample distorted image through a multi-head attention mechanism and outputs the image structural features corresponding to the sample distorted image;
the vector determining unit is used for projecting the image structural features through the multi-layer perceptron to obtain the image structure vector corresponding to the image structural features;
and the structure model determining unit is used for adjusting model parameters of the intermediate feature extraction model based on the image structure vector and the similarity measure.
The above-described respective modules in the quality feature extraction apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In an exemplary embodiment, a computer device, which may be a terminal, is provided, and an internal structure thereof may be as shown in fig. 9. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a quality feature extraction method. The display unit of the computer device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be a key, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application applies, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components. In one exemplary embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
inputting the target image into a target fidelity feature extraction model to obtain target fidelity features of the target image, and inputting the target image into a target structure feature extraction model to obtain target structure features of the target image;
splicing the target fidelity feature and the target structural feature to obtain the quality feature of the target image;
the target fidelity feature extraction model is obtained by training based on a plurality of sample distortion images, and the target structure feature extraction model is obtained by training based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images.
In one embodiment, the processor when executing the computer program further performs the steps of:
respectively carrying out distortion treatment on each sample original image to obtain a sample distortion image corresponding to each sample original image;
based on the distorted images of the samples, training an initial fidelity feature extraction model to obtain a target fidelity feature extraction model.
In one embodiment, the processor when executing the computer program further performs the steps of:
respectively carrying out enhancement processing on each sample distortion image to obtain a sample enhancement image corresponding to each sample distortion image;
and carrying out iterative training on the initial fidelity feature extraction model based on the sample distortion image and the sample enhancement image to obtain the fidelity feature extraction model.
In one embodiment, the processor when executing the computer program further performs the steps of:
for one iteration process, inputting the sample distortion image and the sample enhancement image into an intermediate fidelity feature extraction model to obtain a first fidelity feature corresponding to the sample distortion image and a second fidelity feature corresponding to the sample enhancement image;
inputting the first fidelity feature and the second fidelity feature into a multi-layer perceptron to obtain a first fidelity vector corresponding to the sample distorted image and a second fidelity vector corresponding to the sample enhanced image;
Determining a loss value corresponding to the iterative process according to the first fidelity vector, the second fidelity vector and the preset temperature parameter;
and according to the loss value, performing model parameter adjustment on the intermediate fidelity feature extraction model.
In one embodiment, the processor when executing the computer program further performs the steps of:
for each sample original image, determining a similarity measure between the sample original image and the sample distortion image according to the pixel values of all pixel points in the sample original image and the pixel values of the pixel points of the sample distortion image corresponding to the sample original image;
and carrying out iterative training on the initial structural feature extraction model based on the sample distortion image and the similarity measure to obtain a target structural feature extraction model.
In one embodiment, the processor when executing the computer program further performs the steps of:
performing iterative training on the initial structural feature extraction model to obtain a target structural feature extraction model, wherein the iterative training comprises the following steps:
for one iteration process, inputting the sample distorted image into the intermediate structural feature extraction model, and processing the sample distorted image by the intermediate structural feature extraction model through a multi-head attention mechanism to output the image structural features corresponding to the sample distorted image;
projecting the image structural features through a multi-layer perceptron to obtain the image structure vector corresponding to the image structural features;
and based on the image structure vector and the similarity measure, performing model parameter adjustment on the intermediate feature extraction model.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
inputting the target image into a target fidelity feature extraction model to obtain target fidelity features of the target image, and inputting the target image into a target structure feature extraction model to obtain target structure features of the target image;
splicing the target fidelity feature and the target structural feature to obtain the quality feature of the target image;
the target fidelity feature extraction model is obtained by training based on a plurality of sample distortion images, and the target structure feature extraction model is obtained by training based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images.
In one embodiment, the computer program when executed by the processor further performs the steps of:
respectively carrying out distortion treatment on each sample original image to obtain a sample distortion image corresponding to each sample original image;
Based on the distorted images of the samples, training an initial fidelity feature extraction model to obtain a target fidelity feature extraction model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
respectively carrying out enhancement processing on each sample distortion image to obtain a sample enhancement image corresponding to each sample distortion image;
and carrying out iterative training on the initial fidelity feature extraction model based on the sample distortion image and the sample enhancement image to obtain the fidelity feature extraction model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
for one iteration process, inputting the sample distortion image and the sample enhancement image into an intermediate fidelity feature extraction model to obtain a first fidelity feature corresponding to the sample distortion image and a second fidelity feature corresponding to the sample enhancement image;
inputting the first fidelity feature and the second fidelity feature into a multi-layer perceptron to obtain a first fidelity vector corresponding to the sample distorted image and a second fidelity vector corresponding to the sample enhanced image;
determining a loss value corresponding to the iterative process according to the first fidelity vector, the second fidelity vector and the preset temperature parameter;
and according to the loss value, performing model parameter adjustment on the intermediate fidelity feature extraction model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
for each sample original image, determining a similarity measure between the sample original image and the sample distortion image according to the pixel values of all pixel points in the sample original image and the pixel values of the pixel points of the sample distortion image corresponding to the sample original image;
and carrying out iterative training on the initial structural feature extraction model based on the sample distortion image and the similarity measure to obtain a target structural feature extraction model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
for one iteration process, inputting the sample distorted image into the intermediate structural feature extraction model, and processing the sample distorted image by the intermediate structural feature extraction model through a multi-head attention mechanism to output the image structural features corresponding to the sample distorted image;
projecting the image structural features through a multi-layer perceptron to obtain the image structure vector corresponding to the image structural features;
and based on the image structure vector and the similarity measure, performing model parameter adjustment on the intermediate feature extraction model.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
Inputting the target image into a target fidelity feature extraction model to obtain target fidelity features of the target image, and inputting the target image into a target structure feature extraction model to obtain target structure features of the target image;
splicing the target fidelity feature and the target structural feature to obtain the quality feature of the target image;
the target fidelity feature extraction model is obtained by training based on a plurality of sample distortion images, and the target structure feature extraction model is obtained by training based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images.
In one embodiment, the computer program when executed by the processor further performs the steps of:
respectively carrying out distortion treatment on each sample original image to obtain a sample distortion image corresponding to each sample original image;
based on the distorted images of the samples, training an initial fidelity feature extraction model to obtain a target fidelity feature extraction model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
respectively carrying out enhancement processing on each sample distortion image to obtain a sample enhancement image corresponding to each sample distortion image;
And carrying out iterative training on the initial fidelity feature extraction model based on the sample distortion image and the sample enhancement image to obtain the fidelity feature extraction model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
for one iteration process, inputting the sample distortion image and the sample enhancement image into an intermediate fidelity feature extraction model to obtain a first fidelity feature corresponding to the sample distortion image and a second fidelity feature corresponding to the sample enhancement image;
inputting the first fidelity feature and the second fidelity feature into a multi-layer perceptron to obtain a first fidelity vector corresponding to the sample distorted image and a second fidelity vector corresponding to the sample enhanced image;
determining a loss value corresponding to the iterative process according to the first fidelity vector, the second fidelity vector and the preset temperature parameter;
and according to the loss value, performing model parameter adjustment on the intermediate fidelity feature extraction model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
for each sample original image, determining a similarity measure between the sample original image and the sample distortion image according to the pixel values of all pixel points in the sample original image and the pixel values of the pixel points of the sample distortion image corresponding to the sample original image;
And carrying out iterative training on the initial structural feature extraction model based on the sample distortion image and the similarity measure to obtain a target structural feature extraction model.
In one embodiment, the computer program when executed by the processor further performs the steps of:
for one iteration process, inputting the sample distorted image into the intermediate structural feature extraction model, and processing the sample distorted image by the intermediate structural feature extraction model through a multi-head attention mechanism to output the image structural features corresponding to the sample distorted image;
projecting the image structural features through a multi-layer perceptron to obtain the image structure vector corresponding to the image structural features;
and based on the image structure vector and the similarity measure, performing model parameter adjustment on the intermediate feature extraction model.
It should be noted that, the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are all information and data authorized by the user or sufficiently authorized by each party, and the collection, use, and processing of the relevant data are required to meet the relevant regulations.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored on a non-transitory computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase-change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; the non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, and data processing logic devices based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments merely represent several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of quality feature extraction, the method comprising:
inputting a target image into a target fidelity feature extraction model to obtain target fidelity features of the target image, and inputting the target image into a target structure feature extraction model to obtain target structure features of the target image;
performing splicing processing on the target fidelity feature and the target structural feature to obtain a quality feature of the target image;
The target fidelity feature extraction model is trained based on a plurality of sample distortion images, and the target structural feature extraction model is trained based on the plurality of sample distortion images and similarity measures corresponding to the sample distortion images.
2. The method according to claim 1, wherein the method further comprises:
performing distortion processing on each sample original image respectively to obtain a sample distorted image corresponding to each sample original image;
and training an initial fidelity feature extraction model based on each sample distortion image to obtain the target fidelity feature extraction model.
3. The method according to claim 2, wherein the training an initial fidelity feature extraction model based on each of the sample distorted images to obtain the target fidelity feature extraction model comprises:
respectively carrying out enhancement processing on each sample distortion image to obtain a sample enhancement image corresponding to each sample distortion image;
and performing iterative training on the initial fidelity feature extraction model based on the sample distorted image and the sample enhancement image to obtain the target fidelity feature extraction model.
4. The method according to claim 3, wherein the iteratively training the initial fidelity feature extraction model based on the sample distorted image and the sample enhancement image to obtain the target fidelity feature extraction model comprises:
for one iteration process, inputting the sample distortion image and the sample enhancement image into an intermediate fidelity feature extraction model to obtain a first fidelity feature corresponding to the sample distortion image and a second fidelity feature corresponding to the sample enhancement image;
inputting the first fidelity feature and the second fidelity feature into a multi-layer perceptron to obtain a first fidelity vector corresponding to the sample distorted image and a second fidelity vector corresponding to the sample enhancement image;
determining a loss value corresponding to the iterative process according to the first fidelity vector, the second fidelity vector and a preset temperature parameter;
and according to the loss value, carrying out model parameter adjustment on the intermediate fidelity feature extraction model.
5. The method according to claim 1, wherein the method further comprises:
for each sample original image, determining a similarity measure between the sample original image and its corresponding sample distorted image according to pixel values of the pixel points in the sample original image and pixel values of the corresponding pixel points in the sample distorted image;
and performing iterative training on an initial structural feature extraction model based on the sample distorted image and the similarity measure to obtain the target structural feature extraction model.
6. The method of claim 5, wherein the iteratively training an initial structural feature extraction model based on the sample distorted image and the similarity measure to obtain the target structural feature extraction model comprises:
for one iteration process, inputting the sample distorted image into an intermediate structural feature extraction model, wherein the intermediate structural feature extraction model processes the sample distorted image through a multi-head attention mechanism and outputs image structural features corresponding to the sample distorted image;
performing projection processing on the image structural features through a multi-layer perceptron to obtain an image structure vector corresponding to the image structural features;
and performing model parameter adjustment on the intermediate structural feature extraction model based on the image structure vector and the similarity measure.
7. A quality feature extraction apparatus, the apparatus comprising:
the feature extraction module is used for inputting a target image into the target fidelity feature extraction model to obtain target fidelity features of the target image, and inputting the target image into the target structure feature extraction model to obtain target structure features of the target image;
the feature splicing module is used for performing splicing processing on the target fidelity feature and the target structural feature to obtain a quality feature of the target image; the target fidelity feature extraction model is trained based on a plurality of sample distorted images, and the target structural feature extraction model is trained based on the plurality of sample distorted images and similarity measures corresponding to the sample distorted images.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202311419589.4A 2023-10-30 2023-10-30 Quality feature extraction method, quality feature extraction device, computer equipment and storage medium Pending CN117409211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311419589.4A CN117409211A (en) 2023-10-30 2023-10-30 Quality feature extraction method, quality feature extraction device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117409211A true CN117409211A (en) 2024-01-16

Family

ID=89497664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311419589.4A Pending CN117409211A (en) 2023-10-30 2023-10-30 Quality feature extraction method, quality feature extraction device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117409211A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination