CN115760822A - Image quality detection model establishing method and system - Google Patents


Info

Publication number
CN115760822A
CN115760822A
Authority
CN
China
Prior art keywords
evaluation
model
image quality
image
subjective
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211502424.9A
Other languages
Chinese (zh)
Other versions
CN115760822B (en)
Inventor
韩运恒
袁克虹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jieyi Technology Co ltd
Original Assignee
Shenzhen Jieyi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jieyi Technology Co ltd filed Critical Shenzhen Jieyi Technology Co ltd
Priority to CN202211502424.9A
Publication of CN115760822A
Application granted
Publication of CN115760822B
Legal status: Active

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 — Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a method and a system for establishing an image quality detection model, belonging to the technical field of image recognition. The method comprises: obtaining quality evaluation results of images, and taking each image quality evaluation result together with the corresponding image data as an evaluation subtask, where the image quality evaluation results are obtained by N persons each evaluating K images; performing iterative training on a preset meta-learning model based on each evaluation subtask to obtain an objective evaluation model of the images; establishing a subjective evaluation model of the images; and fusing the objective evaluation model and the subjective evaluation model to obtain an image quality detection model. Because the image quality detection model is obtained by fusing the objective evaluation model and the subjective evaluation model, the quality of an image can be detected conveniently by combining objective and subjective evaluation. The method and the device thus have the effect of conveniently detecting the quality of images acquired for image recognition.

Description

Image quality detection model establishing method and system
Technical Field
The invention relates to the technical field of image recognition, in particular to an image quality detection model establishing method and system.
Background
Image recognition is a technique that uses a computer to process, analyze, and understand images in order to recognize targets and objects of various different patterns. It is a practical application of deep learning algorithms and is generally divided into four steps: image acquisition, image preprocessing, feature extraction, and image recognition.
Image quality refers to the quality of an image acquired for image recognition. The acquired image quality is closely related to the recognition result: the clearer and more complete the acquired image, the more accurate the subsequent feature extraction and recognition results. At present, however, there is no unified standard for evaluating the quality of acquired images, so it cannot be judged whether an acquired image meets the requirements of image recognition, and the final recognition result is easily inaccurate. How to detect the quality of acquired images is therefore a current problem.
Disclosure of Invention
In order to facilitate the detection of the quality of an image acquired by image recognition, the application provides an image quality detection model establishing method and system.
In a first aspect, the present application provides an image quality detection model establishing method, which adopts the following technical solution:
a method for establishing an image quality detection model, comprising the following steps:
obtaining the quality evaluation results of the images, and taking each image quality evaluation result and the corresponding image data as an evaluation subtask; the image quality evaluation result is obtained by evaluating the K images by N persons respectively;
performing iterative training on a preset meta-learning model based on each evaluation subtask to obtain an objective evaluation model of the image;
establishing a subjective evaluation model of the image;
and fusing the objective evaluation model and the subjective evaluation model to obtain an image quality detection model.
By adopting the above technical scheme, the quality evaluation results obtained by the N persons evaluating the K images, together with the corresponding image data, are used as evaluation subtasks. The evaluation subtasks are used to iteratively train a preset meta-learning model, yielding an objective evaluation model that provides an objective evaluation of the images; a subjective evaluation model of the images is established to provide a subjective evaluation; and the objective and subjective evaluation models are fused to obtain an image quality detection model, so that image quality can be detected conveniently by combining objective and subjective evaluation.
Optionally, taking K evaluation subtasks corresponding to each person as one evaluation task; the iterative training is performed on the preset meta-learning model based on the evaluation subtask to obtain an objective evaluation model of the image, and the method specifically comprises the following steps:
acquiring the gradient of the evaluation subtask;
screening the evaluation tasks based on the gradients of all the evaluation subtasks, and taking the screened evaluation tasks as a training set of a meta-learning model;
and selecting an evaluation task from the training set according to a preset sequence to carry out iterative training on a preset meta-learning model until a loss function of the meta-learning model is converged to obtain an objective evaluation model of the image.
By adopting the above technical scheme, the evaluation tasks are screened using the gradients of the evaluation subtasks, and the screened evaluation tasks are used as the training set of the meta-learning model, making the selection of the training set more accurate; evaluation tasks are then selected from the training set in a preset sequence to iteratively train the meta-learning model until its loss function converges, yielding the objective evaluation model.
Optionally, the screening of the evaluation tasks based on the gradients of the evaluation subtasks, with the screened evaluation tasks used as the training set of the preset meta-learning model, specifically includes:
obtaining a gradient matrix of each personnel evaluation task according to the gradients of all evaluation subtasks corresponding to each personnel;
respectively carrying out similarity calculation on the gradient matrix of each person in the N persons and the gradient matrices of the rest persons in the N persons to obtain similarity factors between the evaluation task of each person and the evaluation tasks of the rest persons;
sorting the similarity factors of the evaluation tasks of all persons, and screening out the M persons corresponding to the top-ranked similarity factors;
and taking the evaluation tasks of the M persons as a training set of the meta-learning model.
By adopting the above technical scheme, the K evaluation subtasks corresponding to each person form that person's evaluation task. Using the gradients of each person's K evaluation subtasks, a gradient matrix of that person's evaluation task is formed; from these gradient matrices, the similarity factor between each person's evaluation task and the remaining persons' evaluation tasks is calculated; the M persons with high similarity factors are screened out according to the ranking of the similarity factors; and the evaluation tasks of these M persons are used as the training set. In other words, representative evaluation tasks are screened out as the training set, which facilitates training of the meta-learning model.
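The screening step above can be sketched as follows. This is an illustrative sketch only: the patent's similarity-factor formula is reproduced only as an image in the source, so cosine similarity between flattened gradient matrices is assumed here, and the function name and array shapes are hypothetical.

```python
import numpy as np

def screen_evaluation_tasks(gradients, M):
    """Screen the N persons' evaluation tasks by gradient similarity.

    gradients: array of shape (N, K, P) -- one P-dimensional gradient per
    evaluation subtask, K subtasks per person. The similarity factor is
    assumed here to be the mean cosine similarity between one person's
    flattened gradient matrix and those of the remaining persons.
    Returns the indices of the M most representative persons.
    """
    N = gradients.shape[0]
    flat = gradients.reshape(N, -1)                      # gradient matrix per person
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.clip(norms, 1e-12, None)
    sim = unit @ unit.T                                  # pairwise cosine similarity
    # similarity factor: mean similarity to the remaining N-1 persons
    w = (sim.sum(axis=1) - 1.0) / (N - 1)
    return np.argsort(-w)[:M]                            # top-M, descending
```

Persons whose evaluations agree with the majority direction of the gradients receive high similarity factors and survive the screening.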
Optionally, according to a preset sequence, selecting an evaluation task from the training set to perform iterative training on a preset meta-learning model, specifically including:
acquiring parameters to be trained in the meta-learning model in real time;
training the parameters to be trained by using the gradient of each evaluation subtask in the selected evaluation task to obtain a single training parameter θᵢ′ ← θ − α·gᵢ, wherein θ is the parameter to be trained, α is the single learning rate, and gᵢ is the gradient of the i-th subtask;
obtaining a joint training parameter θ ← θ + β·(1/K)·Σᵢ (θᵢ′ − θ), summing i from 1 to K, from the single training parameters corresponding to all evaluation subtasks in the selected evaluation task, wherein β is the joint learning rate;
and taking the joint training parameters as parameters to be trained in the meta-learning model, and selecting the next evaluation task according to a preset sequence.
By adopting the above technical scheme, the parameters to be trained in the meta-learning model are trained using the gradient of each evaluation subtask in the selected evaluation task, yielding one single training parameter per subtask; the single training parameters of all evaluation subtasks in the task are integrated into a joint training parameter; the joint training parameter becomes the new parameter to be trained; and the next evaluation task is selected in the preset sequence to train the new parameter, thereby realizing iterative training of the meta-learning model.
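One iteration of this loop can be sketched as follows. The per-subtask update θᵢ′ = θ − α·gᵢ follows the text; the joint-update formula appears only as an image in the source, so a Reptile-style average with joint learning rate β is assumed here, and all names are illustrative.

```python
import numpy as np

def meta_update(theta, subtask_gradients, alpha, beta):
    """One iteration of the meta-training loop (steps S10231-S10234).

    theta: parameter vector to be trained.
    subtask_gradients: list of K gradients g_i, one per evaluation subtask.
    Each subtask yields a single training parameter theta_i' = theta - alpha*g_i;
    the joint parameter is assumed to be a Reptile-style average of the K
    single-parameter displacements, scaled by the joint learning rate beta.
    """
    singles = [theta - alpha * g for g in subtask_gradients]    # theta_i'
    mean_delta = np.mean([t - theta for t in singles], axis=0)  # average displacement
    return theta + beta * mean_delta                            # joint parameter
```

Calling `meta_update` repeatedly, once per selected evaluation task, realizes the iterative training until the loss converges.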
Optionally, the establishing of the subjective evaluation model of the image specifically includes:
acquiring relative evaluation and absolute evaluation of an image, wherein the relative evaluation and the absolute evaluation respectively comprise z grades;
generating a loss cost matrix of relative evaluation-absolute evaluation according to historical data of the relative evaluation and historical data of the absolute evaluation;
combining the relative evaluation and the absolute evaluation and generating a subjective perception vector;
normalizing the subjective perception vector to obtain the proportion of each evaluation grade of the image;
and generating a subjective evaluation model according to the proportion of the loss cost matrix to the evaluation grade.
By adopting the above technical scheme, the relative evaluation and the absolute evaluation of the image are obtained; a relative evaluation-absolute evaluation loss cost matrix is generated from the historical data of the relative evaluation and of the absolute evaluation; the relative evaluation and the absolute evaluation are combined to obtain a subjective perception vector; the subjective perception vector is normalized to obtain the proportion of each evaluation grade of the image; and a subjective evaluation model is then generated from the loss cost matrix and the proportions of the evaluation grades.
Optionally, the generating of a subjective evaluation model according to the loss cost matrix and the proportions of the evaluation grades specifically includes: calculating an evaluation category cost vector Xᵢ = Dᵢⱼ·Pᵢ from the loss cost matrix and the proportions of the evaluation grades, wherein Dᵢⱼ is the loss cost matrix and Pᵢ is the proportion of the evaluation grades;
and generating a subjective evaluation model according to the evaluation category cost vector.
By adopting the above technical scheme, the evaluation category cost vector is calculated from the loss cost matrix and the proportions of the evaluation grades, and the subjective evaluation model is generated from the evaluation category cost vector, so that the subjective evaluation model can output the proportion of the evaluation grade closest to the image.
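The cost-vector computation can be sketched as follows, interpreting the patent's Xᵢ = Dᵢⱼ·Pᵢ as a matrix-vector product (this interpretation, and the function name, are assumptions for illustration):

```python
import numpy as np

def evaluation_category_cost(D, P):
    """Evaluation category cost vector X = D . P.

    D: z-by-z relative/absolute loss cost matrix D_ij.
    P: length-z vector with the proportion of each evaluation grade.
    Returns X, where X_i aggregates the misclassification cost of
    predicting grade i, weighted by the grade proportions.
    """
    D = np.asarray(D, dtype=float)
    P = np.asarray(P, dtype=float)
    return D @ P    # matrix-vector product over the z grades
```

With an identity cost matrix the cost vector simply reproduces the grade proportions; off-diagonal costs penalize confusions between grades.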
Optionally, the objective evaluation model and the subjective evaluation model are fused to obtain an image quality detection model, and the method specifically includes:
obtaining an output result of the objective evaluation model and an output result of the subjective evaluation model;
generating an image quality category based on the output result of the objective evaluation model and the output result of the subjective evaluation model, wherein x_a is the output result of the subjective evaluation model and θ_a is the output result of the objective evaluation model (the combining formula is given only as an image in the source); and generating an image quality detection model based on the image quality category.
By adopting the technical scheme, the image quality category is generated according to the output result of the objective evaluation model and the output result of the subjective evaluation model, and then the image quality detection model is generated according to the image quality category, so that the image quality detection model integrates two angles of objective evaluation and subjective evaluation, and the detection of the image quality is more accurate.
In a second aspect, the present application provides a system for establishing an image quality detection model, which adopts the following technical solutions:
an image quality inspection model building system comprising:
the evaluation acquisition unit is used for acquiring the quality evaluation results of the images and taking each image quality evaluation result and the corresponding image data as an evaluation subtask; the image quality evaluation result is obtained by evaluating the K images by N persons respectively;
the objective model generation unit is used for carrying out iterative training on a preset meta-learning model based on all the evaluation subtasks to obtain an objective evaluation model of the image;
the subjective model generation unit is used for establishing a subjective evaluation model of the image; and
and the detection model generation unit is used for fusing the objective evaluation model and the subjective evaluation model to obtain an image quality detection model.
By adopting the technical scheme, the evaluation obtaining unit is used for obtaining the result of the image quality evaluation, the objective model generating unit is used for generating the objective evaluation model of the image, the objective evaluation of the image is facilitated, the subjective evaluation model is established by the subjective model generating unit, the subjective evaluation of the image is facilitated, and the objective evaluation and the subjective evaluation of the generated image quality detection model are fused, so that the detection result of the generated image quality detection model on the image quality is more accurate.
In a third aspect, the present application provides a computer device, which adopts the following technical solutions:
a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the image quality detection model establishing method according to any one of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer-readable storage medium comprising a computer program that can be loaded by a processor and executed to perform the image quality detection model establishing method according to any one of the first aspect.
Drawings
Fig. 1 is a flowchart illustrating an image quality detection method according to an embodiment of the present disclosure.
FIG. 2 is a flowchart of a method for training a meta-learning model according to an embodiment of the present application.
Fig. 3 is a flowchart of a method for training set selection according to an embodiment of the present application.
FIG. 4 is a flowchart illustrating a method for iteratively training a meta-learning model according to an embodiment of the present application.
Fig. 5 is a flowchart of a subjective evaluation model generation method according to an embodiment of the present application.
FIG. 6 is a flowchart of a method for generating an image quality inspection model according to an embodiment of the present application.
FIG. 7 is a block diagram of a system for image quality inspection according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to fig. 1-7 and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The embodiment of the application discloses an image quality detection model establishing method and system. Referring to fig. 1, the image quality detection model establishing method includes:
step S101: and acquiring the quality evaluation results of the images, and taking each image quality evaluation result and the corresponding image data as an evaluation subtask.
The image quality evaluation result is obtained by evaluating K images by N persons, and K and N are positive integers greater than or equal to 1.
The K images may be selected from images in different scenes, for example, images of different qualities such as brightness, angle, exposure, and the like of the images acquired in image recognition, and the N persons evaluate the K images of different qualities respectively.
Step S102: performing iterative training on a preset meta-learning model based on each evaluation subtask to obtain an objective evaluation model of the image;
the meta-learning model is a machine learning model and aims to solve the problems of insufficient generalization performance and poor adaptability to different types of tasks of a common neural network model, and can rapidly learn new concepts through a small number of data samples or can be well adapted and generalized to a new task after being trained by different tasks. In particular, in the present application, because there are many factors that affect the image quality when the image is acquired by image recognition, the common neural network model cannot enumerate images under all conditions, so that it is difficult for the common neural network model to evaluate the quality of the image.
It should be appreciated that the meta-learning model is human-oriented, so when training the meta-learning model, it needs to be trained with evaluation subtasks of N persons.
Step S103: establishing a subjective evaluation model of the image;
step S104: and fusing the objective evaluation model and the subjective evaluation model to obtain an image quality detection model.
In the above embodiment, the quality evaluation results obtained by the N persons evaluating the K images, together with the corresponding image data, are used as evaluation subtasks; the evaluation subtasks are used to iteratively train a preset meta-learning model to obtain an objective evaluation model of the images; a subjective evaluation model of the images is then established; and the objective evaluation model and the subjective evaluation model are fused to obtain an image quality detection model, so that image quality can be detected conveniently by combining objective and subjective evaluation.
Referring to fig. 2, as an embodiment of step S102, taking K evaluation subtasks corresponding to each person as one evaluation task, step S102 specifically includes:
step S1021: acquiring the gradient of the evaluation subtask;
the gradient is a vector, and is a partial derivative used for finding the optimal parameter in the meta-learning model, that is, each evaluation subtask is expressed in the meta-learning model by means of the gradient.
Step S1022: screening the evaluation tasks based on the gradients of all the evaluation subtasks, and taking the screened evaluation tasks as a training set of the meta-learning model;
step S1023: and according to a preset sequence, selecting an evaluation task from the training set to carry out iterative training on the preset meta-learning model until the loss function of the meta-learning model is converged, thereby obtaining an objective evaluation model of the image.
The loss function estimates the degree of inconsistency between the model's predicted values and the true values. As the training loss decreases, the predictions approach the true values; when the loss no longer decreases after further training, the loss function has converged.
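A practical reading of "the loss no longer decreases" can be sketched as follows; the tolerance and patience values are illustrative assumptions, not from the patent.

```python
def has_converged(loss_history, tol=1e-4, patience=3):
    """Declare loss convergence when the loss stops decreasing.

    Returns True when the last `patience` consecutive improvements are
    all below `tol` -- i.e. further training no longer reduces the loss.
    """
    if len(loss_history) <= patience:
        return False
    recent = loss_history[-(patience + 1):]
    return all(prev - curr < tol for prev, curr in zip(recent, recent[1:]))
```

Training would call this after each evaluation task and stop the iteration once it returns True.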
In the above embodiment, the evaluation tasks are screened by using the gradients of the evaluation subtasks, and the screened evaluation tasks are used as the training set of the meta-learning model, so that the selection of the training set is more accurate, and the evaluation tasks are selected from the training set according to a preset sequence to perform iterative training on the meta-learning model until the loss function of the meta-learning model converges, so that the objective evaluation model can be obtained.
Referring to fig. 3, as an embodiment of step S1022, step S1022 specifically includes:
step S10221: obtaining a gradient matrix of each personnel evaluation task according to the gradients of all evaluation subtasks corresponding to each personnel;
it should be understood that each person corresponds to K evaluation subtasks, that is, each person corresponds to the gradient of K evaluation subtasks, and the K evaluation subtasks corresponding to one person constitute the evaluation task of the person, so that the gradient matrix of the evaluation task of the person can be obtained by combining the gradients of the K evaluation subtasks of each person.
Step S10222: respectively performing similarity calculation between the gradient matrix of each of the N persons and the gradient matrices of the remaining persons, to obtain a similarity factor between each person's evaluation task and the remaining persons' evaluation tasks. The similarity factor w_a of the a-th person is given by a formula that appears only as an image in the source, wherein R is the set of all evaluation subtasks of all persons, a is an integer from 1 to N, g_ai is the gradient matrix of the a-th person, g_j and g_h are the gradient matrices of the remaining persons' evaluation tasks, and j and h are integers from 1 to K.
It should be understood that the higher a person's similarity factor, the more similar that person's gradient matrix is to the gradient matrices of the remaining persons, indicating that this person's image quality evaluation is more representative.
Step S10223: sorting the similarity factors of all persons' evaluation tasks, and screening out the M persons corresponding to the top-ranked similarity factors;
wherein M is a positive integer greater than or equal to 1 and less than or equal to N.
Specifically, the similarity factors W = {w₁, w₂, …, w_N} of the N persons are sorted: W_rank = sort(W), where sort is a descending sort algorithm. The persons corresponding to the top-ranked similarity factors are those with high similarity factors.
Step S10224: and taking the evaluation tasks of the M persons as a training set of the meta-learning model.
It should be understood that the evaluation task for each of the M persons includes K evaluation subtasks.
In the above embodiment, the gradients of the K evaluation subtasks corresponding to each person form a gradient matrix of that person's evaluation task; the similarity factor between each person's evaluation task and the remaining persons' evaluation tasks is calculated from these gradient matrices; the M persons with high similarity factors are screened out according to the ranking of the similarity factors; and the evaluation tasks of these M persons are used as the training set. That is, representative evaluation tasks are screened out as the training set, which facilitates training of the meta-learning model.
Referring to fig. 4, as an embodiment of step S1023, step S1023 specifically includes:
step S10231: acquiring parameters to be trained in the meta-learning model in real time;
step S10232: training the parameters to be trained by utilizing the gradient of each evaluation subtask in the selected evaluation task to obtain a single training parameter theta i =θ-αg i
Wherein theta is a parameter to be trained; alpha is single learning rate, and the single learning rate refers to the magnitude of each parameter updating amplitude; g is a radical of formula i I is a positive integer which is greater than or equal to 1 and less than or equal to K;
it should be appreciated that the present application trains the meta-learning model using a gradient descent method, which is a parameter optimization algorithm that is widely used to minimize model errors.
Step S10233: obtaining a joint training parameter θ ← θ + β·(1/K)·Σᵢ (θᵢ′ − θ), summing i from 1 to K, from the single training parameters corresponding to all evaluation subtasks in the selected evaluation task,
wherein β is the joint learning rate;
it should be understood that each evaluation subtask of the selected evaluation task corresponds to one single training parameter, that is, the selected evaluation task includes K single training parameters in total.
Step S10234: and taking the joint training parameters as parameters to be trained in the meta-learning model, and selecting the next evaluation task according to a preset sequence.
Generating a joint training parameter, namely completing one training of the meta-learning model, taking the joint training parameter as a parameter to be trained in the meta-learning model, and repeatedly executing the steps S10231 to S10234 to realize iterative training of the meta-learning model.
The next evaluation task may be selected in descending order of similarity factor, or evaluation tasks may be drawn in a random manner.
In the above embodiment, the parameters to be trained in the meta-learning model are trained using the gradient of each evaluation subtask in the selected evaluation task to obtain single training parameters; the single training parameters of all evaluation subtasks are integrated into a joint training parameter; the joint training parameter is taken as the new parameter to be trained; and the next evaluation task is selected in the preset sequence to train it, realizing iterative training of the meta-learning model.
Referring to fig. 5, as an embodiment of step S103, step S103 specifically includes:
step S1031: acquiring relative evaluation and absolute evaluation of an image, wherein the relative evaluation and the absolute evaluation respectively comprise z grades;
in the present embodiment, the relative evaluation is classified into five grades, i.e., z =5, and specifically, the K images are relatively evaluated from high quality to low quality according to five grades, i.e., better, above, below, and worse of the K images.
The absolute evaluation refers to an evaluation obtained by comparison with a standard image, which is preset manually. In the present embodiment, the absolute evaluation is also divided into five grades; specifically, the K images are absolutely evaluated from high quality to low quality, compared with the quality of the standard image, on five grades ranging from consistent with the standard image, through distinct but not obstructing viewing and slightly obstructing viewing, to severely obstructing viewing.
Step S1032: generating a loss cost matrix of the relative evaluation-absolute evaluation according to the historical data of the relative evaluation and the historical data of the absolute evaluation;
specifically, a loss cost matrix of the relative evaluation and a loss cost matrix of the absolute evaluation are respectively obtained, weights of the loss cost matrix of the relative evaluation and the loss cost matrix of the absolute evaluation are respectively preset, and the loss cost matrix of the relative evaluation and the loss cost matrix of the absolute evaluation are fused to obtain the loss cost matrix of the relative evaluation and the loss cost matrix of the absolute evaluation. The weights of the loss cost matrix of the relative evaluation and the loss cost matrix of the absolute evaluation can be obtained by combining the actual data verification.
The historical data of the relative evaluation and the historical data of the absolute evaluation can be collected in a big data mode, and can also be obtained by methods such as expert scoring or a mixed matrix of multi-task classification.
Step S1033: combining the relative evaluation and the absolute evaluation and generating a subjective perception vector;
the combination of the relative evaluation and the absolute evaluation means that all the relative evaluations and all the relative evaluations are combined into one subjective perception vector.
Before step S1033, the method further includes performing a KAPPA check on the relative evaluation and the absolute evaluation and determining whether the check passes; if so, step S1033 is executed. The KAPPA check is a consistency-check index and a method of measuring classification performance, i.e., a check of whether the relative evaluation and the absolute evaluation of each image agree.
It should be understood that the KAPPA check measures consistency by calculating a KAPPA coefficient, which takes a value between -1 and 1. The larger the KAPPA coefficient, the better the consistency; when the KAPPA coefficient is 1, the levels of the relative evaluation and the absolute evaluation agree completely, i.e., the two evaluations are the more reliable. In this embodiment, the KAPPA check is judged to pass when the KAPPA coefficient is greater than 0.6.
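A minimal sketch of such a consistency check, computing Cohen's kappa between the two sets of grades; the grade data below is illustrative, not from the patent:

```python
import numpy as np

def cohen_kappa(rel, ab, z=5):
    """Cohen's kappa between two raters' grade assignments (grades 1..z)."""
    rel, ab = np.asarray(rel), np.asarray(ab)
    n = len(rel)
    confusion = np.zeros((z, z))
    for r, a in zip(rel, ab):
        confusion[r - 1, a - 1] += 1
    po = np.trace(confusion) / n                        # observed agreement
    pe = (confusion.sum(1) @ confusion.sum(0)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative relative/absolute grades for 8 images (assumed data).
relative = [1, 2, 2, 3, 4, 5, 3, 1]
absolute = [1, 2, 3, 3, 4, 5, 3, 1]
kappa = cohen_kappa(relative, absolute)
print(kappa > 0.6)  # the embodiment's pass threshold
```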
Step S1034: normalizing the subjective perception vector to obtain the proportion of each evaluation grade of the image;
the normalization is to map the characteristic value of the sample into a [0,1] interval, that is, to convert the subjective perception vector into the proportion of the image evaluation level.
Step S1035: generating a subjective evaluation model according to the loss cost matrix and the proportions of the evaluation levels.
In the above embodiment, the relative evaluation and the absolute evaluation of the image are obtained; a relative evaluation-absolute evaluation loss cost matrix is generated from their historical data; the relative evaluation and the absolute evaluation are combined into a subjective perception vector; the vector is normalized to obtain the proportion of each evaluation level of the image; and a subjective evaluation model is then generated from the loss cost matrix and the evaluation-level proportions.
Referring to fig. 6, as an embodiment of step S1035, step S1035 specifically includes:
calculating an evaluation category cost vector X_i = D_ij · P_i according to the loss cost matrix and the proportions of the evaluation levels, and generating a subjective evaluation model from the evaluation category cost vector.
Here D_ij is the loss cost matrix, P_i is the proportion of the i-th evaluation level, and P_1 + P_2 + ⋯ + P_z = 1, i.e., P_i is the proportion with which the i-th category is selected for an image; i and j are positive integers with 1 ≤ i, j ≤ z.
z is 5 in this embodiment, and the category cost vector X_i = D_ij · P_i expands to the matrix-vector product:

(X_1, X_2, X_3, X_4, X_5)ᵀ = (D_ij)₅ₓ₅ · (P_1, P_2, P_3, P_4, P_5)ᵀ, i.e. X_i = Σ_{j=1}^{5} D_ij · P_j
In the above embodiment, the evaluation category cost vector is calculated from the loss cost matrix and the proportions of the evaluation levels, and the subjective evaluation model is generated from that vector, so that the subjective evaluation model can output the evaluation level closest to the image.
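The category cost vector can be sketched as the matrix-vector product X = D · P; the cost matrix below is an assumed, distance-style matrix for illustration, not the patent's actual values:

```python
import numpy as np

# Assumed 5x5 loss cost matrix (distance-based, illustrative only).
D = np.abs(np.subtract.outer(np.arange(5), np.arange(5))).astype(float)

# Grade proportions P_1..P_5 from the normalized perception vector.
P = np.array([0.15, 0.25, 0.40, 0.15, 0.05])
assert abs(P.sum() - 1.0) < 1e-12  # P_1 + P_2 + ... + P_5 = 1

# Evaluation category cost vector: X_i = sum_j D_ij * P_j.
X = D @ P
print(X)  # the category with the lowest cost best matches the image
```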
As an implementation manner of step S104, step S104 specifically includes:
step S1041: obtaining an output result of the objective evaluation model and an output result of the subjective evaluation model;
In this embodiment, the objective evaluation model and the subjective evaluation model each output a vector over the image-evaluation levels; the output result of the subjective evaluation model is the category cost vector.
Step S1042: generating an image quality category based on the output result of the objective evaluation model and the output result of the subjective evaluation model:

C = C_a, where a = argmax_a (x_a · θ_a)

wherein x_a is the output result of the subjective evaluation model and θ_a is the output result of the objective evaluation model;
In this embodiment, the image quality categories are likewise divided into five categories, from high quality to low quality: C_1, C_2, C_3, C_4 and C_5.
For example, if the output results of the objective evaluation model are 0.1, 0.5, 0.2, and 0.1, and the output results of the subjective evaluation model are 0.2, 0.1, 0.4, 0.1, and 0.2, then according to the formula a = argmax_a (x_a · θ_a),
when a = 3, x_a · θ_a is largest, so the image quality category is C_3, indicating that the image quality is at the third level; the detection of the image quality is thus realized.
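The worked example can be reproduced as follows; since the text lists only four objective output values, the fifth value here is an assumption added to complete the vector:

```python
import numpy as np

# Subjective model output x_a, as listed in the example.
x = np.array([0.2, 0.1, 0.4, 0.1, 0.2])
# Objective model output theta_a; the text lists only four values, so the
# fifth value (0.1) is an assumption, not from the original.
theta = np.array([0.1, 0.5, 0.2, 0.1, 0.1])

# Image quality category C_a with a = argmax_a x_a * theta_a (1-based).
a = int(np.argmax(x * theta)) + 1
print(a)  # 3, i.e. category C_3
```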
Step S1043: based on the image quality categories, an image quality detection model is generated.
In this embodiment, the image quality category is generated from the output result of the objective evaluation model and the output result of the subjective evaluation model, and the image quality detection model is then generated from the image quality category, so that the detection model integrates both the objective and the subjective evaluation angles and detects image quality more accurately.
The embodiment of the application also discloses an image quality detection model establishing system. Referring to fig. 7, the image quality detection model establishing system includes:
the evaluation acquisition unit is used for acquiring the quality evaluation results of the images and taking each image quality evaluation result and the corresponding image data as an evaluation subtask; the image quality evaluation result is obtained by evaluating the K images by N persons respectively;
the objective model generation unit is used for carrying out iterative training on a preset meta-learning model based on all the evaluation subtasks to obtain an objective evaluation model of the image;
the subjective model generating unit is used for establishing a subjective evaluation model of the image; and,
and the detection model generation unit is used for fusing the objective evaluation model and the subjective evaluation model to obtain an image quality detection model.
The implementation principle of the image quality detection model establishing system of the embodiment of the application is as follows: the evaluation acquisition unit obtains the image quality evaluation results; the objective model generation unit generates an objective evaluation model of the image so that the image can be evaluated objectively; the subjective model generation unit establishes a subjective evaluation model so that the image can be evaluated subjectively; and the detection model generation unit fuses the objective evaluation and the subjective evaluation so that the generated image quality detection model detects image quality more accurately.
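A minimal end-to-end sketch of how the four units might fit together at inference time; the class and function names are illustrative, not from the patent:

```python
import numpy as np

# Minimal sketch of the detection pipeline described above; the models are
# stand-ins returning fixed vectors, purely for illustration.
class ImageQualityDetector:
    def __init__(self, objective_model, subjective_model):
        self.objective_model = objective_model    # meta-learned model
        self.subjective_model = subjective_model  # cost-vector model

    def detect(self, image):
        theta = np.asarray(self.objective_model(image))   # objective output
        x = np.asarray(self.subjective_model(image))      # category cost vector
        # Fuse the two outputs: category index a = argmax_a x_a * theta_a.
        return int(np.argmax(x * theta)) + 1

detector = ImageQualityDetector(
    objective_model=lambda img: [0.1, 0.5, 0.2, 0.1, 0.1],
    subjective_model=lambda img: [0.2, 0.1, 0.4, 0.1, 0.2],
)
print(detector.detect(image=None))  # 3
```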
The image quality detection model establishing system provided by the application can implement the image quality detection model establishing method described above; for the specific working process of the system, reference may be made to the corresponding process in the method embodiments.
It should be noted that, in the foregoing embodiments, descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
Based on the same technical concept, the invention also discloses a computer device, wherein the computer device comprises a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor, when executing the computer program, performs any one of the image quality detection model establishing methods described above.
The invention also discloses a computer-readable storage medium comprising a computer program that can be loaded by a processor to execute any one of the image quality detection model establishing methods described above.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The foregoing is a preferred embodiment of the present application and is not intended to limit the scope of the present application in any way, and any features disclosed in this specification (including the abstract and drawings) may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.

Claims (10)

1. A method for establishing an image quality detection model, characterized by comprising the following steps:
obtaining image quality evaluation results, and taking each image quality evaluation result and corresponding image data as an evaluation subtask; the image quality evaluation result is obtained by evaluating K images by N persons respectively;
performing iterative training on a preset meta-learning model based on each evaluation subtask to obtain an objective evaluation model of the image;
establishing a subjective evaluation model of the image;
and fusing the objective evaluation model and the subjective evaluation model to obtain an image quality detection model.
2. The image quality inspection model building method according to claim 1, wherein K evaluation subtasks corresponding to each person are taken as one evaluation task; the iterative training is performed on the preset meta-learning model based on the evaluation subtask to obtain an objective evaluation model of the image, and the method specifically comprises the following steps:
acquiring the gradient of the evaluation subtask;
screening the evaluation tasks based on the gradients of all the evaluation subtasks, and taking the screened evaluation tasks as a training set of a meta-learning model;
and according to a preset sequence, selecting an evaluation task from the training set to carry out iterative training on the preset meta-learning model until the loss function of the meta-learning model is converged, thereby obtaining an objective evaluation model of the image.
3. The image quality detection model building method according to claim 2, wherein the screening of the evaluation tasks based on the gradients of the evaluation subtasks and the use of the screened evaluation tasks as a training set of a preset meta-learning model specifically comprises:
obtaining a gradient matrix of each personnel evaluation task according to the gradients of all evaluation subtasks corresponding to each personnel;
respectively carrying out similarity calculation on the gradient matrix of each person in the N persons and the gradient matrices of the rest persons in the N persons to obtain similarity factors between the evaluation task of each person and the evaluation tasks of the rest persons;
sorting the similarity factors of the evaluation tasks of all the persons, and screening out M persons corresponding to the similarity factors which are sorted in the front;
and taking the evaluation tasks of the M persons as a training set of the meta-learning model.
4. The image quality detection model building method according to claim 3, wherein the selecting an evaluation task from a training set according to a preset sequence to perform iterative training on a preset meta-learning model specifically comprises:
acquiring parameters to be trained in the meta-learning model in real time;
training the parameters to be trained by using the gradient of each evaluation subtask in the selected evaluation task, to obtain a single training parameter θ_i′ ← θ − α·g_i; wherein θ is the parameter to be trained, α is the single-task learning rate, and g_i is the gradient of the i-th subtask;
obtaining a joint training parameter from the single training parameters corresponding to all the evaluation subtasks in the selected evaluation task:

θ ← θ + β · Σ_i (θ_i′ − θ)

wherein β is the joint learning rate;
and taking the joint training parameters as parameters to be trained in the meta-learning model, and selecting the next evaluation task according to a preset sequence.
5. The image quality inspection model building method according to claim 1, characterized in that: the establishing of the subjective evaluation model of the image specifically comprises the following steps:
acquiring relative evaluation and absolute evaluation of an image, wherein the relative evaluation and the absolute evaluation respectively comprise z grades;
generating a loss cost matrix of relative evaluation-absolute evaluation according to historical data of the relative evaluation and historical data of the absolute evaluation;
combining the relative evaluation and the absolute evaluation and generating a subjective perception vector;
normalizing the subjective perception vector to obtain the proportion of each evaluation grade of the image;
and generating a subjective evaluation model according to the proportion of the loss cost matrix to the evaluation grade.
6. The method for establishing an image quality detection model according to claim 5, wherein the generating a subjective evaluation model according to the ratio of the loss cost matrix to the evaluation level specifically comprises:
calculating an evaluation category cost vector X_i = D_ij · P_i according to the loss cost matrix and the proportions of the evaluation grades; wherein D_ij is the loss cost matrix and P_i is the proportion of the i-th evaluation grade;
and generating a subjective evaluation model according to the evaluation category cost vector.
7. The method for establishing an image quality detection model according to claim 1, wherein the step of fusing the objective evaluation model and the subjective evaluation model to obtain the image quality detection model specifically comprises the steps of:
obtaining an output result of the objective evaluation model and an output result of the subjective evaluation model;
generating an image quality category based on the output result of the objective evaluation model and the output result of the subjective evaluation model:

C = C_a, where a = argmax_a (x_a · θ_a)

wherein x_a is the output result of the subjective evaluation model and θ_a is the output result of the objective evaluation model;
based on the image quality category, an image quality detection model is generated.
8. An image quality detection model building system is characterized in that:
the evaluation acquisition unit is used for acquiring the quality evaluation results of the images and taking each image quality evaluation result and the corresponding image data as an evaluation subtask; the image quality evaluation result is obtained by evaluating the K images by N persons respectively;
the objective model generation unit is used for carrying out iterative training on a preset meta-learning model based on all the evaluation subtasks to obtain an objective evaluation model of the image;
the subjective model generating unit is used for establishing a subjective evaluation model of the image; and,
and the detection model generation unit is used for fusing the objective evaluation model and the subjective evaluation model to obtain an image quality detection model.
9. A computer device, characterized by comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, performs the image quality detection model establishing method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized by comprising a computer program that can be loaded by a processor to execute the image quality detection model establishing method according to any one of claims 1-7.
CN202211502424.9A 2022-11-28 2022-11-28 Image quality detection model building method and system Active CN115760822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211502424.9A CN115760822B (en) 2022-11-28 2022-11-28 Image quality detection model building method and system


Publications (2)

Publication Number Publication Date
CN115760822A true CN115760822A (en) 2023-03-07
CN115760822B CN115760822B (en) 2024-03-19

Family

ID=85339379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211502424.9A Active CN115760822B (en) 2022-11-28 2022-11-28 Image quality detection model building method and system

Country Status (1)

Country Link
CN (1) CN115760822B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118037602A (en) * 2024-04-15 2024-05-14 深圳市捷易科技有限公司 Image quality optimization method, device, electronic equipment, medium and program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728656A (en) * 2019-09-06 2020-01-24 西安电子科技大学 Meta-learning-based no-reference image quality data processing method and intelligent terminal
CN111539404A (en) * 2020-04-16 2020-08-14 华北电力大学 Full-reference image quality evaluation method based on structural clues
CN113033693A (en) * 2021-04-09 2021-06-25 中国矿业大学 User subjective attribute fused personalized image aesthetic evaluation method and device
CN114066857A (en) * 2021-11-18 2022-02-18 烟台艾睿光电科技有限公司 Infrared image quality evaluation method and device, electronic equipment and readable storage medium
CN114972232A (en) * 2022-05-17 2022-08-30 西安电子科技大学 No-reference image quality evaluation method based on incremental meta-learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHU Hancheng: "Research on User Personality Analysis and Personalized Image Aesthetic Evaluation", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 07, pages 138-15 *



Similar Documents

Publication Publication Date Title
JP6993027B2 (en) Image analysis methods, equipment and computer programs
CN108090902B (en) Non-reference image quality objective evaluation method based on multi-scale generation countermeasure network
CN103528617B (en) A kind of cockpit instrument identifies and detection method and device automatically
CN105574550A (en) Vehicle identification method and device
US20020186882A1 (en) Method and apparatus for generating special-purpose image analysis algorithms
CN111507370A (en) Method and device for obtaining sample image of inspection label in automatic labeling image
KR102045223B1 (en) Apparatus, method and computer program for analyzing bone age
CN110956615B (en) Image quality evaluation model training method and device, electronic equipment and storage medium
CN111242899B (en) Image-based flaw detection method and computer-readable storage medium
CN115138059B (en) Pull-up standard counting method, pull-up standard counting system and storage medium of pull-up standard counting system
CN113761259A (en) Image processing method and device and computer equipment
CN111914902A (en) Traditional Chinese medicine identification and surface defect detection method based on deep neural network
CN107563427A (en) The method and corresponding use that copyright for oil painting is identified
CN115760822B (en) Image quality detection model building method and system
CN114066848A (en) FPCA appearance defect visual inspection system
CN110082106B (en) Bearing fault diagnosis method based on Yu norm deep measurement learning
CN111414930B (en) Deep learning model training method and device, electronic equipment and storage medium
CN106682604B (en) Blurred image detection method based on deep learning
CN106951924B (en) Seismic coherence body image fault automatic identification method and system based on AdaBoost algorithm
CN113705310A (en) Feature learning method, target object identification method and corresponding device
CN112508946B (en) Cable tunnel anomaly detection method based on antagonistic neural network
JP2020077158A (en) Image processing device and image processing method
CN109767430A (en) The quality determining method and quality detecting system of valuable bills
CN113919983A (en) Test question portrait method, device, electronic equipment and storage medium
CN112487227A (en) Deep learning fine-grained image classification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant