CN105389594B - Information processing method and electronic equipment - Google Patents
- Publication number
- CN105389594B CN105389594B CN201510800602.XA CN201510800602A CN105389594B CN 105389594 B CN105389594 B CN 105389594B CN 201510800602 A CN201510800602 A CN 201510800602A CN 105389594 B CN105389594 B CN 105389594B
- Authority
- CN
- China
- Prior art keywords
- image
- training
- quality
- images
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the invention discloses an information processing method and electronic equipment, wherein the information processing method comprises the following steps: sampling one training image to form n image training groups, wherein each image training group comprises m image patches from the training image, n is an integer not less than 1, and m is an integer not less than 2; training a neural network by using the n image training groups and the quality score of the training image to form a training result; and determining an image quality measurement parameter based on the training result, the image quality measurement parameter being used for quality scoring of an image to be measured.
Description
Technical Field
The present invention relates to the field of information technologies, and in particular, to an information processing method and an electronic device.
Background
With the proliferation of consumer electronic cameras, especially mobile phone cameras, wearable cameras, vehicle-mounted cameras, and aerial cameras, electronic devices urgently need to recognize high-quality images among large numbers of continuously shot images. On one hand, an image quality measurement system can select high-quality images from massive photo albums, making it convenient for users to organize their images; on the other hand, high-quality images can be selected for image recognition applications such as image translation and image search, thereby improving image translation efficiency and image search efficiency.
Some image quality measurement methods are provided in the prior art, but either their accuracy is not high enough or their processing procedures are complicated; therefore, providing a simple, convenient and accurate image quality measurement method is a problem to be solved in the prior art.
Disclosure of Invention
In view of the above, embodiments of the present invention are directed to an information processing method and an electronic device, which at least partially solve the problem that the image quality score is not accurate enough and/or the processing procedure is complicated.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a first aspect of an embodiment of the present invention provides an information processing method, where the method includes:
sampling a training image to form n image training groups; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
training a neural network by using the n image training groups and the quality scores of the training images to form a training result;
determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
Based on the above scheme, the method further comprises:
outputting a first image group before sampling the training images; the first image group at least comprises two images to be scored;
receiving the grading and sorting information of each image in the first image group;
and determining the quality scores of the images in the first image group as the training images based on the grading ranking information.
Based on the above scheme, the sampling a training image to form n image training sets includes:
carrying out n times of random segmentation on one training image; wherein each random segmentation segments the training image into m of the image patches.
Based on the above scheme, the training a neural network by using the n image training groups and the quality scores of the training images to form a training result, including:
using the n image training groups and the quality scores of the training images to train Q = Σ(x=1..X) y(x)·P(x), so as to obtain the value of P(x);
wherein y(x) represents the x-th quality score; P(x) is the probability that the training image has the x-th quality score; X is the total number of quality scores and is a positive integer not less than 2; and Q is the quality score of the training image.
Based on the above scheme, the method further comprises:
updating the image quality measurement parameters at regular time;
carrying out image sampling on an image to be detected to form an image group to be detected;
and according to the image quality measurement parameters updated regularly, performing quality scoring on the image group to be measured.
A second aspect of the embodiments of the present invention also provides an electronic device, including:
the forming unit is used for sampling a training image to form n image training groups; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
the training unit is used for training the neural network by using the n image training groups and the quality scores of the training images to form a training result;
a first determination unit configured to determine an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
Based on the above scheme, the electronic device further includes:
an output unit, configured to output a first image group before sampling the training image; the first image group at least comprises two images to be scored;
the receiving unit is used for receiving the grading and sorting information of each image in the first image group;
and the second determining unit is used for determining the quality scores of the images in the first image group as the training images based on the grading ranking information.
Based on the above scheme, the forming unit is specifically configured to perform n-time random segmentation on one training image; wherein each random segmentation segments the training image into m of the image patches.
Based on the above scheme, the training unit is specifically configured to use the n image training groups and the quality scores of the training images to train Q = Σ(x=1..X) y(x)·P(x), so as to obtain the value of P(x);
wherein y(x) represents the x-th quality score; P(x) is the probability that the training image has the x-th quality score; X is the total number of quality scores and is a positive integer not less than 2; and Q is the quality score of the training image.
Based on the above scheme, the electronic device further comprises a measurement unit:
the first determining unit is further configured to update the image quality measurement parameter at regular time;
the forming unit is also used for carrying out image sampling on the image to be detected to form an image group to be detected;
and the measurement unit is used for performing quality scoring on the image group to be measured according to the regularly updated image quality measurement parameters.
According to the information processing method and electronic equipment provided by the embodiments of the present invention, one training image is divided into n image training groups. Each image training group can be used to train the neural network once, so one training image can train the neural network n times, reducing the number of training images required. Meanwhile, the n image training groups correspond to one quality score, which reduces both the inaccuracy of the training result caused by differences among the quality scores of multiple training images and the resulting inaccuracy when the neural network subsequently scores the quality of an image to be measured.
Drawings
Fig. 1 is a schematic flowchart of a first information processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a second information processing method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an image sharpness score evaluation system according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a neural network according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to the drawings and the specific embodiments of the specification.
The first embodiment is as follows:
as shown in fig. 1, the present embodiment provides an information processing method, including:
step S110: sampling a training image to form n image training groups; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
step S120: training a neural network by using the n image training groups and the quality scores of the training images to form a training result;
step S130: determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
The information processing method in this embodiment may be applied to various electronic devices such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a server, a service platform, or the like.
Step S110 samples one training image, which may include segmenting the training image n times, with the training image divided into m non-overlapping image patches each time. Obviously, each image patch is a part of the training image.
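As a rough illustration, the sampling in step S110 could be sketched as below. This is a minimal sketch assuming square patches tiled on a randomly offset grid; the patent does not fix a concrete sampling scheme, and the function and parameter names are hypothetical.

```python
import numpy as np

def sample_training_groups(image, n, grid=(3, 3), patch=32, seed=None):
    """Form n image training groups, each of m = rows * cols patches.

    Each pass places a rows x cols grid of patch-sized blocks at a
    random offset, so the m patches within one group never overlap.
    Illustrative sketch only; the patent leaves the scheme open.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    h, w = image.shape[:2]
    groups = []
    for _ in range(n):
        oy = int(rng.integers(0, h - rows * patch + 1))
        ox = int(rng.integers(0, w - cols * patch + 1))
        group = [image[oy + r * patch: oy + (r + 1) * patch,
                       ox + c * patch: ox + (c + 1) * patch]
                 for r in range(rows) for c in range(cols)]
        groups.append(group)
    return groups

img = np.zeros((128, 128), dtype=np.uint8)
gs = sample_training_groups(img, n=5, grid=(3, 3), patch=32, seed=0)
print(len(gs), len(gs[0]), gs[0][0].shape)  # 5 9 (32, 32)
```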
In this embodiment the training image corresponds to one quality score. The quality score here is a quality parameter characterizing the training image; it may be a sharpness score of the training image. For example, if sharpness is evaluated on a 10-point scale, the training image corresponds to a score from 0 to 10, and the score represents the sharpness of the training image.
Generally, the quality score is in a one-to-one mapping relation with the quality parameter of the training image: for example, the higher the sharpness, the higher the sharpness score, and one sharpness score should not be used for two training images whose sharpness differs greatly.
Of course, the quality score described in this embodiment may also be a blur degree score, which represents the degree of blur of an image and is an evaluation value opposite to sharpness.
In step S120, the n image training groups are used to train the neural network to form a training result. The neural network here refers to an artificial neural network simulated by the electronic device. Artificial Neural Networks (ANNs) are also referred to as Neural Networks (NNs) or connection models. An artificial neural network is an algorithmic mathematical model that simulates the behavioral characteristics of animal neural networks and performs distributed parallel information processing. It achieves the purpose of processing information by adjusting the interconnections among a large number of internal nodes, depending on the complexity of the system. Computational models of artificial neural networks are modeled on the animal central nervous system (particularly the brain) and are used to estimate or approximate functions that depend on a large number of inputs and are generally unknown. Artificial neural networks are typically presented as systems of interconnected "neurons". Neural networks can compute values from inputs and are capable of machine learning and pattern recognition due to their adaptive nature.
In short, the trained neural network forms an information processing network in which the weight of each branch and the operational relationships among the branches are determined, so that after information to be measured is input, a processing result is obtained through the information processing network. In this embodiment, the training input for the neural network is the n image training groups together with the quality score of the training image. In the neural network formed in this way, m image patches are input during subsequent operation, and after being processed by the information processing network, the quality scores of the m image patches are obtained. It should be noted that the information processing network here is a network formed by the data-processing paths inside the electronic device.
In this embodiment, each image training group is the training input for one pass of training, so the n image training groups formed in step S110 give rise to at least n passes of training in step S120, all of which correspond to the quality score of the same training image.
Because there are n image training groups, n passes of training are performed. In this way, one training image supports n different passes of training, which reduces the number of training images required to train the neural network. Moreover, since the n image training groups come from the same training image and correspond to one quality score, compared with n passes of training on multiple different training images, this reduces the inaccuracy of the training result caused by deviations among the quality scores of multiple training images, and hence the inaccuracy of the image quality measurement parameter caused by an inaccurate training result.
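As a toy illustration of how one image's n training groups can drive n training passes against a single shared quality score, the sketch below regresses a trivial linear model on the mean intensity of each patch. The linear model merely stands in for the patent's (unspecified) neural network; all names and the feature choice are hypothetical.

```python
import numpy as np

def train_on_groups(groups, q, steps=200, lr=0.1, seed=0):
    """Run one gradient pass per image training group, all sharing the
    single quality score q of the one source training image. A toy
    linear model on mean patch intensity stands in for the neural
    network of the patent."""
    rng = np.random.default_rng(seed)
    w, b = float(rng.normal()), 0.0
    for _ in range(steps):
        for group in groups:                    # one pass per group
            feats = np.array([p.mean() for p in group])
            err = w * feats + b - q             # every group targets q
            w -= lr * float(np.mean(err * feats))
            b -= lr * float(np.mean(err))
    return w, b

# 5 groups of 9 identical normalized patches from one training image
img = np.full((96, 96), 127, dtype=np.uint8) / 255.0
groups = [[img[:32, :32]] * 9 for _ in range(5)]
w, b = train_on_groups(groups, q=7.0)
print(round(w * img.mean() + b, 2))  # converges to the shared score 7.0
```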
Example two:
as shown in fig. 1, the present embodiment provides an information processing method, including:
step S110: sampling a training image to form n image training groups; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
step S120: training a neural network by using the n image training groups and the quality scores of the training images to form a training result;
step S130: determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
As shown in fig. 2, the method further comprises:
step S101: outputting a first image group before sampling the training images; the first image group at least comprises two images to be scored;
step S102: receiving the grading and sorting information of each image in the first image group;
step S103: and determining the quality scores of the images in the first image group as the training images based on the grading ranking information.
In this embodiment, the first image group may include at least two images to be scored. For example, the electronic device outputs image A and image B, requesting the user to compare the sharpness of image A and image B. In step S102, scoring ranking information will be received, for example the sharpness ranking of image A and image B. In a specific implementation, step S101 may output the images to be scored of the first image group on the same screen each time, so that the user can conveniently compare them and then give the scoring ranking information; the output is repeated until the scoring ranking between any two images to be scored in the first image group is determined. In step S103, the quality scores of the respective images in the first image group are determined according to the scoring ranking information.
Of course, in a specific implementation, the quality score of each image in the first image group may also be received directly from the user. However, different users may understand quality parameters such as sharpness very differently; if the quality scores of the training images are determined in this way, the image quality measurement parameter obtained from them may score images with low accuracy.
By contrast, when different users compare two images of differing quality and the scoring ranking information is obtained through that comparison, the probability of disagreement between users is obviously smaller than the probability of deviations caused by users directly assigning quality scores.
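One simple way to turn the collected pairwise comparisons into quality scores is to rank images by win count and spread scores evenly over the scale, as sketched below. This Borda-style scheme is only an assumption for illustration; the patent requires only that the scores follow the ranking, not this particular rule.

```python
from collections import defaultdict

def scores_from_comparisons(comparisons, max_score=10):
    """Derive quality scores from (winner, loser) sharpness comparisons
    by win-count ranking. Hypothetical helper; the patent does not fix
    a concrete score-assignment rule."""
    wins = defaultdict(int)
    ids = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        ids.update((winner, loser))
    ranked = sorted(ids, key=lambda i: wins[i])          # worst -> best
    k = len(ranked)
    return {img: max_score * idx / (k - 1) if k > 1 else max_score
            for idx, img in enumerate(ranked)}

# A is sharper than B and C; B is sharper than C
scores = scores_from_comparisons([("A", "B"), ("A", "C"), ("B", "C")])
print(scores)  # {'C': 0.0, 'B': 5.0, 'A': 10.0}
```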
Example three:
as shown in fig. 1, the present embodiment provides an information processing method, including:
step S110: sampling a training image to form n image training groups; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
step S120: training a neural network by using the n image training groups and the quality scores of the training images to form a training result;
step S130: determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
The step S110 may include:
carrying out n times of random segmentation on one training image; wherein each random segmentation segments the training image into m of the image patches.
In this embodiment, one training image is randomly segmented n times, and each segmentation forms m image patches. Random segmentation is adopted so that the neural network can obtain training results that are as accurate as possible. Certainly, in a specific implementation, non-random segmentation may also be adopted, for example with a segmentation strategy set in advance. The segmentation strategy may include: the first segmentation unequally divides the training image starting from a first edge of the image; the second segmentation unequally divides the training image starting from a second edge; the third segmentation unequally divides the training image starting from the middle of the image; the fourth segmentation equally divides the training image starting from the first edge; and so on. In this embodiment, the image patches formed by unequal division have unequal areas, while those formed by equal division have equal areas. The first edge and the second edge are different edges. Obviously, with different segmentation strategies, even the same training image yields different image patches, which ensures that at least one image patch differs among the n image training groups.
By adopting random segmentation, the natural segmentation of the image can be simulated as much as possible, so that the image quality measurement parameters obtained by training can be used for accurately grading the image to be measured.
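The preset segmentation strategy mentioned above (from a first edge, from a second edge, from the middle, and so on) could be enumerated as fixed grid anchors, for example as below. The placements are illustrative assumptions; the patent only requires that different passes yield at least one differing patch.

```python
def preset_offsets(h, w, rows, cols, patch):
    """Fixed grid placements for successive segmentation passes:
    from the first edge, from the second edge, and from the middle.
    Hypothetical sketch of a preset (non-random) strategy."""
    gy, gx = rows * patch, cols * patch
    assert gy <= h and gx <= w, "grid must fit inside the image"
    return [(0, 0),                            # from the first edge
            (h - gy, w - gx),                  # from the second edge
            ((h - gy) // 2, (w - gx) // 2)]    # from the middle

print(preset_offsets(128, 128, 3, 3, 32))  # [(0, 0), (32, 32), (16, 16)]
```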
Example four:
as shown in fig. 1, the present embodiment provides an information processing method, including:
step S110: sampling a training image to form n image training groups; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
step S120: training a neural network by using the n image training groups and the quality scores of the training images to form a training result;
step S130: determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
The step S120 may include:
using the n image training groups and the quality scores of the training images to train Q = Σ(x=1..X) y(x)·P(x), so as to obtain the value of P(x);
wherein y(x) represents the x-th quality score; P(x) is the probability that the training image has the x-th quality score; X is the total number of quality scores and is a positive integer not less than 2; and Q is the quality score of the training image.
In the present embodiment, Q = Σ(x=1..X) y(x)·P(x) corresponds to the neural network, and y(x) corresponds to at least one branch of the neural network.
And Q is the quality score of the training image, and each P (x) can be obtained through training.
Therefore, the quality of the image to be measured can be scored by utilizing the P (x) to obtain the corresponding quality score.
The functional relation of the neural network is adopted for training, and the method has the characteristic of simple and convenient realization.
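Taking Q as the expectation Σ(x=1..X) y(x)·P(x), consistent with the definitions of y(x), P(x), X and Q above, the score of an image can be computed directly once training has produced the probabilities P(x). A minimal sketch follows, with y(x) defaulting to the scores 1..X, an assumption not fixed by the patent.

```python
import numpy as np

def expected_quality(p, y=None):
    """Q = sum over x of y(x) * P(x): the score expected under the
    network's output distribution P. `y` defaults to scores 1..X,
    an illustrative assumption."""
    p = np.asarray(p, dtype=float)
    if y is None:
        y = np.arange(1, len(p) + 1)
    return float(np.dot(y, p))

print(round(expected_quality([0.1, 0.2, 0.7]), 2))  # 1*0.1 + 2*0.2 + 3*0.7 = 2.6
```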
Example five:
as shown in fig. 1, the present embodiment provides an information processing method, including:
step S110: sampling a training image to form n image training groups; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
step S120: training a neural network by using the n image training groups and the quality scores of the training images to form a training result;
step S130: determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
The method further comprises the following steps:
updating the image quality measurement parameters at regular time;
carrying out image sampling on an image to be detected to form an image group to be detected;
and according to the image quality measurement parameters updated regularly, performing quality scoring on the image group to be measured.
In this embodiment, the image quality measurement parameter is updated at regular times. The regular updating may be periodic or may follow specified time intervals, and the time intervals may be equal or unequal. For example, the update time may be chosen according to the time of day: during the daytime the electronic device is frequently used for other functions, so to reduce the occupation of device resources the image quality measurement parameter is not updated during the day but is updated at night.
In a specific application, the image to be measured is sampled, and the sampling generally follows the manner in which the training image is sampled in step S110, for example random sampling of the image to be measured. In this embodiment, an image group to be measured including m image patches may be formed by sampling the image to be measured. The m image patches are taken as the input of the neural network, which processes each image patch in the image group to be measured by using the regularly updated image quality measurement parameters, obtaining the quality score corresponding to the image to be measured, for example its sharpness score.
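The measurement stage could look like the sketch below: draw m patches from the image to be measured, score each with the trained network, and aggregate. Averaging the patch scores is one plausible aggregation but is not mandated by the patent, and `model` here is a hypothetical stand-in callable rather than the patent's network.

```python
import numpy as np

def score_test_image(image, model, m=9, patch=32, seed=None):
    """Sample m random patches from the image to be measured and
    average their scores. `model` is any callable patch -> score that
    applies the regularly updated quality-measurement parameters."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    scores = []
    for _ in range(m):
        y = int(rng.integers(0, h - patch + 1))
        x = int(rng.integers(0, w - patch + 1))
        scores.append(model(image[y:y + patch, x:x + patch]))
    return float(np.mean(scores))

# toy stand-in "network": mean brightness rescaled to a 0-10 score
model = lambda p: 10.0 * float(p.mean()) / 255.0
img = np.full((64, 64), 127, dtype=np.uint8)
print(round(score_test_image(img, model, seed=1), 2))  # 10*127/255 ≈ 4.98
```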
The information processing method described in this embodiment trains the neural network by using n image blocks formed by one training image, and then performs quality scoring on the image to be measured by using the neural network, and has the characteristics of simple neural network training, fewer required training images, and high accuracy of quality scoring.
Example six:
as shown in fig. 3, the present embodiment provides an electronic device, including:
a forming unit 110, configured to sample one training image to form n training sets of images; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
a training unit 120, configured to train a neural network by using the n image training groups and the quality scores of the training images to form a training result;
a first determining unit 130 for determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
The electronic device described in this embodiment may be various electronic devices capable of performing processing.
The forming unit 110 and the first determining unit 130 may correspond to a processor or a processing circuit; the processor may include an application processor, a central processing unit, a microprocessor, a digital signal processor, or a programmable array, etc. The processing circuit may comprise an application-specific integrated circuit. The training unit may correspond to a learning machine, a training machine, or another information processing structure capable of training the neural network.
In this embodiment, the forming unit 110 samples a training image to form n training sets of images. And each image training set can be used for training the neural network model once to obtain a training result.
In this embodiment, the image quality measurement parameter is obtained by training the neural network, which reduces the number of training images required in the training process and alleviates the problem of inaccurate training results caused by errors in the quality scores of multiple training images.
Example seven:
as shown in fig. 3, the present embodiment provides an electronic device, including:
a forming unit 110, configured to sample one training image to form n training sets of images; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
a training unit 120, configured to train a neural network by using the n image training groups and the quality scores of the training images to form a training result;
a first determining unit 130 for determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
The electronic device further includes:
an output unit, configured to output a first image group before sampling the training image; the first image group at least comprises two images to be scored;
the receiving unit is used for receiving the grading and sorting information of each image in the first image group;
and the second determining unit is used for determining the quality scores of the images in the first image group as the training images based on the grading ranking information.
In this embodiment, the output unit may correspond to a display screen, and the display screen may include a liquid crystal display, an electronic ink display, a projection display, or an organic light emitting diode OLED display. The display screen may be operable to display a first image group. In a specific implementation, the output unit may be configured to control the display screen to output two images to be scored of the first image group at a time.
The receiving unit may correspond to a human-computer interaction interface, and is configured to receive rating ranking information input by a user. The man-machine interaction interface can comprise a keyboard, a mouse, a touch screen or a floating touch screen or a voice interaction structure.
The second determining unit may also correspond to a processor or a processing circuit, and the like, and may be capable of determining the quality score of each training image based on the score ranking information, so as to ensure the accuracy of the final training result of the training unit 120.
Example eight:
as shown in fig. 3, the present embodiment provides an electronic device, including:
a forming unit 110, configured to sample one training image to form n training sets of images; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
a training unit 120, configured to train a neural network by using the n image training groups and the quality scores of the training images to form a training result;
a first determining unit 130 for determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
The forming unit 110 is specifically configured to perform n-time random segmentation on one training image; wherein each random segmentation segments the training image into m of the image patches.
In this embodiment, one training image is segmented n times, each segmentation yielding m image patches. For example, a training image is segmented 10 times, each segmentation dividing the training image into 9 image patches of equal or unequal area. The 10 segmentations form 10 image training groups, each comprising 9 image patches. The 9 image patches serve as the input for one pass of neural network training, and the quality score serves as the required output of the training.
In this embodiment, the forming unit 110 randomly segments the training image so that the quality of the image to be measured can subsequently be scored accurately, avoiding the insufficient scoring accuracy that may result from segmenting the training image in one fixed segmentation mode.
Example nine:
as shown in fig. 3, the present embodiment provides an electronic device, including:
a forming unit 110, configured to sample one training image to form n training sets of images; each of the training sets of images comprises m patches from the training images; n is an integer not less than 1; m is an integer not less than 2;
a training unit 120, configured to train a neural network by using the n image training groups and the quality scores of the training images to form a training result;
a first determining unit 130 for determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
The training unit 120 is specifically configured to pair the n image training sets and the quality scores of the training imagesTraining to obtain the value of P (x);
wherein y (x) represents the x-th mass fraction; p (x) is the probability that the training image is the xth quality score; x is the total number of the mass fractions and is a positive integer not less than 2; and Q is the quality fraction of the training image.
In the present embodiment, P(x) is part of the functional relation realised by the neural network; its value is obtained through training and can subsequently be used for quality scoring of the image to be measured.
The training of the neural network by the training unit 120 is simple and convenient to implement.
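The relation Q = y(1)P(1) + … + y(X)P(X) can be read as the expectation of the candidate quality scores under the network's output distribution. A minimal sketch (the function name and argument layout are illustrative assumptions):

```python
def expected_quality_score(scores, probs):
    """Compute Q = y(1)*P(1) + y(2)*P(2) + ... + y(X)*P(X).

    `scores` holds the X candidate quality scores y(1)..y(X);
    `probs` holds the network's probability P(x) for each of them.
    """
    assert abs(sum(probs) - 1.0) < 1e-9, "P(x) must form a probability distribution"
    return sum(y * p for y, p in zip(scores, probs))
```

For the 7-level scoring used later in this document, `scores` would simply be `[1, 2, 3, 4, 5, 6, 7]`.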
Example ten:
As shown in FIG. 3, the present embodiment provides an electronic device, including:
a forming unit 110, configured to sample one training image to form n image training groups; each image training group comprises m image blocks from the training image; n is an integer not less than 1; m is an integer not less than 2;
a training unit 120, configured to train a neural network by using the n image training groups and the quality scores of the training images to form a training result;
a first determining unit 130 for determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for carrying out quality grading on the image to be measured.
The electronic device further includes a measurement unit:
the first determining unit 130 is configured to update the image quality measurement parameter at regular intervals;
the forming unit 110 is further configured to perform image sampling on an image to be measured to form an image group to be measured;
and the measurement unit is configured to perform quality scoring on the image group to be measured according to the regularly updated image quality measurement parameters.
First, the electronic device provided in this embodiment is a further improvement on the electronic device provided in any one of the sixth to ninth embodiments. In this embodiment, the first determining unit 130 is further configured to update the image quality measurement parameter at regular intervals, specifically, to re-determine the image quality measurement parameter from the training result updated by the training unit, so that timely updating improves the accuracy of the quality score of the image to be measured.
Two specific examples are provided below in connection with any of the embodiments described above:
example one:
the present embodiment provides a system for measuring an image sharpness score. The system comprises a neural network and an image quality database. The image quality data is divided into two parts of training and measuring by the neural network. And an image set is arranged in the image quality library. The image set stores training images.
The first step: the image set outputs image pair groups. An image pair group here comprises at least two training images from the image set, and the groups may be formed by combining training images at random. Of course, to reduce the number of evaluation rounds, images that have not yet been selected may be chosen preferentially, or exclusively. An image pair group here corresponds to the first image group in the foregoing embodiments.
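The preference for not-yet-selected images might be realised as below. This is a sketch under assumed names (`next_pair_group`, the mutable `seen` set); the embodiment itself does not prescribe a selection algorithm.

```python
import random

def next_pair_group(image_ids, seen, k=3):
    """Draw k images for one evaluation round, preferring images that
    have not been selected yet, so every image is covered in few rounds."""
    unseen = [i for i in image_ids if i not in seen]
    if len(unseen) >= k:
        group = random.sample(unseen, k)
    else:
        # top up with already-seen images once the unseen pool runs low
        rest = random.sample([i for i in image_ids if i in seen],
                             k - len(unseen))
        group = unseen + rest
    seen.update(group)
    return group
```

With 5 images and k = 3, two rounds suffice for every image to have been evaluated at least once.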
The second step: crowd-sourced image evaluation. The aim is to collect the quality scores that testers assign to the images in the image set. For each tester, an image pair group is drawn from the image set for evaluation. To ease evaluation, each round randomly selects only a limited number of images (for example, 2 to 4) to form an image pair group, and the tester ranks those images by quality to form a quality order. For example, 2 > 3 > 1 indicates that the 2nd image is best and the 1st image is worst. Rounds are repeated until every image in the image set has been evaluated at least once. The system then calculates the (absolute) quality score of each image from the per-round quality orders. The quality order here corresponds to the score ranking information in the foregoing embodiments.
The third step: quality-order regression. The evaluation system calculates the (absolute) quality score of each image from the quality orders collected for it; this operation is called quality-order regression, and it determines the quality scores of the training images.
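The document does not fix the regression itself. One simple stand-in, shown purely for illustration, averages each image's normalised rank across rounds and rescales it onto the score range:

```python
from collections import defaultdict

def scores_from_orders(orders, num_levels=7):
    """Regress per-round quality orders into absolute scores in [1, num_levels].

    Each order lists image ids best-first, e.g. [2, 3, 1] for '2 > 3 > 1'.
    This stand-in averages each image's normalised rank over all rounds
    and maps it linearly onto the score range; the actual regression
    used by the system is left unspecified in the text.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for order in orders:
        k = len(order)
        for pos, img in enumerate(order):
            # best position contributes 1.0, worst contributes 0.0
            totals[img] += (1.0 - pos / (k - 1)) if k > 1 else 1.0
            counts[img] += 1
    return {img: 1 + (num_levels - 1) * totals[img] / counts[img]
            for img in totals}
```

For the single order 2 > 3 > 1 this yields scores 7, 4, and 1 respectively.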
The fourth step: and returning the operation result of the third step to the image quality database.
The fifth step: the image quality database outputs a training image together with its quality score, for data set expansion. Data set expansion here may include: randomly sampling each training image to obtain image training groups, each image training group comprising m image blocks; one training image is sampled n times to obtain n image training groups.
The sixth step: each image block of an image training group is input into the neural network, and a quality score is computed, until the quality score produced by the neural network training matches the quality score associated with that image training group. In a specific implementation, the neural network is updated continuously: it regularly re-acquires the image quality data set for retraining, and the image quality measurement parameters are obtained from the training result.
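As a hedged sketch of this training step: the model below replaces the multi-layer network of the embodiment with a single softmax layer, keeping only the input/output contract (flattened image block in, distribution P(x) over quality scores out). All names and hyper-parameters are assumptions of the sketch, not of the original system.

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_quality_net(patches, labels, num_scores=7, lr=0.5, epochs=200):
    """Fit a one-layer softmax model mapping flattened image blocks to a
    distribution P(x) over quality scores 1..num_scores, by gradient
    descent on the cross-entropy loss."""
    X = patches.reshape(len(patches), -1).astype(float)
    Y = np.eye(num_scores)[np.asarray(labels) - 1]   # one-hot targets
    W = np.zeros((X.shape[1], num_scores))
    b = np.zeros(num_scores)
    for _ in range(epochs):
        P = softmax(X @ W + b)
        G = (P - Y) / len(X)          # gradient of cross-entropy w.r.t. logits
        W -= lr * (X.T @ G)
        b -= lr * G.sum(axis=0)
    return W, b

def predict_probs(patches, W, b):
    """Measurement side: return P(x) for each image block."""
    return softmax(patches.reshape(len(patches), -1).astype(float) @ W + b)
```

On a toy separable data set (dark blocks scored 1, bright blocks scored 7), the model quickly concentrates its probability mass on the correct score.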
The seventh step: the training part of the neural network transmits the image quality measurement parameters to the measurement part.
The eighth step: an image uploaded by a user is received.
The ninth step: the image is randomly divided into blocks.
The tenth step: the image blocks formed by the random blocking of the ninth step are passed to the measurement part of the neural network, which evaluates them using the image quality measurement parameters obtained from the training part, yielding the quality score of the image uploaded by the user. The quality score here may include a sharpness score of the image.
The image sharpness score is a scalar that measures the sharpness, or conversely the blurriness, of an image.
In the measurement process, each image uploaded by a user is passed through the neural network to obtain a predicted image sharpness score.
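The measurement side of this flow can be sketched as below. `patch_score` stands in for the trained network's per-block prediction, and averaging the per-block scores is an assumption of the sketch; the text does not fix the aggregation rule.

```python
import random

def score_uploaded_image(image, patch_score, m=9, patch_h=2, patch_w=2):
    """Score one uploaded image: randomly block it into m image blocks,
    score each block with the trained predictor, and average the results
    into a single sharpness score."""
    h, w = len(image), len(image[0])
    total = 0.0
    for _ in range(m):
        top = random.randint(0, h - patch_h)
        left = random.randint(0, w - patch_w)
        block = [row[left:left + patch_w] for row in image[top:top + patch_h]]
        total += patch_score(block)
    return total / m
```

If every block receives the same score, the image score equals that per-block score, as expected of a mean.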
It is noted that, in a specific implementation, the first through seventh steps are performed sequentially, while the eighth through tenth steps may be performed in parallel with the first through seventh steps.
Example two:
FIG. 5 is a schematic diagram of a neural network that may be used in embodiments of the present invention. After the image blocks of an image training group are fed to the neural network as input (corresponding to S1 and Sj in FIG. 5), the network is trained, yielding the values of h11 to h1i and h21 to h2r. The neural network has 7 outputs, corresponding respectively to the probabilities of the image quality scores 1 to 7. For example, the probability that the image quality score is 1 is P(y=1|x), the probability that it is 2 is P(y=2|x), and the probability that it is 7 is P(y=7|x).
Finally, the trained neural network is used for measurement, and the resulting quality score can be expressed by the following functional relation: Q = P(y=1|x)·1 + P(y=2|x)·2 + … + P(y=7|x)·7. Of course, in a specific implementation, other functional relations may be used; the above is merely an example.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (8)
1. An information processing method, the method comprising:
sampling a training image to form n image training groups; each image training group comprises m image blocks from the training image; n is an integer not less than 1; m is an integer not less than 2;
training a neural network by using the n image training groups and the quality scores of the training images to form a training result;
determining an image quality measurement parameter based on the training result; the image quality measurement parameters are used for quality scoring of an image to be measured, wherein the image blocks formed by blocking the image to be measured are input into the neural network that determines the image quality measurement parameters, so as to obtain the quality score of the image to be measured;
wherein, the training the neural network by using the n image training groups and the quality scores of the training images to form a training result comprises:
training on the n image training groups and the quality scores of the training images according to the relation Q = y(1)P(1) + y(2)P(2) + … + y(X)P(X), to obtain the value of P(x);
wherein y(x) represents the x-th quality score, the quality score being obtained from crowd-sourced image evaluation before the training image is blocked; P(x) is the probability that the training image has the x-th quality score; X is the total number of quality scores and is a positive integer not less than 2; and Q is the quality score of the training image.
2. The method of claim 1,
the method further comprises the following steps:
outputting a first image group before sampling the training images; the first image group at least comprises two images to be scored;
receiving score ranking information for each image in the first image group;
and determining, based on the score ranking information, the quality scores of the images in the first image group, the images serving as the training images.
3. The method of claim 1,
the sampling of one training image to form n image training sets includes:
randomly segmenting one training image n times; wherein each random segmentation divides the training image into m image blocks.
4. The method of claim 1, 2 or 3,
the method further comprises the following steps:
updating the image quality measurement parameters at regular intervals;
carrying out image sampling on an image to be detected to form an image group to be detected;
and performing quality scoring on the image group to be measured according to the regularly updated image quality measurement parameters.
5. An electronic device, the electronic device comprising:
a forming unit, configured to sample one training image to form n image training groups; each image training group comprises m image blocks from the training image; n is an integer not less than 1; m is an integer not less than 2;
the training unit is used for training the neural network by using the n image training groups and the quality scores of the training images to form a training result;
a first determining unit, configured to determine an image quality measurement parameter based on the training result; the image quality measurement parameters are used for quality scoring of an image to be measured, wherein the image blocks formed by blocking the image to be measured are input into the neural network that determines the image quality measurement parameters, so as to obtain the quality score of the image to be measured;
wherein the training unit is specifically configured to train on the n image training groups and the quality scores of the training images according to the relation Q = y(1)P(1) + y(2)P(2) + … + y(X)P(X), to obtain the value of P(x);
wherein y(x) represents the x-th quality score, the quality score being obtained from crowd-sourced image evaluation before the training image is blocked; P(x) is the probability that the training image has the x-th quality score; X is the total number of quality scores and is a positive integer not less than 2; and Q is the quality score of the training image.
6. The electronic device of claim 5,
the electronic device further includes:
an output unit, configured to output a first image group before sampling the training image; the first image group at least comprises two images to be scored;
a receiving unit, configured to receive score ranking information for each image in the first image group;
and a second determining unit, configured to determine, based on the score ranking information, the quality scores of the images in the first image group, the images serving as the training images.
7. The electronic device of claim 5,
the forming unit is specifically configured to randomly segment one training image n times; wherein each random segmentation divides the training image into m image blocks.
8. The electronic device of claim 5, 6 or 7,
the electronic device further includes a measurement unit:
the first determining unit is further configured to update the image quality measurement parameter at regular intervals;
the forming unit is also used for carrying out image sampling on the image to be detected to form an image group to be detected;
the first determining unit is further configured to perform quality scoring on the image group to be measured according to the regularly updated image quality measurement parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510800602.XA CN105389594B (en) | 2015-11-19 | 2015-11-19 | Information processing method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510800602.XA CN105389594B (en) | 2015-11-19 | 2015-11-19 | Information processing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105389594A CN105389594A (en) | 2016-03-09 |
CN105389594B true CN105389594B (en) | 2020-10-27 |
Family
ID=55421864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510800602.XA Active CN105389594B (en) | 2015-11-19 | 2015-11-19 | Information processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105389594B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019222936A1 (en) * | 2018-05-23 | 2019-11-28 | 富士通株式会社 | Method and device for training classification neural network for semantic segmentation, and electronic apparatus |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103679188A (en) * | 2012-09-12 | 2014-03-26 | 富士通株式会社 | Image classifier generating method and device as well as image classifying method and device |
CN104915945A (en) * | 2015-02-04 | 2015-09-16 | 中国人民解放军海军装备研究院信息工程技术研究所 | Quality evaluation method without reference image based on regional mutual information |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2839778C (en) * | 2011-06-26 | 2019-10-29 | Universite Laval | Quality control and assurance of images |
CN103544708B (en) * | 2013-10-31 | 2017-02-22 | 南京邮电大学 | Image quality objective evaluation method based on MMTD |
CN103745466A (en) * | 2014-01-06 | 2014-04-23 | 北京工业大学 | Image quality evaluation method based on independent component analysis |
CN104134204B (en) * | 2014-07-09 | 2017-04-19 | 中国矿业大学 | Image definition evaluation method and image definition evaluation device based on sparse representation |
- 2015-11-19: application CN201510800602.XA filed; granted as patent CN105389594B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103679188A (en) * | 2012-09-12 | 2014-03-26 | 富士通株式会社 | Image classifier generating method and device as well as image classifying method and device |
CN104915945A (en) * | 2015-02-04 | 2015-09-16 | 中国人民解放军海军装备研究院信息工程技术研究所 | Quality evaluation method without reference image based on regional mutual information |
Also Published As
Publication number | Publication date |
---|---|
CN105389594A (en) | 2016-03-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||