CN110782448A - Rendered image evaluation method and device - Google Patents

Rendered image evaluation method and device

Info

Publication number
CN110782448A
CN110782448A (application CN201911029262.XA)
Authority
CN
China
Prior art keywords
image
neural network
convolutional neural
network model
training
Prior art date
Legal status
Pending
Application number
CN201911029262.XA
Other languages
Chinese (zh)
Inventor
喻一凡
王璐
Current Assignee
Guangdong 3vjia Information Technology Co Ltd
Original Assignee
Guangdong 3vjia Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong 3vjia Information Technology Co Ltd
Priority to CN201911029262.XA
Publication of CN110782448A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a device for evaluating a rendered image, relating to the technical field of home decoration design. The method comprises: first receiving a rendered image; then standardizing the rendered image to obtain a standardized rendered image; and finally inputting the standardized rendered image into a pre-trained convolutional neural network model to obtain the score probability distribution of the rendered image. The convolutional neural network model is a network model that introduces an Inception structure and adds a fully connected layer. With the added fully connected layer, the convolutional neural network model can learn the probability distribution of the subjective scores given by different designers to the same rendered image. The model is therefore closer to human perception and meets human aesthetic requirements. The invention can improve the accuracy of quality evaluation based on the convolutional neural network model, thereby realizing automatic scoring.

Description

Rendered image evaluation method and device
Technical Field
The invention relates to the technical field of home decoration design, in particular to a method and a device for evaluating a rendered image.
Background
At present, in home design, because designers differ in skill, the quality of the renderings they produce is uneven. Selecting excellent renderings from a large number of candidates enhances the user experience and improves the user's impression. In addition, for pictorial presentations, higher-scoring renderings generally attract more traffic.
Image-based quality evaluation generally comprises evaluation of image quality and evaluation of image aesthetics. Image quality refers to basic objective indexes such as blur, noise, and distortion; evaluation based on image aesthetics is relatively subjective, because different designers have different tendencies and different aesthetic understandings and levels, which makes overall quality evaluation of rendered images difficult.
Disclosure of Invention
The invention aims to provide a method and a device for evaluating a rendered image, so that an evaluation result is closer to human perception, the aesthetic requirements of human are met, the accuracy of quality evaluation is improved, and automatic scoring is realized.
The invention provides an evaluation method of a rendered image, which comprises the following steps: receiving a rendered image;
standardizing the rendered image to obtain a standardized rendered image; inputting the standardized rendered image into a pre-trained convolutional neural network model to obtain the score probability distribution of the rendered image; the convolutional neural network model is a network model that introduces an Inception structure and adds a fully connected layer.
Further, prior to receiving the rendered image, the method further comprises: introducing an Inception structure into the convolutional neural network, adding a fully connected layer, and constructing a convolutional neural network model; and training the convolutional neural network model on the training samples to obtain a pre-trained convolutional neural network model.
Further, the step of training the convolutional neural network model on the training samples to obtain a pre-trained convolutional neural network model includes: obtaining a training sample, wherein the training sample comprises a plurality of sets of rendered images, each set comprising an image sample and a first scoring probability distribution for the image sample; inputting the image sample into the convolutional neural network model and outputting a second scoring probability distribution of the image sample; determining the error between the first scoring probability distribution and the second scoring probability distribution based on a preset loss function; and training the parameters in the convolutional neural network model according to the error until the parameters converge, to obtain a pre-trained convolutional neural network model.
Further, the method further comprises: predicting the score of the rendered image based on its score probability distribution.
Further, the method further comprises: sorting all rendered images by score; and pushing a preset number of rendered images to the client based on the sorted result.
The invention provides an evaluation device for a rendered image, comprising: a receiving module for receiving a rendered image; a standardization processing module for standardizing the rendered image to obtain a standardized rendered image; and an input module for inputting the standardized rendered image into a pre-trained convolutional neural network model to obtain the score probability distribution of the rendered image; the convolutional neural network model is a network model that introduces an Inception structure and adds a fully connected layer.
Further, the apparatus further comprises: a building module for introducing an Inception structure into the convolutional neural network, adding a fully connected layer, and constructing a convolutional neural network model; and a training module for training the convolutional neural network model on the training samples to obtain a pre-trained convolutional neural network model.
Further, the training module comprises: an acquisition unit for acquiring a training sample, wherein the training sample comprises a plurality of sets of rendered images, each set comprising an image sample and a first scoring probability distribution for the image sample; an output unit for inputting the image sample into the convolutional neural network model and outputting a second scoring probability distribution of the image sample; a determining unit for determining the error between the first and second scoring probability distributions based on a preset loss function; and a training unit for training the parameters in the convolutional neural network model according to the error until the parameters converge, to obtain a pre-trained convolutional neural network model.
The invention also provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program that can run on the processor, and the processor implements the above evaluation method for a rendered image when executing the computer program.
The present invention also provides a computer readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to execute the method of evaluating a rendered image.
The method and the device for evaluating a rendered image provided by the invention first receive a rendered image; then standardize the rendered image to obtain a standardized rendered image; and finally input the standardized rendered image into a pre-trained convolutional neural network model to obtain the score probability distribution of the rendered image. The convolutional neural network model is a network model that introduces an Inception structure and adds a fully connected layer. With the added fully connected layer, the convolutional neural network model can learn the probability distribution of the subjective scores given by different designers to the same rendered image. The model is therefore closer to human perception and accords with human aesthetic requirements, so the method and the device can improve the accuracy of quality evaluation and realize automatic scoring.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of an evaluation method for a rendered image according to an embodiment of the present invention;
FIG. 2 is a flowchart of another method for evaluating a rendered image according to an embodiment of the present invention;
FIG. 3 is a flow chart of a convolutional neural network model;
FIG. 4 is a network framework diagram of a convolutional neural network model;
FIG. 5 is a flowchart of step S102 in FIG. 2;
fig. 6 is a schematic structural diagram of an evaluation apparatus for rendering an image according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another evaluation apparatus for rendering an image according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of the training module in fig. 7.
Reference numerals:
11-a receiving module; 12-a standardization processing module; 13-an input module; 14-building a module; 15-a training module; 16-a prediction module; 17-a sorting module; 18-a push module; 21-an acquisition unit; 22-an output unit; 23-a determination unit; 24-training unit.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Image quality evaluation can be classified into objective and subjective methods. According to whether a reference image is needed during evaluation, objective methods are divided into full-reference, reduced-reference, and no-reference methods. Because undistorted reference images are difficult to obtain for home renderings, no-reference quality evaluation has a wider application range and higher practical value, and deep learning is often applied to it. However, most deep-learning algorithms can only determine the quality class of an image, such as whether the image is contaminated by noise; the overall composition and aesthetic appeal are rarely evaluated.
Currently, in the field of home design, with the continuous application of AI (Artificial Intelligence) to home products, the task of scoring and screening rendered images by quality, including aesthetic appeal, has become extremely important. Because designers vary in skill, rendered images presented in applications differ widely in quality, and low-quality renderings are common, for example: an open design scene with several angles left undesigned, or inconsistent color matching with excessively pure colors used over large areas. Such judgments are mostly made by human perception and involve subjective opinion, so artificial perception modules need to be added to the network to improve scoring accuracy. Based on the above, the invention provides a method and a device for evaluating rendered images, in which a convolutional neural network model with a fully connected layer learns the probability distribution of the subjective scores given by different designers to the same rendered image. Because the model is closer to human perception, it accords with human aesthetic requirements; therefore, the method can improve the accuracy of quality evaluation and realize automatic scoring.
To facilitate understanding of the embodiment, a detailed description will be first given of an evaluation method for a rendered image disclosed in the embodiment of the present invention.
The first embodiment is as follows:
referring to fig. 1, the present invention provides an evaluation method of a rendered image, wherein the evaluation method may include the steps of:
step S110, receiving a rendering image;
in the embodiment of the invention, the number of the rendering images is not limited, and the received rendering images are not uniform in format and are not uniform in naming format.
Step S120, the rendering image is standardized to obtain a standardized rendering image;
in the embodiment of the present invention, the process of normalizing the rendered image is as follows: 1) format conversion is carried out on all rendering images, and the rendering images are unified into a fixed format; 2) renaming operation is carried out on all rendering images, and renaming formats are unified; the purpose of the uniform rendering image naming format is to facilitate processing and data tracking, so all rendering images are named uniformly according to a self-defined rule.
Step S130, inputting the standardized rendered image into the pre-trained convolutional neural network model to obtain the score probability distribution of the rendered image; the convolutional neural network model is a network model that introduces an Inception structure and adds a fully connected layer.
In the embodiment of the invention, the pre-trained convolutional neural network model is the key point of the invention. The model adds a visual-aesthetics loss function over a large number of rendered images manually scored by professional designers, learns the underlying score probability distribution through convolutional neural network training to obtain the model parameters, and can then score rendered images on quality and aesthetics.
The method for evaluating a rendered image provided by the embodiment of the invention first receives a rendered image; then standardizes the rendered image to obtain a standardized rendered image; and finally inputs the standardized rendered image into a pre-trained convolutional neural network model to obtain the score probability distribution of the rendered image. The convolutional neural network model is a network model that introduces an Inception structure and adds a fully connected layer. With the added fully connected layer, the convolutional neural network model can learn the probability distribution of the subjective scores given by different designers to the same rendered image, so the model is closer to human perception and accords with human aesthetic requirements. Therefore, the embodiment of the invention can improve the accuracy of quality evaluation and realize automatic scoring.
Further, referring to fig. 2, before performing step S110, the evaluation method may further include the steps of:
Step S101, introducing an Inception structure into a convolutional neural network, adding a fully connected layer, and constructing a convolutional neural network model;
the embodiment of the invention utilizes a basic convolutional neural network to extract the image characteristics of the rendered image, and the network depth adopted by the convolutional neural network is 47-layer network. Referring to fig. 3, in the embodiment of the present invention, some network modifications or additions are performed on a convolutional neural network, an inclusion mechanism is introduced to better extract image features, specifically, features of different scales of a rendered image can be extracted, and global structure information and local detail information of the rendered image are respectively extracted, where the global structure information is a global feature and the local detail information is a local feature; the global features are propagated layer by layer, so that all the global features of a rendered image are obtained by each layer network, and the local features refer to that an acceptance mechanism can directly connect some local features of a certain layer network to a network layer to be connected, so that the local features of the layer network are obtained.
Since the rendered image is a color RGB image, the image features are the convolution features of the RGB image, which contain all the information the rendered image expresses. The introduced Inception mechanism applies 1x1, 3x3, and 5x5 convolutions to the same input and splices the extracted features together, so that image features of different scales can be fused. Extracting the local and global features of the rendered image with this combination of a convolutional neural network and the Inception mechanism fully expresses the structural information of the rendered image.
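The multi-branch splicing step can be illustrated with a minimal sketch: 1x1, 3x3, and 5x5 filters (random placeholder kernels, not trained weights) are applied to one single-channel input with "same" padding and the three feature maps are stacked. A real implementation would use a deep-learning framework with many kernels per branch; this only shows the fusion of scales.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same'-padded 2-D filtering (cross-correlation, as in CNN
    'convolution'), single channel, for illustration only."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

def inception_branch_concat(img):
    """Apply 1x1, 3x3 and 5x5 filters to the same input and stack the
    resulting same-sized feature maps, mirroring the splicing step."""
    rng = np.random.default_rng(0)
    branches = [conv2d_same(img, rng.standard_normal((k, k)))
                for k in (1, 3, 5)]
    return np.stack(branches)  # shape: (n_branches, H, W)

features = inception_branch_concat(np.ones((8, 8)))
```

Because every branch uses "same" padding, the outputs share the input's spatial size, which is what makes channel-wise concatenation possible.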
Referring to fig. 4, like a general convolutional neural network, the network framework of the convolutional neural network model in the embodiment of the present invention is also based on a deep convolutional neural network. In designing the network, on one hand, the network is deepened to 47 layers (the exact depth is not specifically limited herein); on the other hand, an Inception mechanism is adopted, extracting image features with filters of different sizes so that aesthetic information in the rendered image can be learned better. The skip connections in the Inception structure carry features of one network layer across levels to other layers, instead of passing them backwards one layer at a time, so the two skip-level feature sets need to be merged and spliced. Finally, a fully connected layer is added at the end of the convolutional neural network to complete the model.
Step S102, training the convolutional neural network model on the training samples to obtain a pre-trained convolutional neural network model.
In the embodiment of the present invention, referring to fig. 5, step S102 may include the following steps:
step S301, obtaining a training sample; the training sample comprises a plurality of rendering image sets, and each rendering image set comprises an image sample and a first scoring probability distribution of the image sample;
in embodiments of the present invention, the source of the image sample acquisition includes, but is not limited to, the following: 1. acquiring an existing rendering image by using a network, such as webpage crawler acquisition; 2. rendering the image by adopting a professional renderer to generate the image; 3. provided by the designer of the finishing company. The embodiment of the present invention does not specifically limit the acquisition source of the image sample. The image sample may be a rendered image with good aesthetic feeling or a rendered image with poor aesthetic feeling, the quality of the image sample is not particularly limited, and whether the quality of the image sample is good or bad, the image sample finally corresponds to a scoring result which is scored by a plurality of designers, and the scoring result is determined as a scoring result, which is also called as a standard score. And converting the scoring result to determine a first scoring probability distribution, wherein the first scoring probability distribution is a real scoring probability distribution.
The image sample is a rendered image after standardization. The purposes of the standardization are: on one hand, to unify the image samples into a fixed format and convert the label files storing their scoring results into Json form; on the other hand, to put the image samples into one-to-one correspondence with their label files.
In publicly disclosed aesthetic data sets such as the AVA data set, there are 255,000 pictures in total, each scored on average by about 200 different amateur photographers. In the embodiment of the invention, since the level and number of scorers largely determine the quality of the result, each image sample is scored by no fewer than 10 professional home designers with professional work experience (each person scores only once). The scoring criteria are the same for all image samples, e.g. a score range of 0-10.
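Converting the designers' raw scores into the first scoring probability distribution can be sketched as follows; the choice of 10 bins over the 0-10 range is an assumption for illustration, since the text only says the range is divided into N stages.

```python
from collections import Counter

N_BINS = 10  # assumed number of score categories over the 0-10 range

def score_distribution(scores, n_bins=N_BINS):
    """Turn raw designer scores in [0, 10] into a normalized histogram,
    i.e. the first scoring probability distribution used as the target."""
    # map each score to a bin index in 0..n_bins-1 (10 falls in the last bin)
    bins = Counter(min(int(s * n_bins / 10), n_bins - 1) for s in scores)
    total = len(scores)
    return [bins.get(i, 0) / total for i in range(n_bins)]

# e.g. ten designers scoring the same rendered image once each
dist = score_distribution([7, 8, 8, 6, 9, 7, 7, 8, 5, 8])
```

The resulting vector sums to 1 and preserves the disagreement between scorers, which is exactly what a single averaged score would discard.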
Step S302, inputting the image sample into a convolutional neural network model, and outputting a second scoring probability distribution of the image sample;
In the embodiment of the invention, the convolutional neural network model extracts features of the image sample with the constructed convolutional neural network; a fully connected layer is then added as the last layer. The purpose of the fully connected layer is to predict the scoring probability distribution of the image sample, and the distribution predicted by the model is taken as the second scoring probability distribution. Further, the predicted second scoring probability distribution may be converted into a predicted score.
Step S303, determining errors of the first scoring probability distribution and the second scoring probability distribution based on a preset loss function;
in the embodiment of the invention, the loss function not only considers the difference of the score probability distribution among different image samples, but also considers the difference of aesthetic perception of different people on the same image sample.
The loss function is designed for the subjective aesthetic evaluation component, with the goal of training the convolutional neural network model to learn an aesthetic evaluation consistent with human perception. Specifically, model parameters are adjusted by training with this loss function: after the initial parameters of the model are set, training samples are input, and during training the loss function measures the difference between the second and first scoring probability distributions. Compared with simply learning a scalar score, the model learns the data distribution; learning in this form better fits human subjective aesthetic standards, bringing the predicted scoring probability distribution close to the human-perceived one.
The preset loss function is an Earth Mover's Distance style loss between the two score distributions:

$$\mathrm{EMD}(p,\hat{p}) = \left( \frac{1}{N} \sum_{k=1}^{N} \left| \mathrm{CDF}_p(k) - \mathrm{CDF}_{\hat{p}}(k) \right|^{r} \right)^{1/r}$$

where $\mathrm{EMD}(p,\hat{p})$ is the error, $p$ is the probability density function of the standard score, $\hat{p}$ is the probability density function of the predicted score, $N$ is the number of score categories (i.e. the number of stages dividing the 0-10 score range), $k$ is an integer between 1 and $N$, $\mathrm{CDF}_p(k) = \sum_{i=1}^{k} p_i$ is the cumulative distribution function, and $\hat{p}_i$ is the estimated probability of the $i$-th score interval.
In the above preset loss function, r is set to 2 in order to compute the Euclidean distance between the standard and predicted score distributions.
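A minimal pure-Python sketch of the loss described above; the two-bin distributions in the example are illustrative only.

```python
from itertools import accumulate

def emd_loss(p, p_hat, r=2):
    """Distance between the standard-score distribution p and the
    predicted distribution p_hat over N score bins; r=2 gives the
    Euclidean form described in the text."""
    assert len(p) == len(p_hat)
    n = len(p)
    cdf_p = list(accumulate(p))      # cumulative distribution of p
    cdf_q = list(accumulate(p_hat))  # cumulative distribution of p_hat
    return (sum(abs(a - b) ** r for a, b in zip(cdf_p, cdf_q)) / n) ** (1 / r)

loss_same = emd_loss([0.2, 0.8], [0.2, 0.8])  # identical distributions
loss_far = emd_loss([1.0, 0.0], [0.0, 1.0])   # all mass shifted one bin
```

Unlike a per-bin cross-entropy, the cumulative comparison penalizes predictions more the further their probability mass sits from the true score bins, which matches the ordered nature of scores.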
Step S304, training the parameters in the convolutional neural network model according to the error until the parameters converge, to obtain the pre-trained convolutional neural network model.
In the embodiment of the present invention, the parameters are the weight parameters of each network layer in the model, i.e. the values of the convolution kernels. After the parameters converge, the optimized parameters are obtained, and the convolutional neural network model with these optimized parameters is taken as the pre-trained model, which can then score rendered images automatically. Specifically, during training, the convolutional neural network is iterated many times over the designed initial parameters and the training samples until the optimized parameters are obtained.
Further, referring to fig. 2, the evaluation method further includes the steps of:
step S140, based on the score probability distribution prediction of the rendering image, obtaining the score of the rendering image;
s150, sequencing all rendering images according to the scores;
step S160, pushing a preset number of rendering images to the client based on the sorting result.
The embodiment of the invention can score and sort rendered images of uneven quality. The score of a rendered image can be obtained in two ways. In the first way, the mean of the score probability distribution of the rendered image is predicted to obtain its specific score. The second way is automatic scoring: using the model parameters obtained from network training, a fast feed-forward pass scores the rendered image automatically. The embodiment of the invention can also divide the scores by thresholds, with self-defined cut-off scores, to judge whether a rendered image is qualified or to rank its level.
In the embodiment of the present invention, a variance threshold and a mean threshold of the score probability distribution may be set. When the variance of the score probability distribution is above the variance threshold and the score mean is above the mean threshold, the label of the rendered image is set to "bad image"; when the variance is below the variance threshold and the score mean is above the mean threshold, the label is set to "normal"; when the variance is below the variance threshold and the score mean is below the mean threshold, the label is set to "good image".
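The three labelling rules can be sketched as follows. The threshold values, the bin centring (the i-th bin is read as score i+1), and the fallback label for the high-variance/low-mean case, which the text does not specify, are all assumptions for illustration.

```python
def distribution_stats(dist):
    """Mean and variance of a score probability distribution whose
    i-th entry is the probability of score i+1 (an assumed centring)."""
    mean = sum((i + 1) * p for i, p in enumerate(dist))
    var = sum(p * ((i + 1) - mean) ** 2 for i, p in enumerate(dist))
    return mean, var

def label_image(dist, var_threshold=2.0, mean_threshold=5.0):
    """Apply the three stated rules; threshold values are illustrative."""
    mean, var = distribution_stats(dist)
    if var > var_threshold and mean > mean_threshold:
        return "bad image"
    if var <= var_threshold and mean > mean_threshold:
        return "normal"
    if var <= var_threshold and mean <= mean_threshold:
        return "good image"
    return "unspecified"  # high variance, low mean: not covered by the text

confident_high = [0.0] * 7 + [1.0, 0.0, 0.0]  # all scorers agree on 8
confident_low = [1.0] + [0.0] * 9             # all scorers agree on 1
```

Variance here acts as a proxy for scorer disagreement: a low variance means the designers' scores were concentrated, so the mean is trustworthy.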
Pushing a preset number of rendered images refers to intelligent recommendation: in some scenarios the best scheme must be selected from a massive set of rendered images. All rendered images in the set are then scored once, and the highest-scoring, top-ranked renderings are recommended according to the ranking, helping the user choose.
Selecting excellent design schemes and rendered images improves the user's experience. In addition, for picture presentations, rendered images with higher scores attract more traffic. Traditional rendering-image pushing relies on manual screening; realizing automatic scoring of rendered images greatly reduces manual processing time and improves working efficiency.
Technical effects and advantages of the invention: through multi-scale convolution features, the added loss function, and training with the Inception mechanism, the model parameters obtained by training are robust and generalize well to rendered images of different angles and styles. Compared with traditional methods, the neural network clearly provides a more expressive description in feature extraction, so the effect improves greatly and the method extends well.
In the field of rendered images in the home industry, this creates an automatic scoring mechanism based on image quality and image aesthetics. The embodiment of the invention can automatically score and sort renderings designed by designers of different levels; more importantly, it provides evaluation guidance for users, helps them select the desired design scheme drawings better and faster, greatly reduces the time spent selecting excellent schemes in the home industry, and provides an important guarantee for intelligent applications in the home field.
The embodiment of the invention is based on rendered images of home-furnishing designs and performs several specific refinements to address their uneven quality.
From the standpoint of rendered-image quality evaluation itself: evaluation of a rendered image generally covers both image quality and image aesthetics. Image quality refers to basic indicators such as blurriness, noise, and distortion. Aesthetic evaluation is comparatively subjective: different people have different inclinations, and their aesthetic understanding and level vary, which makes overall quality evaluation of an image difficult. A network model therefore needs to be designed that learns rules from the aesthetic tendencies of a majority of professionals, so that it simulates human thinking and gives aesthetic judgments consistent with human ones. For a neural network, this effort is concentrated in the objective function, so the embodiment of the invention designs a visual-aesthetic function into the objective, embodied in the loss function. The loss function is called aesthetics-based because the features learned with it are evaluation probability distributions for the aesthetic assessment of rendered images; this function is the largest contributor to producing an effective score for aesthetic image quality.
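The text does not give the loss function explicitly. The non-patent citation it lists (NIMA: Neural Image Assessment) trains on the squared Earth Mover's Distance between the predicted and ground-truth score distributions, so the following is a minimal sketch under that assumption, not the patent's actual formula:

```python
def emd_loss(p, q, r=2):
    """Earth Mover's Distance (order r) between two discrete score
    distributions p and q over the same ordered score buckets.

    Assumption: the patent's "visual aesthetic" loss is NIMA-style
    EMD; the exact formula is not given in the text."""
    assert len(p) == len(q)
    acc_p, acc_q, total = 0.0, 0.0, 0.0
    for pi, qi in zip(p, q):
        acc_p += pi          # running CDF of the predicted distribution
        acc_q += qi          # running CDF of the ground-truth distribution
        total += abs(acc_p - acc_q) ** r
    return (total / len(p)) ** (1.0 / r)

# Identical distributions have zero loss; a shifted one does not.
uniform = [0.1] * 10
shifted = [0.0] * 5 + [0.2] * 5
print(emd_loss(uniform, uniform))       # 0.0
print(emd_loss(uniform, shifted) > 0.0) # True
```

Because the loss compares cumulative distributions, it penalizes predictions that put mass in the wrong score bucket more heavily the farther that bucket is from the truth, which is what makes it suitable for ordered aesthetic scores.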
From the standpoint of practical application: the collected rendered images are produced by designers whose technical levels differ widely and whose understandings of home design each have their own merits, so evaluations of rendered images diverge and scoring standards are hard to bring into agreement. Furthermore, home design spans many styles, and the variety of space types (living room, kitchen, bathroom, and so on) and furnishing styles makes quality scoring of rendered images even harder. The embodiment of the invention adopts a convolutional neural network to extract image features. The extracted convolutional features express all the information of the rendered image itself, while the score probability distribution based on aesthetic judgment is determined by the loss function. Compared with feature extraction in traditional methods, features extracted by a neural network better express the structural and semantic information of an image, giving the rendered image's features richness and robustness. In the network design, to better accommodate the extraction of aesthetic features, the embodiment introduces an Inception-style structure that extracts and fuses image features at different scales of the rendered image, learning both its global and local features. In particular, the model learns the probability distribution of subjective scores for a rendered image: different people give the same image different opinions and scores, so the model can learn the probability distribution of different designers' subjective scores for the same image, making the learned model closer to human perception, that is, more consistent with human aesthetic requirements.
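As an illustration only (the patent describes 2-D convolutions over images; the kernels, sizes, and weights below are placeholders), a toy 1-D sketch of the Inception idea of running parallel branches with different receptive fields and fusing the results:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (a toy stand-in for a conv layer)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def inception_branches(signal):
    """Run parallel branches with different receptive fields and
    concatenate their outputs, mimicking how an Inception block fuses
    local (small-kernel) and more global (large-kernel) features.
    The kernel weights here are arbitrary placeholders."""
    branches = {
        "1x1": conv1d(signal, [1.0]),
        "3x1": conv1d(signal, [0.25, 0.5, 0.25]),
        "5x1": conv1d(signal, [0.1, 0.2, 0.4, 0.2, 0.1]),
    }
    fused = []
    for feats in branches.values():
        fused.extend(feats)  # channel-wise concatenation, flattened
    return fused

features = inception_branches([1.0, 2.0, 3.0, 4.0, 5.0])
print(len(features))  # 5 + 3 + 1 = 9
```

In a real Inception block the branches are 2-D convolutions (plus pooling) applied to the same feature map and concatenated along the channel axis; the point illustrated here is only the parallel multi-scale structure.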
The invention provides an evaluation method for rendered images, which comprises the following steps: first, a rendered image is obtained; image-processing means convert its image data into a standardized data format readable by the deep-learning network; the constructed deep-learning framework then extracts features from all home models and texture-map data in the rendered images. From the extracted color, texture, aesthetic-composition, and structural feature information, guided training with the designed objective function yields image-quality scoring model parameters that meet practical requirements. With this model, different rendered images in the home-design field are scored automatically: specific scores are given, the images are ordered from high score to low, and unqualified images can be screened out with user-defined criteria. It should be particularly pointed out that the objective-function design and learning scheme is the core inventive point of the embodiment. On the one hand, the embodiment can rapidly score and grade rendered images, which is important for image screening and provides good data-discrimination support for artificial-intelligence development in the home field; on the other hand, it greatly reduces designers' workload for manually screening rendered images and improves working efficiency. In addition, the quality score can be used to select web-page header images: the best-quality image attracts users the most, improving goodwill and drawing traffic.
Example two:
referring to fig. 6, an evaluation apparatus for a rendered image according to an embodiment of the present invention may include the following modules:
a receiving module 11, configured to receive a rendered image;
the standardization processing module 12 is configured to standardize the rendered image to obtain a standardized rendered image;
the input module 13 is configured to input the standardized rendered image to a pre-trained convolutional neural network model to obtain the score probability distribution of the rendered image; the convolutional neural network model is a network model that introduces an Inception structure and adds a fully connected layer.
In the embodiment of the invention, the evaluation apparatus first receives a rendered image with the receiving module; the standardization processing module then standardizes the rendered image to obtain a standardized rendered image; finally, the input module feeds the standardized rendered image to a pre-trained convolutional neural network model to obtain the score probability distribution of the rendered image. The convolutional neural network model introduces an Inception structure and adds a fully connected layer; with the added fully connected layer it can learn the probability distribution of different designers' subjective scores for the same rendered image, which brings the model closer to human perception and to human aesthetic requirements. The embodiment can therefore improve the accuracy of quality evaluation and realize automatic scoring.
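The standardization step is not specified in the text. A common preprocessing choice, shown here with illustrative mean and standard-deviation constants rather than values from the patent, scales 8-bit pixel values to [0, 1] and then standardizes them:

```python
def standardize(pixels, mean=0.5, std=0.25):
    """Scale 8-bit pixel values to [0, 1] and standardize them.

    Assumption: the patent does not state its exact preprocessing;
    the mean/std here are illustrative placeholders."""
    scaled = [p / 255.0 for p in pixels]        # [0, 255] -> [0.0, 1.0]
    return [(s - mean) / std for s in scaled]   # zero-center and rescale

row = [0, 128, 255]
print(standardize(row))
```

In practice this would be applied per channel over the whole image (often together with resizing to the network's fixed input resolution) before the tensor is fed to the convolutional neural network model.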
Further, referring to fig. 7, the evaluation apparatus further includes the following modules:
the building module 14 is configured to introduce an Inception structure into the convolutional neural network, add a fully connected layer, and build the convolutional neural network model;
and the training module 15 is configured to train the convolutional neural network model based on a training sample adopted for training, so as to obtain a pre-trained convolutional neural network model.
Further, referring to fig. 8, the training module 15 may include the following elements:
an obtaining unit 21 configured to obtain a training sample; the training sample comprises a plurality of rendering image sets, and each rendering image set comprises an image sample and a first scoring probability distribution of the image sample;
the output unit 22 is used for inputting the image sample into the convolutional neural network model and outputting a second scoring probability distribution of the image sample;
a determining unit 23, configured to determine the error between the first scoring probability distribution and the second scoring probability distribution based on a preset loss function;
and the training unit 24 is configured to train the parameters in the convolutional neural network model according to the error until the parameters converge, obtaining the pre-trained convolutional neural network model.
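The training unit's "train according to the error until the parameters converge" can be sketched in miniature. This toy uses numerical gradients and a squared-error loss on a softmax-parameterized distribution purely for illustration; the actual embodiment would backpropagate its designed loss through the convolutional neural network's parameters:

```python
import math

def softmax(logits):
    """Turn unconstrained logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def loss(logits, target):
    """Squared error between predicted and target distributions
    (a stand-in for the patent's unspecified loss function)."""
    p = softmax(logits)
    return sum((pi - ti) ** 2 for pi, ti in zip(p, target))

def train(target, steps=500, lr=5.0, eps=1e-5):
    """Gradient descent with finite-difference gradients until the
    parameters settle: a toy analogue of 'train until convergence'."""
    logits = [0.0] * len(target)
    for _ in range(steps):
        base = loss(logits, target)
        grads = []
        for i in range(len(logits)):
            bumped = logits[:]
            bumped[i] += eps
            grads.append((loss(bumped, target) - base) / eps)
        logits = [z - lr * g for z, g in zip(logits, grads)]
    return softmax(logits)

learned = train([0.1, 0.2, 0.4, 0.2, 0.1])
print([round(p, 2) for p in learned])
```

The structure mirrors the training unit: compute the model's output distribution, measure its error against the labeled distribution with the loss, and adjust the parameters along the negative gradient, repeating until the error stops shrinking.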
Further, referring to fig. 7, the evaluation apparatus further includes:
the prediction module 16 is configured to predict a score for the rendered image based on its score probability distribution;
a sorting module 17, configured to sort all rendered images according to the scores;
and the pushing module 18 is configured to push a preset number of rendering images to the client based on the sorting result.
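The prediction, sorting, and pushing modules can be sketched together. Collapsing the distribution to its expected value is an assumption (it is the usual choice in the NIMA work the patent cites), and the names and 1–10 score buckets below are illustrative:

```python
def expected_score(dist, scores=range(1, 11)):
    """Collapse a score probability distribution over ordered buckets
    (assumed 1-10 here) to a single mean-opinion score.

    Assumption: the patent says a score is predicted from the
    distribution; the expected value is the NIMA-style choice."""
    return sum(s * p for s, p in zip(scores, dist))

def push_top_k(scored_images, k):
    """Sort (name, score) pairs by score, highest first, and return
    the k image names to push to the client."""
    ranked = sorted(scored_images, key=lambda item: item[1], reverse=True)
    return [name for name, _ in ranked[:k]]

dists = {
    "render_a": [0.0, 0.0, 0.1, 0.1, 0.1, 0.1, 0.2, 0.2, 0.1, 0.1],
    "render_b": [0.3, 0.3, 0.2, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
}
scored = [(name, expected_score(d)) for name, d in dists.items()]
print(push_top_k(scored, 1))  # render_a has the higher expected score
```

This is the prediction module (`expected_score`) feeding the sorting and pushing modules (`push_top_k`); in the apparatus the preset number k and the client transport would come from configuration.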
In another embodiment of the present invention, an electronic device is further provided, which includes a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the steps of the method of the above method embodiment when executing the computer program.
In yet another embodiment of the invention, a computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method of the method embodiment is also provided.
In the examples provided by the embodiments of the present invention, the disclosed methods and apparatuses may be implemented in other ways. The embodiments described above are merely exemplary; the blocks in the drawings may exist independently or be integrated, and each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code comprising one or more executable instructions for implementing the specified logical function(s).
The above-described embodiments are only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent substitution or change that a person skilled in the art can readily conceive according to the technical solutions and inventive concept of the present invention shall be covered by the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An evaluation method of a rendered image, comprising:
receiving a rendered image;
standardizing the rendering image to obtain a standardized rendering image;
inputting the standardized rendering image into a pre-trained convolutional neural network model to obtain the score probability distribution of the rendered image; wherein the convolutional neural network model is a network model that introduces an Inception structure and adds a fully connected layer.
2. The evaluation method of claim 1, wherein prior to receiving the rendered image, the method further comprises:
introducing an Inception structure into the convolutional neural network, adding a fully connected layer, and constructing the convolutional neural network model;
and training the convolutional neural network model based on training samples adopted by training to obtain a pre-trained convolutional neural network model.
3. The evaluation method according to claim 2, wherein the step of training the convolutional neural network model based on the training samples adopted in the training to obtain a pre-trained convolutional neural network model comprises:
obtaining a training sample; wherein the training sample comprises a plurality of sets of rendered images, each set of rendered images comprising an image sample and a first scored probability distribution for the image sample;
inputting the image sample into the convolutional neural network model, and outputting a second scoring probability distribution of the image sample;
determining an error between the first scoring probability distribution and the second scoring probability distribution based on a preset loss function;
and training parameters in the convolutional neural network model according to the error until the parameters converge, to obtain the pre-trained convolutional neural network model.
4. The evaluation method according to claim 1, characterized in that the method further comprises:
and obtaining the score of the rendering image based on the score probability distribution prediction of the rendering image.
5. The evaluation method according to claim 4, characterized in that the method further comprises:
sorting all the rendered images according to the scores;
and pushing a preset number of rendering images to the client based on the sorting result.
6. An evaluation apparatus for rendering an image, comprising:
a receiving module for receiving a rendered image;
the standardization processing module is used for standardizing the rendering image to obtain a standardized rendering image;
the input module is configured to input the standardized rendering image to a pre-trained convolutional neural network model to obtain the score probability distribution of the rendered image; wherein the convolutional neural network model is a network model that introduces an Inception structure and adds a fully connected layer.
7. The evaluation device according to claim 6, further comprising:
the building module is configured to introduce an Inception structure into the convolutional neural network, add a fully connected layer, and build the convolutional neural network model;
and the training module is used for training the convolutional neural network model based on a training sample adopted by training to obtain a pre-trained convolutional neural network model.
8. The evaluation device of claim 7, wherein the training module comprises:
an acquisition unit for acquiring a training sample; wherein the training sample comprises a plurality of sets of rendered images, each set of rendered images comprising an image sample and a first scored probability distribution for the image sample;
the output unit is used for inputting the image sample into the convolutional neural network model and outputting a second scoring probability distribution of the image sample;
a determining unit, configured to determine an error between the first scoring probability distribution and the second scoring probability distribution based on a preset loss function;
and a training unit, configured to train the parameters in the convolutional neural network model according to the error until the parameters converge, obtaining the pre-trained convolutional neural network model.
9. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to any of claims 1 to 5 when executing the computer program.
10. A computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform the method of any of claims 1 to 5.
CN201911029262.XA 2019-10-25 2019-10-25 Rendered image evaluation method and device Pending CN110782448A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911029262.XA CN110782448A (en) 2019-10-25 2019-10-25 Rendered image evaluation method and device

Publications (1)

Publication Number Publication Date
CN110782448A true CN110782448A (en) 2020-02-11

Family

ID=69386897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911029262.XA Pending CN110782448A (en) 2019-10-25 2019-10-25 Rendered image evaluation method and device

Country Status (1)

Country Link
CN (1) CN110782448A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107256246A (en) * 2017-06-06 2017-10-17 西安工程大学 PRINTED FABRIC image search method based on convolutional neural networks
CN107562963A (en) * 2017-10-12 2018-01-09 杭州群核信息技术有限公司 A kind of method and apparatus screened house ornamentation design and render figure
CN107610123A (en) * 2017-10-11 2018-01-19 中共中央办公厅电子科技学院 A kind of image aesthetic quality evaluation method based on depth convolutional neural networks
CN110070087A (en) * 2019-05-05 2019-07-30 广东三维家信息科技有限公司 Image identification method and device
CN110189291A (en) * 2019-04-09 2019-08-30 浙江大学 A kind of general non-reference picture quality appraisement method based on multitask convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hossein Talebi et al., "NIMA: Neural Image Assessment," IEEE Transactions on Image Processing *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382781A (en) * 2020-02-21 2020-07-07 华为技术有限公司 Method for obtaining image label and method and device for training image recognition model
CN111382781B (en) * 2020-02-21 2023-09-12 华为云计算技术有限公司 Method for acquiring image tag, method and device for training image recognition model
CN112581360A (en) * 2020-12-30 2021-03-30 杭州电子科技大学 Multi-style image aesthetic quality enhancement method based on structural constraint
CN112581360B (en) * 2020-12-30 2024-04-09 杭州电子科技大学 Method for enhancing aesthetic quality of multi-style image based on structural constraint
CN113255743A (en) * 2021-05-12 2021-08-13 展讯通信(上海)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114049420A (en) * 2021-10-29 2022-02-15 马上消费金融股份有限公司 Model training method, image rendering method, device and electronic equipment
CN116030040A (en) * 2023-02-23 2023-04-28 腾讯科技(深圳)有限公司 Data processing method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN110782448A (en) Rendered image evaluation method and device
US10685434B2 (en) Method for assessing aesthetic quality of natural image based on multi-task deep learning
CN107330956B (en) Cartoon hand drawing unsupervised coloring method and device
US8692830B2 (en) Automatic avatar creation
CN109118445B (en) Underwater image enhancement method based on multi-branch generation countermeasure network
CN111144483B (en) Image feature point filtering method and terminal
CN110379020B (en) Laser point cloud coloring method and device based on generation countermeasure network
CN110222722A (en) Interactive image stylization processing method, calculates equipment and storage medium at system
CN110807757B (en) Image quality evaluation method and device based on artificial intelligence and computer equipment
CN112614077A (en) Unsupervised low-illumination image enhancement method based on generation countermeasure network
CN111652822B (en) Single image shadow removing method and system based on generation countermeasure network
CN110689093B (en) Image target fine classification method under complex scene
CN111724400A (en) Automatic video matting method and system
CN113505854B (en) Face image quality evaluation model construction method, device, equipment and medium
CN114648681B (en) Image generation method, device, equipment and medium
CN109509156A (en) A kind of image defogging processing method based on generation confrontation model
CN110245550A (en) A kind of face noise data collection CNN training method based on overall cosine distribution
CN115064020A (en) Intelligent teaching method, system and storage medium based on digital twin technology
CN114332559A (en) RGB-D significance target detection method based on self-adaptive cross-modal fusion mechanism and depth attention network
CN107729821B (en) Video summarization method based on one-dimensional sequence learning
CN112258420B (en) DQN-based image enhancement processing method and device
CN112767038B (en) Poster CTR prediction method and device based on aesthetic characteristics
CN108665455B (en) Method and device for evaluating image significance prediction result
CN112348809A (en) No-reference screen content image quality evaluation method based on multitask deep learning
CN115018729B (en) Content-oriented white box image enhancement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200211