CN111199540A - Image quality evaluation method, image quality evaluation device, electronic device, and storage medium - Google Patents


Info

Publication number
CN111199540A
Authority
CN
China
Prior art keywords
image
information
processed
user preference
quality evaluation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911379449.2A
Other languages
Chinese (zh)
Inventor
彭冬炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911379449.2A
Publication of CN111199540A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; photographic image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; learning
    • G06T 2207/30: Subject of image; context of image processing
    • G06T 2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image quality evaluation method, an image quality evaluation apparatus, an electronic device, and a computer-readable storage medium, relating to the technical field of image processing. The image quality evaluation method includes: acquiring an image to be processed; extracting features of the image to be processed, performing prediction bias processing on the feature extraction result according to user preference information, and generating scoring information that represents the user preference information; and performing aesthetic quality evaluation on the image to be processed according to the scoring information to obtain an evaluation result of the image to be processed. The method and device can improve the accuracy and pertinence of image quality evaluation and realize personalized evaluation.

Description

Image quality evaluation method, image quality evaluation device, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image quality evaluation method, an image quality evaluation device, an electronic device, and a computer-readable storage medium.
Background
With the development of image technology, the quality of the image can be scored so as to facilitate the subsequent processing of the image.
In the related art, the quality of an image is generally evaluated with a single generic model. Evaluated in this way, the aesthetic quality result is the same for every user; it is often inaccurate for an individual user and cannot meet actual requirements, which is a great limitation.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide an image quality evaluation method, apparatus, electronic device, and computer-readable storage medium, thereby overcoming, at least to some extent, the problem of inaccurate image quality evaluation due to limitations and disadvantages of the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, there is provided an image quality evaluation method including: acquiring an image to be processed; extracting features of the image to be processed, performing prediction bias processing on the feature extraction result according to user preference information, and generating scoring information for representing the user preference information; and performing aesthetic quality evaluation on the image to be processed according to the scoring information to obtain an evaluation result of the image to be processed.
According to an aspect of the present disclosure, there is provided an image quality evaluation apparatus including: an image acquisition module, used for acquiring an image to be processed; a score determining module, used for extracting features of the image to be processed, performing prediction bias processing on the feature extraction result according to user preference information, and generating scoring information for representing the user preference information; and an image evaluation module, used for performing aesthetic quality evaluation on the image to be processed according to the scoring information to obtain an evaluation result of the image to be processed.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any of the image quality assessment methods described above via execution of the executable instructions.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image quality evaluation method of any one of the above.
In the image quality evaluation method, apparatus, electronic device, and computer-readable storage medium provided in the present exemplary embodiment, on the one hand, feature extraction is combined with the user's preference information: prediction bias processing is performed on the feature extraction result of the image to be processed, and scoring information representing the user preference information is generated. Scoring information biased toward the user preference information can thus be obtained from the perspective of each user's preferences, and aesthetic quality evaluation of the image to be processed can be carried out accurately for each user, so that every user obtains an evaluation result conforming to his or her own preference information, improving the accuracy of the aesthetic evaluation of the image to be processed. On the other hand, because the scoring information used for the aesthetic evaluation is determined in combination with the user preference information, the rationality of the evaluation is improved, scoring information specific to the user can be obtained, the limitation that every user receives the same score is avoided, diversity is increased, personalized image evaluation is realized, and the requirements of practical applications can be met.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 schematically shows a system architecture for implementing the image quality evaluation method.
Fig. 2 schematically illustrates an image quality evaluation method in an exemplary embodiment of the present disclosure.
Fig. 3 schematically illustrates a flow chart for determining scoring information in an exemplary embodiment of the present disclosure.
Fig. 4 schematically illustrates a flow chart for determining predictive scoring information in an exemplary embodiment of the disclosure.
Fig. 5 schematically illustrates a specific flowchart for mapping the prediction score information in an exemplary embodiment of the present disclosure.
Fig. 6 schematically illustrates a detailed flow chart of image aesthetic evaluation in an exemplary embodiment of the present disclosure.
Fig. 7 schematically shows a block diagram of an image quality evaluation apparatus in an exemplary embodiment of the present disclosure.
Fig. 8 schematically shows a schematic view of an electronic device in an exemplary embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In the present exemplary embodiment, first, a system architecture diagram for performing an image quality evaluation method is provided. Referring to fig. 1, a system architecture 100 may include a first end 101, a network 102, and a second end 103. The first end 101 may be a user end, for example any of various handheld devices (e.g., smartphones), desktop computers, vehicle-mounted devices, wearable devices, and the like, which have a photographing function, an image storage function, and an image display function. The network 102 serves as the medium providing a communication link between the first end 101 and the second end 103 and may include various connection types, such as wired or wireless communication links. In the embodiment of the present disclosure, the network 102 between the first end 101 and the second end 103 may be a wired communication link, such as one provided by a serial connection line, or a wireless communication link, such as one provided by a wireless network. The second end 103 may be a user end, for example a terminal device with a data processing function, such as a portable computer, a desktop computer, or a smartphone, used for performing scoring processing on the input image to be processed. When the first end and the second end are both user ends, they may be the same user end.
It should be understood that the number of first ends, networks and second ends in fig. 1 is merely illustrative. There may be any number of clients, networks, and servers, as desired for implementation.
It should be noted that the image quality evaluation method provided by the embodiment of the present disclosure may be completely executed by the second end or the first end, or may be executed by the first end and the second end, where the execution subject of the image quality evaluation method is not particularly limited. Accordingly, the image quality evaluation device may be disposed in the second end 103 or in the first end 101.
Based on the system architecture, the embodiment of the present disclosure provides an image quality evaluation method, which may be applied to any scenario in which the quality of a photo, video, or picture is evaluated. Next, the image quality evaluation method in the present exemplary embodiment is explained in detail with reference to fig. 2, as follows:
in step S210, an image to be processed is acquired.
In the embodiment of the present disclosure, the image to be processed may be an image captured with any camera, an image downloaded from a network, or an image acquired from another storage device. The image to be processed may be a still image, an image of a moving subject, and so on. It may also be an image that is being photographed but has not yet been generated and stored, i.e., a preview image, and it may or may not have undergone an image processing operation. There may be multiple images to be processed, and they may be intended for publication on an information interaction platform or for practical application scenarios such as cover extraction.
In step S220, feature extraction is performed on the image to be processed, prediction bias processing is performed on a feature extraction result according to user preference information, and scoring information used for representing the user preference information is generated.
In the embodiment of the present disclosure, a user may be a user of a certain terminal device, and one user may correspond to one terminal device. User preference information is aesthetic preference information describing the user's aesthetic tendencies; the preference information of different users may be the same or different, for example, user A's preference information is image operation 1, while user B's is image operation 2. User preference information can represent the degree to which a user prefers certain image operations and may specifically be represented by numerical information or a vector; the larger the value, the stronger the preference.
The user preference information can be collected online according to a preset period and a preset collection mode. The preset period may be, for example, one month or three months, and the preset collection mode refers to how the user behavior data corresponding to a user behavior is collected. On this basis, the user preference information may be determined and characterized from the user's behavior data, and the preference information of each user may be characterized by the behavior data corresponding to multiple user behaviors. Depending on the collection mode, the user preference information may take the form of numerical information or a vector; numerical information is used as the example here. The user behavior data may be behavior data generated when the user processes images. Specifically, it may be numerical information such as the frequency or number of uses of each image-processing operation button while the user beautifies images stored in an album or elsewhere. The user behavior data may also include the frequency of use of special-effect operations, such as filters or added effects. In the embodiment of the disclosure, part or all of the user behavior data can be selected to characterize the user's preference information. For example, the preference information may be determined from the overall behavior data: if the user has used operations 1, 2, and 3 five times each, operation 4 fifty times, and operation 5 thirty-five times, the user's preference information can be described by these operations and their usage counts, and operation 4 can be considered the operation the user prefers most.
Alternatively, the user preference information may be characterized only by the single most-used operation. The collection of user preference information may be performed by a feedback module. Specifically, the feedback module collects each user's behavior data according to the preset period and collection mode, determines the user's preference information from the behavior data, and feeds that preference information back for use in the subsequent prediction bias processing. It should be added that the user preference behavior derived from the user behavior data can be stored and retained only on the user side, so as to protect user privacy and keep the preference information secure and private.
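The patent does not give code for characterizing preference from usage counts; a minimal Python sketch under stated assumptions (the function name, the dictionary layout, and the uniform fallback for users with no behavior data are all illustrative choices, not from the patent) might look like:

```python
def preference_vector(usage_counts):
    """Normalize per-operation usage counts into a preference
    distribution that sums to 1 (hypothetical representation)."""
    total = sum(usage_counts.values())
    if total == 0:
        # No behavior data yet: fall back to a uniform preference,
        # which would leave a generic prediction unchanged.
        n = len(usage_counts)
        return {op: 1.0 / n for op in usage_counts}
    return {op: count / total for op, count in usage_counts.items()}

# Usage counts from the example in the text: operations 1-3 used
# 5 times each, operation 4 used 50 times, operation 5 used 35 times.
counts = {"op1": 5, "op2": 5, "op3": 5, "op4": 50, "op5": 35}
prefs = preference_vector(counts)
# prefs["op4"] -> 0.5, i.e., operation 4 carries the strongest preference.
```

Keeping the counts as a normalized distribution matches the later bias-mapping step, where the preference weights multiply a score distribution.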
In the embodiment of the disclosure, feature extraction may be performed on the image to be processed to obtain a feature extraction result, and prediction bias processing may then be performed on that result according to the user preference information. The feature extraction result is what the feature extraction operation produces and may specifically be represented by prediction scoring information. The feature extraction may run at the same time as the determination of the user preference information, or before or after it; the execution order is not limited here. The prediction bias processing is a combination of prediction processing, which predicts the score distribution of the image to be processed with an aesthetic quality evaluation model or another model, and bias processing. Biasing refers to transforming the predicted score distribution, for example shifting it in a certain direction or toward certain information. Through the prediction bias processing, the resulting scoring information can better match actual requirements, for example the user preference information or the scene information. The prediction processing and bias processing together therefore make the scoring information that represents the user preference information more accurate and better aligned with actual needs.
Fig. 3 schematically shows a schematic diagram of determining scoring information, and referring to fig. 3, mainly includes the following steps:
in step S310, feature extraction is performed on the image to be processed to perform prediction processing, and prediction score information of the image to be processed is generated.
In the embodiment of the disclosure, before the image to be processed is predicted in combination with the user preference information, a generic prediction may first be performed: features of the image to be processed are extracted with a general model and a first-pass prediction produces the prediction scoring information, which is then adjusted to obtain the final scoring information.
The process of determining the prediction score information by means of a generic model is schematically illustrated in fig. 4, and with reference to fig. 4, mainly comprises the following steps:
in step S410, feature extraction is performed on the image to be processed to obtain feature data;
in step S420, a prediction score distribution of the image to be processed is obtained through the feature data, and the prediction score information is determined according to the prediction score distribution.
In the embodiment of the present disclosure, the common model is a model applicable to all users; it may be an aesthetic quality evaluation model, used for aesthetic quality evaluation, that can be applied to all users. The aesthetic quality evaluation model may be a trained machine learning model that extracts feature data characterizing the features or properties of the image to be processed through its convolutional layers, pooling layers, and fully connected layer. The size and number of convolution kernels in the convolutional layers can be set according to actual needs. The feature data may include, but is not limited to, shape features, texture features, and the like.
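The patent names convolutional layers as the feature extractor but gives no architecture; a toy, pure-Python sketch of the basic convolution operation (the kernel values and image are invented for illustration, and a real model would stack many such layers with pooling and fully connected layers) could look like:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution of a grayscale image (list of lists)
    with a small kernel: slide the kernel over the image and sum the
    element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel applied to a tiny image with a left/right split;
# the feature map responds strongly at the 0 -> 1 boundary.
img = [[0, 0, 1, 1]] * 4
edge = [[-1, 1], [-1, 1]]
fmap = conv2d(img, edge)
# fmap[0] -> [0, 2, 0]: the edge column produces the only nonzero response.
```

Such feature maps, after pooling and fully connected layers, would form the feature data the patent refers to.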
In order to improve the accuracy of feature extraction, the machine learning model can be trained and the trained model used as the aesthetic quality evaluation model. The model may be trained on a data set in a common data pool. Specifically, training may include: training the machine learning model on the sample images in the common data pool and the manually labeled scoring information of those sample images to obtain the trained model. In the embodiment of the present disclosure, a sample image is an image whose scoring information and image quality are already known. The objects contained in a sample image may be the same as or different from those in the image to be processed, and the scene of a sample image may likewise be the same or different. Multiple sample images can be used to improve the accuracy and reliability of training. The manually labeled scoring information of a sample image serves as its category label and may specifically be real scoring information. Real scoring information is a real score distribution, i.e., the distribution of the true scores, which can be represented on a scale of 1 to 5 or 1 to 10. Each real score corresponds to a weight parameter: the proportion of users who gave that score when multiple users scored a given sample image. For example, the weight parameters for scores 1 to 3 are each 5%, the weight parameter for score 4 is 50%, and the weight parameter for score 5 is 35%.
Furthermore, the sample image can be used as the input of the machine learning model to obtain predicted scoring information for it; the model is then trained against the manually labeled scoring information until the predicted scoring information is consistent with the labels or the model converges. At that point training ends and the trained machine learning model is obtained as the aesthetic quality evaluation model, improving the model's accuracy and reliability.
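The patent does not name a training objective. A minimal sketch, assuming a softmax output over the score scale and a cross-entropy loss between the labeled and predicted distributions (both common choices for distribution matching, not specified by the patent), could look like:

```python
import math

def softmax(logits):
    """Map raw model outputs to a predicted score distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(true_dist, pred_dist, eps=1e-12):
    """Distribution-matching loss between the manually labeled score
    distribution and the model's prediction (one plausible choice)."""
    return -sum(t * math.log(p + eps) for t, p in zip(true_dist, pred_dist))

# Labeled distribution from the text: 5% each for scores 1-3,
# 50% for score 4, 35% for score 5.
true_dist = [0.05, 0.05, 0.05, 0.50, 0.35]
pred = softmax([0.1, 0.1, 0.1, 2.0, 1.5])   # hypothetical model outputs
loss = cross_entropy(true_dist, pred)
# Minimizing this loss pushes the predicted distribution toward the labels.
```

By Gibbs' inequality the loss is minimized exactly when the predicted distribution equals the labeled one, which matches the patent's stopping criterion of "consistent with the manually labeled scoring information".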
In the embodiment of the disclosure, after the feature data of the image to be processed is obtained, prediction processing may be performed on the feature data by the trained machine learning model to obtain the prediction scoring information of the image to be processed. The prediction processing can be realized by convolution operations on the feature data in the convolutional layers and full-connection processing in the fully connected layer. The prediction scoring information may take the form of a prediction score distribution and may be normalized. It should be noted that the trained aesthetic quality evaluation model may be deployed on a user terminal or on a device with a processor, such as a server; this is not limited here as long as prediction can be achieved. Performing the generic prediction on the image to be processed with the aesthetic quality evaluation model preserves the image's aesthetic quality as judged by the general public and avoids large deviations.
In step S320, bias mapping is performed on the prediction scoring information according to the user preference information to obtain the scoring information used for representing the user preference information.
In the embodiment of the disclosure, the prediction scoring information is the output of the general model. In this step, that output may be converted and mapped in combination with the user's preference information, so that the scoring information characterizing the user preference information is jointly determined from the user preference information and the prediction scoring information. The mapping here is a remapping, i.e., a mapping different from the default one: by default, the remapping model can be the identity mapping, so that the prediction scoring information of the aesthetic quality evaluation model is passed to the user terminal without modification. Specifically, the usage counts representing the user preference information may be normalized, i.e., the user preference information is normalized, and bias mapping is then performed on the prediction scoring information based on the normalized preference information; the bias mapping may multiply the user preference information with the prediction scoring information. The scoring information can then be derived from the bias-mapping result. The mapping of the prediction scoring information may be implemented by a model or another program; here it is described as implemented by a remapping model. The remapping model maps the output of the aesthetic quality evaluation model and delivers it to the user terminal for the user's use; that is, the remapping model is used to map the prediction scoring information.
Fig. 5 schematically shows a schematic diagram of bias mapping of prediction score information, and referring to fig. 5, the method mainly includes the following steps:
in step S510, updating a mapping parameter of a remapping model used for mapping the prediction scoring information according to the normalized user preference information, and obtaining an updated remapping model;
in step S520, performing bias mapping on the prediction score distribution corresponding to the prediction score information according to the updated remapping model, and performing normalization processing on the bias mapping result to map the prediction score information to the score information.
In the embodiment of the disclosure, in order to implement personalized scoring and targeted evaluation, the prediction scoring information of the aesthetic quality evaluation model may be biased and mapped according to the user preference information of the user based on the remapping model. Since the prediction score information output by the aesthetic quality evaluation model is bias-mapped according to the user preference information, it can be considered that the bias-mapped score information can represent the characteristics of the user preference information. Specifically, the mapping parameters in the remapping model may be updated according to the user preference information to obtain an updated remapping model. When the mapping parameters of the remapping model are updated, the remapping model can be learned according to the user preference information so as to update the mapping parameters of the remapping model into the mapping parameters associated with the user preference information, and therefore the updated remapping model can embody the user preference information of each user. For example, a convex optimization algorithm or the like may be used to update the mapping parameters of the remapping model. The convex optimization algorithm may include, for example, a gradient descent method, a coordinate descent method, and the like. The remapping model may be understood as a linear model, which may in particular be represented by a matrix. The output result of the aesthetic quality evaluation model, i.e., the prediction score information represented by the prediction score distribution, may be represented by a vector.
The mapping that turns the prediction scoring information into the scoring information may be: multiply the matrix representing the remapping model with the prediction scoring information represented as an aesthetic distribution vector, and output the remapped aesthetic distribution vector as the scoring information. The scoring information here is still a score distribution.
For example, the score distribution output by the general model is 5% for 1-3 points, 50% for 4 points, and 35% for 5 points. If the user preference information of the user A is: the number of uses for operation 1, operation 2, and operation 3 is 5, the number of uses for operation 4 is 50, and the number of uses for operation 5 is 35. The mapping parameters of the remapping model may be updated according to the number of uses of each operation to obtain the scoring information. The specific process can be as follows: and normalizing the operation of the user, correspondingly multiplying the operation to the predicted scoring information output by the universal model, and normalizing the obtained result again to obtain the final scoring information.
The normalization process converts the values into probability weights summing to 1. For example, after normalization, operations 1 to 3 are each represented as 5/100, operation 4 as 50/100, and operation 5 as 35/100, giving normalized results of 0.05, 0.05, 0.05, 0.5, and 0.35. Further, the normalization result may be multiplied by the prediction scoring information output by the aesthetic quality evaluation model to obtain reference scoring information; still further, the reference scoring information may be normalized to obtain the final scoring information. Because the scoring information is obtained by bias-mapping the prediction scoring information according to the user preference information, it better conforms to the user preference information and is more targeted.
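The two normalization steps above can be sketched in code. This is a minimal illustration of the described process only; the function name and the plain-list representation are assumptions, not part of the patent:

```python
def remap_scores(pred_dist, usage_counts):
    """Bias-map a predicted score distribution by per-operation usage counts.

    Sketch of the process described above: counts are normalized into
    preference weights, multiplied element-wise with the predicted
    distribution, and the result is renormalized to sum to 1.
    """
    # first normalization: operation counts -> probability weights summing to 1
    total = float(sum(usage_counts))
    weights = [c / total for c in usage_counts]
    # bias mapping: element-wise product with the predicted score distribution
    biased = [p * w for p, w in zip(pred_dist, weights)]
    # second normalization: reference scoring information -> final scoring information
    s = sum(biased)
    return [b / s for b in biased]

# Worked example from the text: general-model distribution and user A's counts
final = remap_scores([0.05, 0.05, 0.05, 0.50, 0.35], [5, 5, 5, 50, 35])
```

With these numbers the weight of score 4 rises further (0.25 out of an unnormalized total of 0.38), showing how the bias mapping pulls the distribution toward the user's preferred operations.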
Continuing to refer to fig. 2, in step S230, aesthetic quality evaluation is performed on the image to be processed according to the scoring information to obtain an evaluation result of the image to be processed.
In the embodiment of the present disclosure, after the scoring information output by the updated remapping model is obtained, aesthetic quality evaluation may be performed on the image to be processed based on the scoring information. The aesthetic quality evaluation determines the degree of beauty or ugliness of the image according to the scoring information, so as to evaluate the aesthetic quality of the image and support subsequent processing. Describing image quality through the aesthetic distribution represented by the scoring information avoids the influence of each user's subjective factors and improves accuracy; taking the score distribution of the image as the evaluation standard enables an all-around, accurate evaluation of the image's aesthetic quality. Given an image, the system can produce an aesthetic evaluation of reference significance according to the aesthetic quality evaluation model and the updated remapping model; the user preference information of each user is reflected in the evaluation result, which gives it stronger guiding significance in real scenes. This also avoids the limitations of evaluating image quality with a general model alone, realizes comprehensive, reasonable, and fine-grained personalized image quality evaluation, and improves the accuracy of image aesthetic quality evaluation for each user. After the score distribution is obtained, it can be mapped to a score and presented on the user side for viewing. The score is positively correlated with the evaluation result: the higher the score, the better the image to be processed conforms to the user preference information, and the higher the aesthetic quality of the image to be processed.
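The text does not specify how the score distribution is mapped to the single score shown to the user; one natural choice (an assumption of this sketch, not stated in the patent) is the expected value of the distribution:

```python
def distribution_to_score(dist, scores=(1, 2, 3, 4, 5)):
    """Map a score distribution over the 1-5 scale to a single display score.

    Uses the expected value; this mapping is an assumed example, the patent
    only requires the score to be positively correlated with the evaluation.
    """
    return sum(p * s for p, s in zip(dist, scores))

# Example: the general-model distribution from earlier in the text
score = distribution_to_score([0.05, 0.05, 0.05, 0.50, 0.35])  # 4.05
```

A distribution concentrated on higher bins yields a higher score, preserving the positive correlation the text requires.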
It should be added that, after the evaluation result is obtained, if a plurality of images to be processed exist, at least one of them is selected as a target image according to the ranking order of their scoring information and the requirement information of the target operation, so as to perform the target operation on the target image. The target operation may differ depending on the source of the image to be processed, which may be an already existing image or an image being photographed. The target operation may be a presentation operation, a cover extraction operation, an image generation operation, or another operation, and may be determined according to the scene type. When the image to be processed has already been captured or downloaded, it can be considered an already existing image. For existing images, aesthetic quality evaluation can be performed on the plurality of images to be processed based on the steps in fig. 2 to obtain the scoring information of each image, convert it into a score displayed on the user terminal, and recommend images based on the score so as to perform the target operation. Specifically, the images to be processed may be sorted according to the ranking order of their scoring information, so that at least one image that better conforms to the user preference information is obtained as a target image, and the target operation is performed on it. The number of target images selected may be determined according to the requirement information of the target operation (the scene type of the target operation and the number of images required for that scene type).
The scene type may be, for example, uploading to an information interaction platform or extracting a cover. The information interaction platform may be Weibo, WeChat, WeChat Moments, QQ, various instant chat windows, and the like. When the scene type is uploading to the information interaction platform, if one image is required, the image with the largest scoring information is used as the target image; if multiple images are required, the images to be processed are used as target images in the order of scoring information from largest to smallest. When the scene type is extracting a cover, if the number of covers is fixed, the image with the largest scoring information is used as the target image; if the cover can be switched among several images, the images to be processed are used as target images in the order of scoring information from largest to smallest, and the image with the largest scoring information may be displayed as the first cover. In this way, images that conform to the user preference information and are liked by the user can be quickly screened out, which provides convenience for the user, realizes personalized recommendation and image screening, and improves image recommendation efficiency and image selection accuracy. For example, 10 images to be processed stored in an album may be scored to obtain the scoring information of each image. Then, 9 images are selected as target images in the order of scoring information from largest to smallest and issued to the information interaction platform for a display operation.
As another example, the single image to be processed with the largest scoring information may be selected as the target image and used as the cover of the album, so as to perform the cover extraction operation.
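The selection step in the two examples above (rank by scoring information, take the top k required by the target operation) can be sketched as follows; the function and data names are illustrative assumptions:

```python
def select_targets(scored_images, k):
    """Rank candidate images by score, largest first, and return the top k.

    `scored_images` is a list of (image_name, score) pairs; `k` comes from
    the requirement information of the target operation (e.g. 9 for the
    album-upload example, 1 for the cover-extraction example).
    """
    ranked = sorted(scored_images, key=lambda item: item[1], reverse=True)
    return [name for name, score in ranked[:k]]

# Example: pick the single best image as an album cover
cover = select_targets([("a.jpg", 3.1), ("b.jpg", 4.7), ("c.jpg", 4.2)], 1)
```

For the display operation, the same call with a larger `k` returns the images in the order they would be published.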
In addition, if the source of the image to be processed is an image being photographed, the preview image may be evaluated in combination with the user preference information to perform a target operation of generating an image. If the scoring information of the preview image is larger than a score threshold, such as 5 points, the preview image can be output directly as the final photographing result; if it is smaller than the score threshold, image operations conforming to the user preference information (such as beautifying or applying a filter) can be performed on the preview image according to the user preference information, so that the preview image is mapped into a target image conforming to the user preference information, and the mapped target image is used as the final photographing result. By scoring the preview image in the shooting state based on the user preference information, a captured image that conforms to the user's aesthetics can be obtained, which improves the shooting effect and the user experience, reduces the user's later retouching work, and provides convenience for the user.
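The branch above reduces to a simple threshold decision. In this sketch, the `enhance` callable stands in for the preference-conforming beautify/filter operations, and treating a score equal to the threshold as passing is an assumption (the text only distinguishes "larger" and "smaller"):

```python
def handle_preview(preview_image, score, threshold=5.0,
                   enhance=lambda img: img + "+beautified"):
    """Decide the final photographing result from the preview's score.

    Sketch only: `threshold`, the `enhance` stand-in, and the >= comparison
    at the boundary are assumptions for illustration.
    """
    if score >= threshold:
        return preview_image          # output the preview directly
    return enhance(preview_image)     # map the preview per user preference
```

In a real pipeline `score` would come from the remapped distribution (e.g. via a distribution-to-score mapping) and `enhance` from the user's most-used image operations.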
The overall flow of the image aesthetic quality evaluation is shown schematically in fig. 6 and mainly comprises the following parts: a common data pool 601, an aesthetic quality evaluation model 602, a remapping module 603, a feedback module 604, and a user 605. On the basis of these parts, the main interactive process of the image aesthetic quality evaluation may comprise the following steps:
firstly, the server performs training and tuning according to the data set in the data pool (sample images and manually annotated scoring information of the sample images) to obtain a public aesthetic quality evaluation model; secondly, the public aesthetic quality evaluation model is deployed to the user side; thirdly, the feedback module and the remapping model are deployed to the user side. Further, the feedback module collects the user behavior data of each user according to a preset period and a preset collection mode, determines the user preference information of the user from the user behavior data, and transmits the user preference information to the remapping module, so that the remapping model learns new mapping parameters from the user preference information and an updated remapping model is obtained from those parameters. The output distribution of the public aesthetic quality evaluation model is then biased according to the updated remapping model, yielding biased scoring information that conforms to the user preference information, and personalized aesthetic quality evaluation of the image to be processed is achieved according to this newly obtained scoring information.
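The per-user loop above (collect feedback, learn new mapping parameters, bias the public model's output) can be sketched as a small module. The class name and the count-based update rule are assumptions of this sketch, chosen to match the worked example earlier in the text:

```python
class PerUserRemapper:
    """Per-user remapping module fed by a feedback module (sketch).

    Mapping parameters are learned from operation usage counts; the public
    model's output distribution is biased by them and renormalized.
    """

    def __init__(self, n_bins=5):
        # uniform parameters before any feedback: no bias is applied yet
        self.weights = [1.0 / n_bins] * n_bins

    def update(self, usage_counts):
        # learn new mapping parameters from user preference information
        total = float(sum(usage_counts))
        self.weights = [c / total for c in usage_counts]

    def remap(self, pred_dist):
        # bias the public model's predicted distribution, then renormalize
        biased = [p * w for p, w in zip(pred_dist, self.weights)]
        s = sum(biased)
        return [b / s for b in biased]
```

With uniform weights the module returns the public distribution unchanged, so a new user sees the general evaluation until feedback accumulates.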
The remapping models correspond to the feedback modules one to one, and each user corresponds to one feedback module and one remapping module. Therefore, based on the general aesthetic quality evaluation model, the remapping model, and the feedback module, scoring information that conforms to each user's aesthetic preference can be determined for each user by combining that user's preference information on the basis of the general evaluation. Personalized aesthetic quality evaluation is thus realized while the public aesthetic quality is preserved, which improves the effect, accuracy, and pertinence of the aesthetic quality evaluation.
In the present exemplary embodiment, there is also provided an image quality evaluation apparatus. As shown in fig. 7, the apparatus 700 may include: an image obtaining module 701, configured to obtain an image to be processed; a score determining module 702, configured to perform feature extraction on the image to be processed, perform prediction bias processing on the feature extraction result according to user preference information, and generate scoring information used for representing the user preference information; and an image evaluation module 703, configured to perform aesthetic quality evaluation on the image to be processed according to the scoring information to obtain an evaluation result of the image to be processed.
In an exemplary embodiment of the present disclosure, the score determining module includes: the prediction score acquisition module is used for performing feature extraction on the image to be processed so as to perform prediction processing, and generating prediction score information of the image to be processed; and the bias module is used for carrying out bias mapping on the prediction scoring information according to the user preference information to acquire the scoring information used for representing the user preference information.
In an exemplary embodiment of the present disclosure, the prediction score obtaining module includes: the characteristic extraction module is used for extracting the characteristics of the image to be processed to obtain characteristic data; and the characteristic data processing module is used for acquiring the prediction score distribution of the image to be processed through the characteristic data and determining the prediction score information according to the prediction score distribution.
In an exemplary embodiment of the present disclosure, the bias module includes: the user preference determining module is used for acquiring the user preference information according to the user behavior data; the normalization module is used for performing normalization processing on the user preference information to obtain normalized user preference information; and the bias mapping module is used for carrying out bias mapping on the prediction scoring information based on the normalized user preference information and carrying out normalization processing on a bias mapping result to obtain the scoring information.
In an exemplary embodiment of the present disclosure, the offset mapping module includes: the parameter updating module is used for updating the mapping parameters of the remapping model used for mapping the prediction scoring information according to the normalized user preference information to obtain an updated remapping model; and the model bias module is used for performing bias mapping on the prediction score distribution corresponding to the prediction score information according to the updated remapping model and performing normalization processing on the bias mapping result so as to map the prediction score information into the score information.
In an exemplary embodiment of the present disclosure, the user preference information is stored at the user terminal.
In an exemplary embodiment of the present disclosure, the apparatus further includes: and the target image acquisition module is used for selecting at least one image to be processed from the plurality of images to be processed as a target image according to the requirement information of target operation according to the arrangement sequence of the grading information of the plurality of images to be processed so as to perform the target operation on the target image if the plurality of images to be processed exist.
It should be noted that, the specific details of each module in the image quality evaluation apparatus have been elaborated in the corresponding method, and therefore are not described herein again.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 800 according to this embodiment of the invention is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in fig. 8, electronic device 800 is in the form of a general purpose computing device. The components of the electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one memory unit 820, a bus 830 connecting various system components (including the memory unit 820 and the processing unit 810), and a display unit 840.
Wherein the storage unit stores program code that is executable by the processing unit 810 to cause the processing unit 810 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification. For example, the processing unit 810 may perform the steps as shown in fig. 2.
The storage unit 820 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 8201 and/or a cache memory unit 8202, and may further include a read-only memory unit (ROM) 8203.
The storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 830 may be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The display unit 840 may be a display having a display function, used to show, through the display, the processing result obtained by the processing unit 810 performing the method in the present exemplary embodiment. The display includes, but is not limited to, a liquid crystal display or another display.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 800 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 850. Also, the electronic device 800 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 860. As shown, the network adapter 860 communicates with the other modules of the electronic device 800 via the bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
The program product for implementing the above method may employ a portable compact disc read-only memory (CD-ROM) including program code, and may be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. An image quality evaluation method is characterized by comprising:
acquiring an image to be processed;
extracting the features of the image to be processed, performing prediction bias processing on the feature extraction result according to the user preference information, and generating scoring information for representing the user preference information;
and performing aesthetic quality evaluation on the image to be processed according to the grading information to obtain an evaluation result of the image to be processed.
2. The image quality evaluation method according to claim 1, wherein the performing feature extraction on the image to be processed and performing prediction bias processing on a feature extraction result according to user preference information to generate scoring information for representing the user preference information comprises:
performing feature extraction on the image to be processed to perform prediction processing, and generating prediction scoring information of the image to be processed;
and performing bias mapping on the prediction scoring information according to the user preference information to acquire the scoring information for representing the user preference information.
3. The image quality evaluation method according to claim 2, wherein the performing feature extraction on the image to be processed to perform prediction processing to generate prediction score information of the image to be processed includes:
extracting the features of the image to be processed to obtain feature data;
and acquiring the prediction score distribution of the image to be processed according to the characteristic data, and determining the prediction score information according to the prediction score distribution.
4. The image quality evaluation method according to claim 2, wherein the obtaining the score information for representing the user preference information by bias mapping the prediction score information according to the user preference information includes:
acquiring the user preference information according to the user behavior data;
normalizing the user preference information to obtain normalized user preference information;
and performing bias mapping on the prediction scoring information based on the normalized user preference information, and performing normalization processing on bias mapping results to obtain the scoring information.
5. The image quality evaluation method according to claim 4, wherein the bias mapping the prediction score information based on the normalized user preference information and normalizing a bias mapping result to obtain the score information comprises:
updating the mapping parameters of the remapping model used for mapping the prediction scoring information according to the normalized user preference information to obtain an updated remapping model;
and performing bias mapping on the prediction score distribution corresponding to the prediction score information according to the updated remapping model, and performing normalization processing on the bias mapping result so as to map the prediction score information into the score information.
6. The image quality evaluation method according to any one of claims 1 to 5, wherein the user preference information is stored at a user terminal.
7. The image quality evaluation method according to claim 1, characterized in that the method further comprises:
if a plurality of images to be processed exist, selecting at least one image to be processed from the plurality of images to be processed as a target image according to the ranking order of the grading information of the plurality of images to be processed and the requirement information of the target operation so as to perform the target operation on the target image.
8. An image quality evaluation apparatus, comprising:
the image acquisition module is used for acquiring an image to be processed;
the score determining module is used for extracting the features of the image to be processed, executing prediction bias processing on the feature extraction result according to the user preference information and generating score information for representing the user preference information;
and the image evaluation module is used for performing aesthetic quality evaluation on the image to be processed according to the grading information to obtain an evaluation result of the image to be processed.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the image quality assessment method of any of claims 1-7 via execution of the executable instructions.
10. A computer-readable storage medium on which a computer program is stored, the computer program, when being executed by a processor, implementing the image quality evaluation method according to any one of claims 1 to 7.
CN201911379449.2A 2019-12-27 2019-12-27 Image quality evaluation method, image quality evaluation device, electronic device, and storage medium Pending CN111199540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911379449.2A CN111199540A (en) 2019-12-27 2019-12-27 Image quality evaluation method, image quality evaluation device, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911379449.2A CN111199540A (en) 2019-12-27 2019-12-27 Image quality evaluation method, image quality evaluation device, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN111199540A true CN111199540A (en) 2020-05-26

Family

ID=70747551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911379449.2A Pending CN111199540A (en) 2019-12-27 2019-12-27 Image quality evaluation method, image quality evaluation device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN111199540A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561334A (en) * 2020-12-16 2021-03-26 咪咕文化科技有限公司 Grading method and device for reading object, electronic equipment and storage medium
CN113179421A (en) * 2021-04-01 2021-07-27 影石创新科技股份有限公司 Video cover selection method and device, computer equipment and storage medium
CN113298139A (en) * 2021-05-21 2021-08-24 广州文远知行科技有限公司 Image data optimization method, device, equipment and medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005130273A (en) * 2003-10-24 2005-05-19 Canon Inc Image processing apparatus and image processing method
CN104581141A (en) * 2015-01-09 2015-04-29 宁波大学 Three-dimensional picture visual comfort evaluation method
US20170103512A1 (en) * 2015-10-13 2017-04-13 Siemens Healthcare Gmbh Learning-based framework for personalized image quality evaluation and optimization
CN107610123A (en) * 2017-10-11 2018-01-19 中共中央办公厅电子科技学院 Image aesthetic quality evaluation method based on deep convolutional neural networks
CN107798652A (en) * 2017-10-31 2018-03-13 广东欧珀移动通信有限公司 Image processing method and device, readable storage medium, and electronic device
CN108122029A (en) * 2017-12-29 2018-06-05 北京奇虎科技有限公司 Camera special-effect recommendation method and device
WO2018192245A1 (en) * 2017-04-19 2018-10-25 中国电子科技集团公司电子科学研究院 Automatic scoring method for photo based on aesthetic assessment
CN108984657A (en) * 2018-06-28 2018-12-11 Oppo广东移动通信有限公司 Image recommendation method and apparatus, terminal, and readable storage medium
CN109508321A (en) * 2018-09-30 2019-03-22 Oppo广东移动通信有限公司 Image presentation method and related product
CN109902912A (en) * 2019-01-04 2019-06-18 中国矿业大学 Personalized image aesthetic evaluation method based on personality traits
CN109978836A (en) * 2019-03-06 2019-07-05 华南理工大学 Meta-learning-based personalized image aesthetic evaluation method, system, medium, and device
CN110223292A (en) * 2019-06-20 2019-09-10 厦门美图之家科技有限公司 Image evaluation method and device, and computer-readable storage medium
CN110473164A (en) * 2019-05-31 2019-11-19 北京理工大学 Image aesthetic quality evaluation method based on attention mechanism

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHE HUA YEH ET AL.: "Personalized photograph ranking and selection system considering positive and negative user feedback", ACM Transactions on Multimedia Computing, Communications and Applications, pages 361 - 180 *
JIAN REN ET AL.: "Personal Image Aesthetics", pages 638 - 647 *
KAYOUNG PARK ET AL.: "Personalized image aesthetic quality assessment by joint regression and ranking", IEEE Winter Conference on Applications of Computer Vision, pages 1206 - 1215 *
PEI LV ET AL.: "USAR: An interactive user-specific aesthetic ranking framework for images", Social & Emotional Multimedia, pages 1328 - 1337 *
WANG MENG: "Research on image quality evaluation algorithm based on deep features", pages 1 - 69 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112561334A (en) * 2020-12-16 2021-03-26 咪咕文化科技有限公司 Grading method and device for reading object, electronic equipment and storage medium
CN113179421A (en) * 2021-04-01 2021-07-27 影石创新科技股份有限公司 Video cover selection method and device, computer equipment and storage medium
CN113179421B (en) * 2021-04-01 2023-03-10 影石创新科技股份有限公司 Video cover selection method and device, computer equipment and storage medium
CN113298139A (en) * 2021-05-21 2021-08-24 广州文远知行科技有限公司 Image data optimization method, device, equipment and medium
CN113298139B (en) * 2021-05-21 2024-02-27 广州文远知行科技有限公司 Image data optimization method, device, equipment and medium

Similar Documents

Publication Publication Date Title
US10613726B2 (en) Removing and replacing objects in images according to a directed user conversation
CN108830235B (en) Method and apparatus for generating information
CN109816039B (en) Cross-modal information retrieval method and device and storage medium
CN108898186B (en) Method and device for extracting image
CN108805091B (en) Method and apparatus for generating a model
JP2022058915A (en) Method and device for training image recognition model, method and device for recognizing image, electronic device, storage medium, and computer program
JP6994588B2 (en) Face feature extraction model training method, face feature extraction method, equipment, equipment and storage medium
CN109522950B (en) Image scoring model training method and device and image scoring method and device
CN110956202B (en) Image training method, system, medium and intelligent device based on distributed learning
CN111199540A (en) Image quality evaluation method, image quality evaluation device, electronic device, and storage medium
US20210117484A1 (en) Webpage template generation
CN110009059B (en) Method and apparatus for generating a model
KR20210091057A (en) Method and apparatus for detecting temporal action of video, electronic device and stroage medium
CN111199541A (en) Image quality evaluation method, image quality evaluation device, electronic device, and storage medium
EP4113376A1 (en) Image classification model training method and apparatus, computer device, and storage medium
WO2019118236A1 (en) Deep learning on image frames to generate a summary
CN112487242A (en) Method and device for identifying video, electronic equipment and readable storage medium
CN112380392A (en) Method, apparatus, electronic device and readable storage medium for classifying video
CN114037003A (en) Question-answer model training method and device and electronic equipment
CN111797258B (en) Image pushing method, system, equipment and storage medium based on aesthetic evaluation
CN112000803B (en) Text classification method and device, electronic equipment and computer readable storage medium
CN116912187A (en) Image generation model training and image generation method, device, equipment and medium
CN109034085B (en) Method and apparatus for generating information
CN111062914A (en) Method, apparatus, electronic device and computer readable medium for acquiring facial image
CN113190154B (en) Model training and entry classification methods, apparatuses, devices, storage medium and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination