CN112102304A - Image processing method, image processing device, computer equipment and computer readable storage medium

Info

Publication number: CN112102304A
Application number: CN202011015545.1A
Authority: CN (China)
Prior art keywords: image, aesthetic, sample, personalized, network
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 李雷达 (Li Leida), 祝汉城 (Zhu Hancheng), 唐旭 (Tang Xu)
Current assignee: Tencent Technology Shenzhen Co Ltd; Xidian University
Original assignee: Tencent Technology Shenzhen Co Ltd; Xidian University
Application filed by Tencent Technology Shenzhen Co Ltd and Xidian University; priority to CN202011015545.1A; publication of CN112102304A.


Classifications

    • G06T7/0002 Inspection of images, e.g. flaw detection (G Physics > G06 Computing; calculating or counting > G06T Image data processing or generation, in general > G06T7/00 Image analysis)
    • G06N3/045 Combinations of networks (G06N Computing arrangements based on specific computational models > G06N3/00 Computing arrangements based on biological models > G06N3/02 Neural networks > G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/08 Learning methods (G06N3/02 Neural networks)


Abstract

The application relates to an image processing method, an image processing device, a computer device and a computer readable storage medium. The method comprises the following steps: acquiring an image to be processed and a target user identifier; extracting image aesthetic attribute characteristics corresponding to the image to be processed; determining a generalized aesthetic evaluation result of the image to be processed according to the image aesthetic attribute characteristics; according to the image aesthetic attribute characteristics and the user character characteristics corresponding to the target user identification, determining personalized aesthetic evaluation correction parameters corresponding to the target user identification; and determining a personalized aesthetic evaluation result corresponding to the target user identification based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation correction parameter. The method can improve the accuracy of the aesthetic evaluation result.

Description

Image processing method, image processing device, computer equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a computer device, and a computer-readable storage medium.
Background
With the research and development of artificial intelligence technology, artificial intelligence has been applied in many fields, including image aesthetic evaluation. Image aesthetic evaluation uses a computer system to simulate human aesthetic perception of images and thereby evaluate them aesthetically. It can be applied in fields such as image recommendation, image enhancement, image retrieval, and personal album management.
Traditional image aesthetic evaluation mainly evaluates an image based on its aesthetic attributes. However, because of differences in culture, age, gender, and the like, different users apply different aesthetic standards to images, so an evaluation result based only on the aesthetic attributes of the image cannot satisfy the aesthetic preferences of some users and is therefore inaccurate.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide an image processing method, an image processing apparatus, a computer device, and a computer-readable storage medium capable of improving the accuracy of an aesthetic evaluation result.
A method of image processing, the method comprising:
acquiring an image to be processed and a target user identifier;
extracting image aesthetic attribute characteristics corresponding to the image to be processed;
determining a generalized aesthetic evaluation result of the image to be processed according to the aesthetic attribute characteristics of the image;
determining an individualized aesthetic evaluation correction parameter corresponding to the target user identifier according to the aesthetic attribute characteristics of the image and the corresponding user character characteristics of the target user identifier;
and determining the personalized aesthetic evaluation result corresponding to the target user identification based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation correction parameter.
An image processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring the image to be processed and the target user identification;
the extraction module is used for extracting the image aesthetic attribute characteristics corresponding to the image to be processed;
the generalized aesthetic evaluation module is used for determining a generalized aesthetic evaluation result of the image to be processed according to the aesthetic attribute characteristics of the image;
the correction parameter determining module is used for determining an individualized aesthetic evaluation correction parameter corresponding to the target user identifier according to the image aesthetic attribute characteristics and the user character characteristics corresponding to the target user identifier;
and the personalized aesthetic evaluation module is used for determining a personalized aesthetic evaluation result corresponding to the target user identifier based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation correction parameter.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an image to be processed and a target user identifier;
extracting image aesthetic attribute characteristics corresponding to the image to be processed;
determining a generalized aesthetic evaluation result of the image to be processed according to the aesthetic attribute characteristics of the image;
determining an individualized aesthetic evaluation correction parameter corresponding to the target user identifier according to the aesthetic attribute characteristics of the image and the corresponding user character characteristics of the target user identifier;
and determining the personalized aesthetic evaluation result corresponding to the target user identification based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation correction parameter.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring an image to be processed and a target user identifier;
extracting image aesthetic attribute characteristics corresponding to the image to be processed;
determining a generalized aesthetic evaluation result of the image to be processed according to the aesthetic attribute characteristics of the image;
determining an individualized aesthetic evaluation correction parameter corresponding to the target user identifier according to the aesthetic attribute characteristics of the image and the corresponding user character characteristics of the target user identifier;
and determining the personalized aesthetic evaluation result corresponding to the target user identification based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation correction parameter.
The image processing method, the image processing apparatus, the computer device, and the computer-readable storage medium determine a generalized aesthetic evaluation result according to the image aesthetic attribute features of the image to be processed, determine a personalized aesthetic evaluation correction parameter according to the image aesthetic attribute features and the user character features corresponding to the target user identifier, and correct the generalized aesthetic evaluation result based on the personalized aesthetic evaluation correction parameter to obtain a personalized aesthetic evaluation result corresponding to the target user identifier. The generalized aesthetic evaluation result is obtained by evaluating the aesthetic characteristics of the image itself and reflects the aesthetic quality of the image at an objective level. It is then corrected according to the aesthetic deviation corresponding to the user character features, so that the corrected personalized aesthetic evaluation result approaches the aesthetic evaluation the user would make based on his or her own aesthetics. In other words, the image is evaluated by combining the subjective aesthetics of the user with the objective aesthetic quality of the image, so that the personalized aesthetic evaluation result conforms to the aesthetic preference of the individual user, thereby improving its accuracy.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of an image processing method;
FIG. 2 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 3 is a diagram of a personalized aesthetic evaluation model in one embodiment;
FIG. 4 is a comparison of personalized aesthetic evaluation performance in one embodiment;
FIG. 5 is a schematic diagram illustrating a process for obtaining personality traits of a user in one embodiment;
FIG. 6 is a schematic diagram of a personalized aesthetic evaluation model in one embodiment;
FIG. 7(a) is a schematic diagram of training of an image aesthetic property feature extraction network in one embodiment;
FIG. 7(b) is a training diagram of a generalized aesthetic evaluation network in one embodiment;
FIG. 7(c) is a schematic diagram of training of a personalized aesthetic evaluation modification network in one embodiment;
FIG. 8 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 9 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 10 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision making.
Artificial intelligence is a comprehensive discipline involving a wide range of fields, covering both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how computers can simulate or implement human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
With the research and progress of artificial intelligence technology, artificial intelligence has been developed and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, image recommendation, image enhancement, image retrieval, and personal album management.
The solutions provided in the embodiments of the present application relate to artificial intelligence technologies such as machine learning, and are described in detail in the following embodiments:
the image processing method provided by the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The terminal 102 acquires an image to be processed and a target user identifier, and sends the image to be processed and the target user identifier to the server 104; then, the server 104 extracts the image aesthetic attribute characteristics corresponding to the image to be processed; then, the server 104 determines a generalized aesthetic evaluation result of the image to be processed according to the aesthetic attribute characteristics of the image; then, the server 104 determines a personalized aesthetic evaluation correction parameter corresponding to the target user identifier according to the image aesthetic attribute feature and the user character feature corresponding to the target user identifier; next, the server 104 determines a personalized aesthetic evaluation result corresponding to the target user identifier based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation modification parameter.
The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 104 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud storage, network services, cloud communication, big data, and an artificial intelligence platform. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
In an embodiment, as shown in fig. 2, an image processing method is provided, and this embodiment is mainly exemplified by applying the method to the computer device (terminal 102 or server 104) in fig. 1, and includes the following steps:
step 202, acquiring an image to be processed and a target user identifier.
The image to be processed is an image to be subjected to aesthetic evaluation through the method provided by the embodiment of the application.
In the present application, the aesthetic perception of a specific user can be simulated to perform aesthetic evaluation on an image, so that the obtained aesthetic evaluation result approaches the evaluation that the user would make based on his or her own aesthetics, thereby improving the accuracy of the aesthetic evaluation result.
The target user identifier is the user identifier of the target user. A user identifier describes the identity information of a user and is unique; it may specifically be an application account, a mobile phone number, an identity document number, or the like. The image to be processed is aesthetically evaluated by simulating the target user's aesthetic perception of images, so that the evaluation result matches the aesthetics of the target user.
Specifically, an image processing application runs on the terminal, and the target user registers the target user identifier in the image processing application. The terminal may start the image processing application based on an operation of the target user, and the image processing application performs aesthetic evaluation on the image to be processed based on the aesthetic understanding of the target user.
In one embodiment, the source of the image to be processed is related to the image aesthetic evaluation task, that is, the image to be processed corresponding to the image aesthetic evaluation task is obtained. The image aesthetic evaluation task may be image recommendation, image enhancement, image retrieval, personal photo album management, and the like. Taking image recommendation as an example, the computer device acquires a plurality of images to be recommended, preferentially recommends images which are matched with the aesthetic sense of the target user to the target user based on the aesthetic sense understanding of the target user, and therefore the image recommendation accuracy is improved.
And step 204, extracting the image aesthetic attribute characteristics corresponding to the image to be processed.
Wherein the image aesthetic attribute feature is data reflecting the aesthetic characteristics of an image. The image aesthetic characteristics are features of the image at an aesthetic level, such as composition, color, brightness, depth-of-field, and content characteristics. The composition characteristic describes the composition mode of an image, which may specifically be rule-of-thirds composition, symmetric composition, frame composition, center composition, leading-line composition, diagonal composition, triangle composition, balanced composition, and the like; the color characteristic describes the color composition of the image, such as color temperature, hue, color components, and color harmony; the brightness characteristic describes how bright the image is; the depth-of-field characteristic describes the degree of background blurring and the range in which the subject object is in focus; the content characteristic describes the content contained in the image.
In one embodiment, the image aesthetic attribute features may include a plurality of feature dimensions, each corresponding to one image aesthetic characteristic. For example, the image aesthetic attribute features may include 11 feature dimensions, corresponding respectively to: interesting content, object emphasis, content repetition, brightness, color harmony, color vividness, depth of field, motion blur, rule-of-thirds composition, symmetric composition, and balanced composition. The feature value of each feature dimension may be used to characterize the score of the image on the corresponding image aesthetic characteristic.
In one embodiment, step 204 includes: acquiring an image aesthetic attribute feature extraction network; inputting the image to be processed into an image aesthetic property feature extraction network, and outputting the image aesthetic property feature through the image aesthetic property feature extraction network.
The image aesthetic attribute feature extraction network is a machine learning model with the image aesthetic attribute feature extraction capability. The image aesthetic property feature extraction network can be trained through the image samples and the image aesthetic property feature training labels.
It can be understood that any general machine learning model with image aesthetic attribute feature extraction capability that extracts image aesthetic attribute features as required by the embodiments of the present application, such as a ResNet model or a ResNet-18 model, can be used as the image aesthetic attribute feature extraction network of the embodiments of the present application.
Specifically, the computer equipment acquires the image to be processed and the image aesthetic property feature extraction network, inputs the image to be processed into the image aesthetic property feature extraction network, and obtains the image aesthetic property features output by the image aesthetic property feature extraction network.
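For illustration, the following is a minimal sketch of such an image aesthetic attribute feature extraction network, assuming a PyTorch implementation with a ResNet-18 backbone and one sigmoid output per aesthetic attribute; the class name, layer sizes, and input resolution are illustrative assumptions rather than details taken from this application.

```python
# A minimal sketch, assuming PyTorch and a ResNet-18 backbone; names and
# hyperparameters are illustrative, not taken from this application.
import torch
import torch.nn as nn
from torchvision import models

class AestheticAttributeNet(nn.Module):
    def __init__(self, num_attributes: int = 11):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Convolutional layers up to and including global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        # Fully connected head; sigmoid normalizes each attribute score to [0, 1].
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(backbone.fc.in_features, num_attributes),
            nn.Sigmoid(),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image))

# Usage: one 11-dimensional aesthetic attribute vector per input image.
attrs = AestheticAttributeNet()(torch.randn(1, 3, 224, 224))  # shape (1, 11)
```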
And step 206, determining a generalized aesthetic evaluation result of the image to be processed according to the aesthetic attribute characteristics of the image.
The generalized aesthetic evaluation result is the result of evaluating the aesthetic characteristics of the image itself and, to some extent, reflects popular aesthetics.
In the application, the aesthetic evaluation can be performed on the image based on the aesthetic property of the image, and the aesthetic evaluation result obtained based on the aesthetic property evaluation of the image reflects the aesthetic quality of the image on an objective level.
In one embodiment, the generalized aesthetic evaluation result may be a generalized aesthetic evaluation score, which may be positively correlated with the aesthetic quality of the image at an objective level, the higher the generalized aesthetic evaluation score, the higher the aesthetic quality of the image at the objective level.
In one embodiment, step 206 includes: acquiring a generalized aesthetic evaluation network; and inputting the image aesthetic attribute characteristics into a generalized aesthetic evaluation network, and outputting a generalized aesthetic evaluation result through the generalized aesthetic evaluation network.
Wherein, the generalized aesthetic evaluation network is a machine learning model with the generalized aesthetic evaluation capability. The generalized aesthetic evaluation network may be trained by the image aesthetic attribute feature samples and the aesthetic evaluation score probability distribution training labels.
Specifically, the computer equipment acquires the image aesthetic attribute characteristics and the generalized aesthetic evaluation network, and the computer equipment inputs the image aesthetic attribute characteristics into the generalized aesthetic evaluation network to obtain a generalized aesthetic evaluation result output by the generalized aesthetic evaluation network.
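As a hedged illustration, the generalized aesthetic evaluation network might be realized as a small fully connected model that maps the 11-dimensional attribute vector to a probability distribution over discrete score bins (the training labels described above are score probability distributions) and takes the expected score as the generalized aesthetic evaluation result; the bin count and layer sizes below are assumptions.

```python
# A sketch assuming the network outputs a distribution over discrete score
# bins; the bin count (10) and layer sizes are assumptions.
import torch
import torch.nn as nn

class GeneralizedAestheticNet(nn.Module):
    def __init__(self, num_attributes: int = 11, num_score_bins: int = 10):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_attributes, 64),
            nn.ReLU(),
            nn.Linear(64, num_score_bins),
            nn.Softmax(dim=-1),
        )
        # Assumed bin centers: scores 1..num_score_bins.
        self.register_buffer("bins", torch.arange(1, num_score_bins + 1).float())

    def forward(self, attrs: torch.Tensor) -> torch.Tensor:
        probs = self.mlp(attrs)                 # aesthetic score distribution
        return (probs * self.bins).sum(dim=-1)  # expected value = generalized score
```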
And step 208, determining a personalized aesthetic evaluation correction parameter corresponding to the target user identifier according to the image aesthetic attribute characteristics and the user character characteristics corresponding to the target user identifier.
Wherein the user personality characteristic is data reflecting a personality characteristic of the user, the user personality characteristic having a correlation with an aesthetic perception of the image by the user.
Considering that different users apply different aesthetic judgment standards to images and that user character is an important factor influencing aesthetic preference, the present application introduces the user character feature, a parameter associated with the user's aesthetic perception of images. User groups are divided based on user character features, the aesthetic deviation of different user character features relative to the generalized aesthetic evaluation result is studied, and the generalized aesthetic evaluation result is then corrected according to the aesthetic deviation of the target user's character features, so that the corrected personalized aesthetic evaluation result approaches the aesthetic evaluation the target user would make based on his or her own aesthetics.
In one embodiment, the user personality characteristics may include a plurality of characteristic dimensions, one for each personality characteristic. The feature value of each feature dimension may be used to characterize the score of the user at the corresponding personality trait, which is related not only to the personality trait of the user itself, but also to the aesthetic perception of the image by the user.
For example, the user character features may include 5 feature dimensions, corresponding respectively to openness, conscientiousness, extraversion, agreeableness, and neuroticism. These 5 personality traits are collectively known as the Big Five personality traits, one of the standard models of personality structure. Openness characterizes the degree of curiosity, intelligence, imagination, and creativity; conscientiousness characterizes the sense of responsibility, reliability, and persistence in work; extraversion characterizes the degree of talkativeness and confidence; agreeableness characterizes the degree of altruism, cooperativeness, and trustworthiness; neuroticism characterizes the degree of calmness, enthusiasm, and sense of security.
And the personalized aesthetic evaluation correction parameter is used for correcting the generalized aesthetic evaluation result, so that the corrected personalized aesthetic evaluation result is matched with the aesthetic of the target user.
In one embodiment, the user personality characteristics corresponding to the target user identifier may be predetermined, and the computer device may obtain the user personality characteristics corresponding to the input target user identifier, or extract the user personality characteristics corresponding to the target user identifier from pre-stored user personality characteristics.
In one embodiment, step 208 includes: acquiring a personalized aesthetic evaluation correction network; and obtaining a personalized aesthetic evaluation correction parameter according to the image aesthetic attribute characteristics and the user character characteristics through a personalized aesthetic evaluation correction network.
The personalized aesthetic evaluation correction network is a machine learning model with the personalized aesthetic evaluation correction parameter calculation capability. The personalized aesthetic evaluation modification network can be trained through the image aesthetic attribute feature sample, the user character feature sample and the personalized aesthetic evaluation modification parameter training label.
In one embodiment, the computer device obtains the personalized aesthetic evaluation correction parameters according to the fusion result of the image aesthetic attribute characteristics and the user character characteristics through the personalized aesthetic evaluation correction network.
Feature fusion may be implemented by concatenating, adding, or matrix-multiplying the features. For example, fusing the image aesthetic attribute features with 11 feature dimensions and the user character features with 5 feature dimensions by matrix multiplication yields fused features with 55 feature dimensions.
Specifically, the computer device obtains image aesthetic attribute characteristics, user character characteristics and a personalized aesthetic evaluation correction network, determines a fusion result of the image aesthetic attribute characteristics and the user character characteristics, inputs the fusion result into the personalized aesthetic evaluation correction network, and obtains personalized aesthetic evaluation correction parameters output by the personalized aesthetic evaluation correction network. Or the computer equipment inputs the image aesthetic attribute characteristics and the user character characteristics into a personalized aesthetic evaluation correction network, the personalized aesthetic evaluation correction network determines the fusion result of the image aesthetic attribute characteristics and the user character characteristics at first, and then outputs personalized aesthetic evaluation correction parameters according to the fusion result.
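A minimal sketch of the correction network under the matrix-multiplication (outer-product) fusion described above: 11 attribute dimensions times 5 character dimensions give a 55-dimensional fused feature, which a small fully connected network maps to a scalar correction parameter. The layer sizes are illustrative.

```python
# Sketch of the outer-product fusion and correction head; layer sizes assumed.
import torch
import torch.nn as nn

class CorrectionNet(nn.Module):
    def __init__(self, attr_dim: int = 11, character_dim: int = 5):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(attr_dim * character_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, attrs: torch.Tensor, character: torch.Tensor) -> torch.Tensor:
        # Per-sample outer product: (B, 11, 1) x (B, 1, 5) -> (B, 11, 5) -> (B, 55).
        fused = torch.bmm(attrs.unsqueeze(2), character.unsqueeze(1)).flatten(1)
        return self.mlp(fused).squeeze(-1)  # scalar correction parameter per sample
```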
And step 210, determining a personalized aesthetic evaluation result corresponding to the target user identification based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation correction parameter.
Wherein the personalized aesthetic evaluation result is an aesthetic evaluation result obtained by simulating the aesthetic of the image to be processed by the target user.
In one embodiment, the personalized aesthetic evaluation result may be a personalized aesthetic evaluation score, the personalized aesthetic evaluation score may reflect the degree of matching between the subjective aesthetics of the user and the aesthetic quality of the image, and the higher the personalized aesthetic evaluation score is, the more the image is consistent with the subjective aesthetics of the user.
Specifically, the computer device adjusts the generalized aesthetic evaluation result according to the personalized aesthetic evaluation correction parameter to obtain a personalized aesthetic evaluation result corresponding to the target user identifier.
In one embodiment, referring to FIG. 3, FIG. 3 is a schematic illustration of a personalized aesthetic evaluation model in one embodiment. It can be seen that the personalized aesthetic evaluation model comprises an image aesthetic attribute feature extraction network, a generalized aesthetic evaluation network and a personalized aesthetic evaluation modification network. Firstly, inputting an image 302 to be processed into an image aesthetic property feature extraction network, and outputting an image aesthetic property feature 304 through the image aesthetic property feature extraction network; inputting the image aesthetic attribute characteristics 304 into a generalized aesthetic evaluation network, and outputting a generalized aesthetic evaluation result 306 through the generalized aesthetic evaluation network; then obtaining a personalized aesthetic evaluation correction parameter 310 according to the image aesthetic attribute feature 304 and the user character feature 308 of the target user identifier through a personalized aesthetic evaluation correction network; finally, based on generalized aesthetic evaluation result 306 and personalized aesthetic evaluation modification parameters 310, personalized aesthetic evaluation result 312 corresponding to the target user identification is determined.
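The three networks sketched above might be wired together as follows; the additive combination in the final line is an assumption, since this application states only that the generalized result is adjusted by the correction parameter.

```python
# End-to-end wiring of the networks sketched above; the "+" in the last line
# (additive correction) is an assumption.
import torch

attr_net = AestheticAttributeNet()
general_net = GeneralizedAestheticNet()
corr_net = CorrectionNet()

def personalized_score(image: torch.Tensor, user_character: torch.Tensor) -> torch.Tensor:
    attrs = attr_net(image)                       # step 204: attribute features
    generalized = general_net(attrs)              # step 206: generalized result
    correction = corr_net(attrs, user_character)  # step 208: correction parameter
    return generalized + correction               # step 210: assumed additive adjustment
```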
To evaluate the method, two traditional personalized aesthetic evaluation methods were selected for comparison, and personalized aesthetic evaluation performance tests were carried out on the FLICKR-AES test set. The personalized aesthetic evaluation performance of the three methods is measured by the Spearman Rank-Order Correlation Coefficient (SROCC), which quantifies the ranking correlation between the personalized aesthetic evaluation results and the ground-truth results; the larger the SROCC value, the better the personalized aesthetic evaluation performance.
Specifically, 10 or 100 images scored by each user in the FLICKR-AES test set are randomly selected as training samples, the remaining images are used as test samples, and training and testing are carried out with each of the three methods. For each method, to eliminate random-selection error, this process is repeated 50 times and the mean SROCC is taken as the personalized aesthetic evaluation performance for each user; finally, the average over the 37 users in the test set is taken as the test result of the method.
Referring to FIG. 4, FIG. 4 is a graphical comparison of personalized aesthetic evaluation performance in one embodiment. It can be seen that the SROCC value obtained by the application for the individual user test is higher than the SROCC value obtained by the traditional personalized aesthetic evaluation method for the individual user test, which indicates that the application has more superior performance in the personalized aesthetic evaluation for the individual user.
The personalized aesthetic evaluation model provided by the application can simulate the personalized aesthetic of an individual user to an image, and experiments prove that the model has better aesthetic evaluation performance compared with the traditional personalized aesthetic evaluation method, and can be widely applied to personalized image aesthetic analysis.
In the image processing method, a generalized aesthetic evaluation result is determined according to the image aesthetic attribute features of the image to be processed; a personalized aesthetic evaluation correction parameter is determined according to the image aesthetic attribute features and the user character features corresponding to the target user identifier; and the generalized aesthetic evaluation result is corrected based on the personalized aesthetic evaluation correction parameter to obtain the personalized aesthetic evaluation result corresponding to the target user identifier. The generalized aesthetic evaluation result is obtained by evaluating the aesthetic characteristics of the image itself and reflects the aesthetic quality of the image at an objective level. Correcting it according to the aesthetic deviation corresponding to the user character features makes the corrected personalized aesthetic evaluation result approach the aesthetic evaluation the user would make based on his or her own aesthetics. That is, the image is evaluated by combining the subjective aesthetics of the user with the objective aesthetic quality of the image, so that the personalized aesthetic evaluation result conforms to the aesthetic preference of the individual user, thereby improving its accuracy.
In one embodiment, the image to be processed is plural; the personalized aesthetic evaluation result is a personalized aesthetic evaluation score; the method further comprises the following steps: acquiring the personalized aesthetic evaluation score of the target user identification on each image to be processed; selecting a preset number of target images from the images to be processed according to the personalized aesthetic evaluation scores; and outputting the target image to a terminal where the target user identification is located.
The source of the image to be processed is related to an image aesthetic evaluation task, wherein the image aesthetic evaluation task can be image recommendation, image enhancement, image retrieval, personal photo album management and the like.
Specifically, the computer device obtains a plurality of images to be processed corresponding to the image aesthetic evaluation tasks, obtains the personalized aesthetic evaluation scores of the target user identifier for the images to be processed, selects a preset number of target images from high to low according to the personalized aesthetic evaluation scores, and outputs the target images to the terminal where the target user identifier is located, so that in each image aesthetic evaluation task, the computer device preferentially displays the images meeting the aesthetic quality of the target user to the target user.
Taking image retrieval as an example, computer equipment acquires retrieval keywords input by a target user identifier, retrieves a plurality of retrieval images corresponding to the retrieval keywords, determines the personalized aesthetic evaluation scores of the target user identifier for the retrieval images by the method provided by the embodiment, selects a preset number of images from high to low according to the personalized aesthetic evaluation scores as retrieval results, and outputs the retrieval results to a terminal where the target user identifier is located so as to preferentially display the images conforming to the aesthetic sense of the target user to the target user, thereby improving the image retrieval efficiency.
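A hypothetical helper for this retrieval scenario, reusing the personalized_score function sketched earlier: rank the candidate images by personalized aesthetic evaluation score and return the top-k for the target user.

```python
# Hypothetical ranking helper; images is any iterable of preprocessed tensors.
def top_k_for_user(images, user_character, k: int = 10):
    scored = [(float(personalized_score(img, user_character)), img) for img in images]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # highest score first
    return [img for _, img in scored[:k]]
```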
The personalized aesthetic evaluation method can be applied to various image aesthetic evaluation tasks, and execution efficiency of the various image aesthetic evaluation tasks is improved.
The application relates to user personality characteristics, which can be predetermined or determined in an application scene according to a reference image. The following provides a method for acquiring the user character characteristics corresponding to the target user identifier.
In one embodiment, referring to fig. 5, fig. 5 is a flow diagram illustrating a process of obtaining a user personality trait in one embodiment. It can be seen that obtaining the user personality characteristics corresponding to the target user identifier includes:
step 502, acquiring more than one reference image corresponding to a target user identifier, wherein each reference image is marked with an individualized aesthetic evaluation result corresponding to the target user identifier; the personalized aesthetic evaluation result is a personalized aesthetic evaluation score.
In one embodiment, the computer device outputs more than one reference image to the target user identification and receives more than one reference image tagged with the personalized aesthetic evaluation result returned by the target user identification. Or the computer equipment directly receives more than one reference image corresponding to the target user identification, and each reference image is marked with an individualized aesthetic evaluation result corresponding to the target user identification.
In one embodiment, only the personalized aesthetic evaluation score corresponding to the target user identifier may be labeled on the reference image, or the personalized aesthetic evaluation score corresponding to the target user identifier and the personalized aesthetic evaluation scores corresponding to other user identifiers may be labeled at the same time.
And 504, acquiring a preferred user character feature extraction network, respectively inputting more than one reference image into the preferred user character feature extraction network, and acquiring the preferred user character features corresponding to the more than one reference image through the preferred user character feature extraction network.
Wherein, for a reference image, the preferred user character feature is a character feature that prefers the reference image. The preferred user personality trait may include a plurality of characteristic dimensions, each characteristic dimension corresponding to a personality trait, a characteristic value for each characteristic dimension characterizing a score of the user at the corresponding personality trait, the score being related only to the personality trait of the user.
It should be understood that the feature values of the preferred user character features relate only to the personality traits of the user, whereas the feature values of the user character features relate not only to the personality traits of the user but also to the user's aesthetic perception of images. For the purpose of distinction, the personality traits involved in the preferred user character features are hereinafter referred to as generic personality traits, and the personality traits involved in the user character features are hereinafter referred to as aesthetic personality traits.
The preferred user character feature extraction network is a machine learning model with preferred user character feature extraction capability. It can be trained through image samples and preferred user character feature training labels.
Specifically, the computer device obtains a reference image and a preferred user character feature extraction network, and the computer device inputs more than one reference image into the preferred user character feature extraction network respectively to obtain preferred user character features corresponding to the more than one reference image output by the preferred user character feature extraction network.
And step 506, acquiring the maximum value and the minimum value of the personalized aesthetic evaluation corresponding to the target user identification of each reference image.
Specifically, the computer device selects a personalized aesthetic evaluation maximum value and a personalized aesthetic evaluation minimum value from personalized aesthetic evaluation scores corresponding to the reference images and the target user identification.
For example, 5 reference images corresponding to the target user identifier are acquired: reference picture a, reference picture B, reference picture C, reference picture D, reference picture E. The personalized aesthetic evaluation score corresponding to the reference image A and the target user identifier is 1, the personalized aesthetic evaluation score corresponding to the reference image B and the target user identifier is 2, the personalized aesthetic evaluation score corresponding to the reference image C and the target user identifier is 3, the personalized aesthetic evaluation score corresponding to the reference image D and the target user identifier is 4, the personalized aesthetic evaluation score corresponding to the reference image E and the target user identifier is 5, the personalized aesthetic evaluation maximum value is 5, and the personalized aesthetic evaluation minimum value is 1.
And step 508, for one reference image, obtaining the preference degree of the target user identifier to the reference image based on the personalized aesthetic evaluation score, the personalized aesthetic evaluation maximum value and the personalized aesthetic evaluation minimum value corresponding to the target user identifier.
In one embodiment, the computer device obtains a first difference value between the maximum value and the minimum value of the personalized aesthetic evaluation, obtains a second difference value between the score of the personalized aesthetic evaluation corresponding to the target user identifier on the reference image and the minimum value of the personalized aesthetic evaluation, and takes the ratio of the second difference value to the first difference value as the preference degree of the target user identifier on the reference image.
Continuing with the above example, the difference between the personalized aesthetic evaluation maximum value and minimum value is 4, so the preference degrees of the target user identifier for reference image A, reference image B, reference image C, reference image D, and reference image E are 0, 1/4, 1/2, 3/4, and 1, respectively.
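In code, this preference-degree computation is a min-max normalization of the user's personalized aesthetic evaluation scores; a minimal sketch (assuming at least two distinct scores) follows.

```python
# Min-max normalization of a user's scores into preference degrees in [0, 1];
# assumes the reference images do not all share the same score.
def preference_degrees(scores: list[float]) -> list[float]:
    s_min, s_max = min(scores), max(scores)
    return [(s - s_min) / (s_max - s_min) for s in scores]

print(preference_degrees([1, 2, 3, 4, 5]))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```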
And step 510, obtaining the user character characteristics corresponding to the target user identification based on the preference degree of the target user identification to each reference image and the corresponding preference user character characteristics of each reference image.
In the application, the personality characteristics of the user are adjusted according to the preference degree of the user on the image, so that the characteristic value of the personality characteristics of the user is related to the aesthetic feeling of the user on the image.
In one embodiment, for one of the reference images, the computer device adjusts the feature values of the respective dimensions of the preference user character features corresponding to the reference image according to the preference degree of the target user identifier to the reference image, so as to obtain the user character features of the target user identifier relative to the reference image. And the computer equipment acquires the average value or weighted average value of the target user identification relative to each dimension of the user character characteristics of each reference image to obtain the user character characteristics corresponding to the target user identification.
With continued reference to the above example, suppose the preferred user character features corresponding to reference images A, B, C, D, and E are $(a_1, a_2, a_3, a_4, a_5)$, $(b_1, b_2, b_3, b_4, b_5)$, $(c_1, c_2, c_3, c_4, c_5)$, $(d_1, d_2, d_3, d_4, d_5)$, and $(e_1, e_2, e_3, e_4, e_5)$, respectively. The user character features of the target user identifier relative to the reference images are then $(0, 0, 0, 0, 0)$ for A, $\frac{1}{4}(b_1, \dots, b_5)$ for B, $\frac{1}{2}(c_1, \dots, c_5)$ for C, $\frac{3}{4}(d_1, \dots, d_5)$ for D, and $(e_1, \dots, e_5)$ for E. The user character feature corresponding to the target user identifier is therefore the average $\frac{1}{5}\left(0 + \frac{1}{4}b_1 + \frac{1}{2}c_1 + \frac{3}{4}d_1 + e_1, \dots, 0 + \frac{1}{4}b_5 + \frac{1}{2}c_5 + \frac{3}{4}d_5 + e_5\right)$.
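A short sketch of step 510 matching the worked example above: weight each reference image's preferred user character features by the corresponding preference degree, then average over the reference images. Function and variable names are illustrative.

```python
# Weighted aggregation of preferred user character features; names assumed.
import torch

def user_character_features(pref_degrees: torch.Tensor,    # shape (R,), one per reference image
                            preferred_feats: torch.Tensor  # shape (R, 5), per-image preferred features
                            ) -> torch.Tensor:
    # Scale each image's features by the user's preference degree, then average.
    return (pref_degrees.unsqueeze(1) * preferred_feats).mean(dim=0)  # shape (5,)
```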
In the embodiment, the user character feature of the user is acquired based on the preference degree of the user to the reference image and the preference user character feature of the reference image, the acquired user character feature is related to the aesthetic feeling of the user to the image, and the aesthetic evaluation result is determined based on the user character feature and the image aesthetic attribute feature subsequently, so that the accuracy of the aesthetic evaluation result can be improved.
In one embodiment, referring to FIG. 6, FIG. 6 is a schematic diagram of a personalized aesthetic evaluation model in one embodiment. It can be seen that the personalized aesthetic evaluation model comprises an image aesthetic attribute feature extraction network, a preferred user character feature extraction network, a generalized aesthetic evaluation network and a personalized aesthetic evaluation modification network.
Specifically, first, the computer device obtains more than one reference image 602 corresponding to the target user identifier, and each reference image 602 is labeled with a personalized aesthetic evaluation score corresponding to the target user identifier. The computer device inputs the more than one reference image 602 into the preferred user character feature extraction network respectively, and obtains the preferred user character features 604 corresponding to the more than one reference images respectively through the preferred user character feature extraction network. The computer device obtains user character features 606 corresponding to the target user identifier according to the personalized aesthetic evaluation scores corresponding to the target user identifier and the corresponding preference user character features 604 of the reference images.
Secondly, after obtaining the user character feature 606 corresponding to the target user identification, the computer device inputs the image to be processed 608 into an image aesthetic property feature extraction network, and outputs an image aesthetic property feature 610 through the image aesthetic property feature extraction network. The image aesthetic attribute features 610 are then input into a generalized aesthetic evaluation network, through which generalized aesthetic evaluation results 612 are output. And obtaining a personalized aesthetic evaluation correction parameter 614 according to the image aesthetic attribute characteristic 610 and the user character characteristic 606 of the target user identifier through a personalized aesthetic evaluation correction network. Finally, based on generalized aesthetic evaluation result 612 and personalized aesthetic evaluation modification parameters 614, personalized aesthetic evaluation result 616 corresponding to the target user identification is determined.
In the embodiment, the personalized aesthetic evaluation model is constructed by using the objective attributes of the image and the subjective attributes of the user, namely, the difference between the personalized aesthetic and the popular aesthetic of the image is modeled by using different user character characteristics, so that the personalized aesthetic evaluation result obtained by the personalized aesthetic evaluation model is more in line with the aesthetic preference of the individual user.
The present application relates to a training method of a personalized aesthetic evaluation model, and is specifically described in the following examples.
Firstly, as for the image aesthetic attribute feature extraction network and the preferred user character feature extraction network, the image aesthetic attribute feature extraction network and the preferred user character feature extraction network can be trained respectively, namely, the image aesthetic attribute feature extraction network is trained through image samples and image aesthetic attribute feature training labels, and the preferred user character feature extraction network is trained through the image samples and the preferred user character feature training labels. The image aesthetic attribute feature extraction network and the preferred user character feature extraction network can also be trained together, and a co-training method is provided below.
In one embodiment, the image aesthetic attribute feature extraction network and the preferred user character feature extraction network are trained together. The two networks share a base sub-network and each includes a corresponding output sub-network. The joint training method includes the following steps: acquiring a first image sample set, a second image sample set, the image aesthetic attribute feature extraction network, and the preferred user character feature extraction network, where each first image sample in the first image sample set has an image aesthetic attribute feature training label and each second image sample in the second image sample set has a preferred user character feature training label; inputting the first image samples into the image aesthetic attribute feature extraction network, performing feature extraction through its base sub-network, and outputting image aesthetic attribute feature prediction results through its output sub-network; inputting the second image samples into the preferred user character feature extraction network, performing feature extraction through its base sub-network, and outputting preferred user character feature prediction results through its output sub-network; and training the two networks together based on the image aesthetic attribute feature prediction results and training labels and on the preferred user character feature prediction results and training labels.
The image aesthetic attribute feature extraction network and the preference user character feature extraction network share a basic sub-network and respectively comprise corresponding output sub-networks. The number of base subnetworks may be one or two. When the number of the basic sub-networks is one, the image aesthetic attribute feature extraction network and the preference user character feature extraction network share one basic sub-network, and the basic sub-network is connected with two output sub-networks; when the number of the basic sub-networks is two, the image aesthetic attribute feature extraction network and the preferred user character feature extraction network respectively correspond to one basic sub-network, the two basic sub-networks are respectively connected with one output sub-network, and the model structures of the two basic sub-networks are the same.
In one embodiment, the base sub-network may adopt the convolutional layers of ResNet-18 followed by a global average pooling operation feeding a fully connected network. The two output sub-networks may adopt fully connected networks. For convenience of calculation, the feature values of the image aesthetic attribute features and the preferred user character features may be normalized to [0, 1]; therefore, a Sigmoid function may be adopted as the activation function of the output sub-networks.
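A sketch of the single-base-subnetwork variant in PyTorch, assuming the ResNet-18 convolutional base with global average pooling and the two sigmoid-activated fully connected output sub-networks described above; class and variable names are illustrative assumptions.

```python
# Sketch of the shared-base, two-head extractor; names are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class CoTrainedExtractor(nn.Module):
    def __init__(self, num_attributes: int = 11, num_traits: int = 5):
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Shared base sub-network: ResNet-18 convolutions + global average pooling.
        self.base = nn.Sequential(*list(backbone.children())[:-1], nn.Flatten())
        dim = backbone.fc.in_features
        # Output sub-network for image aesthetic attribute features.
        self.attr_head = nn.Sequential(nn.Linear(dim, num_attributes), nn.Sigmoid())
        # Output sub-network for preferred user character features.
        self.trait_head = nn.Sequential(nn.Linear(dim, num_traits), nn.Sigmoid())

    def forward(self, x: torch.Tensor):
        h = self.base(x)
        return self.attr_head(h), self.trait_head(h)
```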
Each first image sample in the first image sample set has an image aesthetic attribute feature training label. The first image sample set may specifically adopt the AADB (Aesthetics and Attributes Database) image data set, which includes 1000 images, each labeled by multiple sample users with 11 image aesthetic characteristics: interesting content, object emphasis, content repetition, brightness, color harmony, color vividness, depth of field, motion blur, rule-of-thirds composition, symmetric composition, and balanced composition.
Each second image sample in the second image sample set has a preferred user character feature training label. The second image sample set may specifically adopt the PsychoFlickr image data set, which comprises the preference images of 300 sample users, with 200 preference images per sample user. The character of each sample user is characterized by the Big Five personality traits (openness, conscientiousness, extraversion, agreeableness, and neuroticism), as determined by the BFI-10 (10-Item Big Five Inventory) psychological test.
Specifically, taking two basic sub-networks as an example, the computer device inputs a first image sample into the image aesthetic property feature extraction network, performs feature extraction on the first image sample through the basic sub-network of the image aesthetic property feature extraction network, and outputs an image aesthetic property feature prediction result through the output sub-network of the image aesthetic property feature extraction network. The computer device then inputs the second image sample into the preferred user personality feature extraction network, performs feature extraction on the second image sample through a base sub-network of the preferred user personality feature extraction network, and outputs a preferred user personality feature prediction result through an output sub-network of the preferred user personality feature extraction network. Then, the computer device trains the image aesthetic attribute feature extraction network and the preferred user character feature extraction network together based on the difference between the image aesthetic attribute feature prediction result and the image aesthetic attribute feature training label and the difference between the preferred user character feature prediction result and the preferred user character feature training label.
In one embodiment, training labels based on the image aesthetic attribute feature prediction result and the image aesthetic attribute feature, and training labels based on the preferred user character feature prediction result and the preferred user character feature, co-training an image aesthetic attribute feature extraction network and a preferred user character feature extraction network, comprises: constructing a first loss function based on the difference between the image aesthetic attribute feature prediction result and the image aesthetic attribute feature training label, and constructing a second loss function based on the difference between the preference user character feature prediction result and the preference user character feature training label; obtaining a first network gradient parameter by minimizing a first loss function, and obtaining a second network gradient parameter by minimizing a second loss function; and jointly training the image aesthetic attribute feature extraction network and the preference user character feature extraction network by combining the first network gradient parameter and the second network gradient parameter.
The network gradient parameter is a gradient for performing back propagation update on the model parameter, and can be specifically determined by continuously reducing a loss function.
The first loss function is specifically represented by the following formula:

$$L_1=\frac{1}{N}\sum_{n=1}^{N}\sum_{i=1}^{I}\left(\hat{a}_{n,i}-a_{n,i}\right)^{2}$$

where $L_1$ is the first loss function; $a_{n,i}$ is the image aesthetic attribute feature training label; $\hat{a}_{n,i}$ is the image aesthetic attribute feature prediction result; $N$ is the number of first image samples; and $I$ is the feature dimension of the image aesthetic attribute features.
The second loss function is specifically represented by the following formula:

$$L_2=\frac{1}{N}\sum_{n=1}^{N}\sum_{j=1}^{J}\left(\hat{b}_{n,j}-b_{n,j}\right)^{2}$$

where $L_2$ is the second loss function; $b_{n,j}$ is the preferred user character feature training label; $\hat{b}_{n,j}$ is the preferred user character feature prediction result; $N$ is the number of second image samples; and $J$ is the feature dimension of the preferred user character features.
Specifically, the computer device constructs a first loss function based on a difference between the image aesthetic property feature prediction result and the image aesthetic property feature training label, for example, constructs the first loss function by a euclidean distance between the image aesthetic property feature prediction result and the image aesthetic property feature training label; and constructing a second loss function based on the difference between the preference user character feature prediction result and the preference user character feature training label, for example, constructing the second loss function by using the Euclidean distance between the preference user character feature prediction result and the preference user character feature training label. Next, the computer device obtains a first network gradient parameter by minimizing the first loss function, and obtains a second network gradient parameter by minimizing the second loss function. Then, the computer device trains the image aesthetic attribute feature extraction network and the preference user character feature extraction network together by combining the first network gradient parameter and the second network gradient parameter, for example, obtains the sum of the first network gradient parameter and the second network gradient parameter, and updates the model parameters of the image aesthetic attribute feature extraction network and the preference user character feature extraction network by back propagation of the sum of the first network gradient parameter and the second network gradient parameter.
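A minimal joint training step consistent with the procedure above might look like the following sketch, reusing the hypothetical `SharedBackboneExtractor` from the earlier sketch. Backpropagating the sum of the two losses accumulates the first and second network gradient parameters in a single pass, since the gradient of a sum equals the sum of the gradients; the optimizer choice and learning rate are assumptions.

```python
import torch.nn.functional as F
from torch.optim import SGD

model = SharedBackboneExtractor()
optimizer = SGD(model.parameters(), lr=1e-3)

def joint_step(first_batch, attr_labels, second_batch, trait_labels):
    optimizer.zero_grad()
    # First loss: squared error between attribute predictions and labels.
    l1 = F.mse_loss(model(first_batch, head="attr"), attr_labels)
    # Second loss: squared error between trait predictions and labels.
    l2 = F.mse_loss(model(second_batch, head="trait"), trait_labels)
    # One backward pass on the summed losses combines the two network
    # gradient parameters, updating shared and head parameters together.
    (l1 + l2).backward()
    optimizer.step()
```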
In the embodiment, the image aesthetic attribute feature extraction network and the preferred user character feature extraction network share a basic sub-network, and the image aesthetic attribute feature extraction network and the preferred user character feature extraction network can adopt a co-training mode, so that computer resources are saved, and the model training speed is improved; in addition, the objective attribute of the image and the subjective attribute of the user are obtained in the same model, and the method has practical value for the research of personalized aesthetic evaluation.
Secondly, a training method for the generalized aesthetic evaluation network is provided below.
In one embodiment, the training step of the generalized aesthetic evaluation network includes: acquiring a fourth image sample set and a generalized aesthetic evaluation network; an aesthetic evaluation score probability distribution training label exists in each fourth image sample in the fourth image sample set; extracting an image aesthetic attribute feature sample corresponding to the fourth image sample; obtaining a generalized aesthetic evaluation prediction result according to the image aesthetic attribute characteristic sample through a generalized aesthetic evaluation network; and training labels based on the generalized aesthetic evaluation prediction result and the aesthetic evaluation score probability distribution, and training a generalized aesthetic evaluation network.
The fourth image sample set may specifically adopt the Flickr-AES (Flickr images with aesthetics annotations) image data set, in which 173 sample users performed aesthetic evaluation on 35,263 images; each image collects the aesthetic evaluation scores of 5 sample users, with scores ranging from 1 to 5. For each fourth image sample, an aesthetic evaluation score probability distribution is generated from the sample users' aesthetic evaluation scores, and the probabilities sum to 1. In one embodiment, the computer device constructs the generalized aesthetic evaluation network as a fully connected network; so that the predicted aesthetic evaluation score probabilities sum to 1, a Softmax function may be used as its activation function.
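For illustration, one way to build such a score probability distribution training label from an image's five collected scores is the following sketch (the helper name is hypothetical):

```python
from collections import Counter

def score_distribution(scores, levels=(1, 2, 3, 4, 5)):
    """Turn a list of per-user scores for one image into a probability
    distribution over the score levels 1-5; the probabilities sum to 1."""
    counts = Counter(scores)
    total = len(scores)
    return [counts.get(l, 0) / total for l in levels]

# score_distribution([4, 5, 3, 4, 4]) -> [0.0, 0.0, 0.2, 0.6, 0.2]
```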
In one embodiment, the step of extracting the image aesthetic property feature sample corresponding to the fourth image sample comprises: acquiring an image aesthetic attribute feature extraction network; inputting the fourth image sample into an image aesthetic property feature extraction network, and performing feature extraction on the fourth image sample through the image aesthetic property feature extraction network to obtain an image aesthetic property feature sample.
Specifically, the computer device inputs the fourth image sample into the image aesthetic property feature extraction network, and extracts the image aesthetic property feature sample corresponding to the fourth image sample through the image aesthetic property feature extraction network. Then, the computer device inputs the image aesthetic property characteristic sample into a generalized aesthetic evaluation network, and obtains an aesthetic evaluation score probability distribution prediction result through the generalized aesthetic evaluation network. Then, the computer device constructs a loss function based on a difference between the aesthetic evaluation score probability distribution prediction result and the aesthetic evaluation score probability distribution training label, for example, constructs a loss function through a euclidean distance between the aesthetic evaluation score probability distribution prediction result and the aesthetic evaluation score probability distribution training label; next, the computer device trains the generalized aesthetic evaluation network in a direction that minimizes the loss function.
The loss function of the generalized aesthetic evaluation network is specifically represented by the following formula:

$$L_3=\frac{1}{N}\sum_{n=1}^{N}\sum_{l=1}^{5}\left(\hat{p}_{n,l}-p_{n,l}\right)^{2}$$

where $L_3$ is the loss function of the generalized aesthetic evaluation network; $p_{n,l}$ represents the probability that the aesthetic evaluation score is $l$ in the aesthetic evaluation score probability distribution training label; $\hat{p}_{n,l}$ represents the probability that the aesthetic evaluation score is $l$ in the aesthetic evaluation score probability distribution prediction result; and $N$ is the number of fourth image samples.
It is understood that the generalized aesthetic evaluation result can be obtained by weighted summation of the aesthetic evaluation score probability distribution prediction result and the corresponding aesthetic evaluation score.
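For instance, that weighted summation is simply the expected value of the score under the predicted distribution; a sketch with a hypothetical helper name:

```python
def generalized_score(prob_dist, levels=(1, 2, 3, 4, 5)):
    # Expected aesthetic score: sum of probability * score level.
    return sum(p * l for p, l in zip(prob_dist, levels))

# generalized_score([0.0, 0.0, 0.2, 0.6, 0.2]) -> 4.0
```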
In this embodiment, the training labels are constructed from the aesthetic evaluation score probability distribution, so the generalized aesthetic evaluation network's prediction of aesthetic evaluation scores better conforms to the actual situation, improving its prediction accuracy. Further, for the personalized aesthetic evaluation correction network, a training method is provided below.
In one embodiment, the training step of the personalized aesthetic evaluation revision network comprises: acquiring a third image sample corresponding to the sample user identifier and a personalized aesthetic evaluation correction network; the third image sample has an individualized aesthetic evaluation correction parameter training label corresponding to the sample user identification; extracting an image aesthetic property characteristic sample corresponding to the third image sample; obtaining a user character feature sample corresponding to the sample user identification; obtaining a personalized aesthetic evaluation correction parameter prediction result corresponding to the sample user identifier according to the fusion result of the image aesthetic attribute characteristic sample and the user character characteristic sample through a personalized aesthetic evaluation correction network; and training a personalized aesthetic evaluation correction network based on the personalized aesthetic evaluation correction parameter prediction result and the personalized aesthetic evaluation correction parameter training label.
The third image sample has a personalized aesthetic evaluation correction parameter training label corresponding to a sample user identifier. The third image samples may specifically come from the Flickr-AES image data set, in which each sample user performed aesthetic evaluation on 105 to 171 images; each image collects the aesthetic evaluation scores of 5 sample users, with scores ranging from 1 to 5.
In one embodiment, the computer device first constructs a personalized aesthetic evaluation modification network composed of a fully connected network, and then trains the personalized aesthetic evaluation modification network by using the third image sample set and the personalized aesthetic evaluation modification parameter training labels.
In one embodiment, the third image sample corresponds to more than one sample user identification; the acquisition step of the personalized aesthetic evaluation correction parameter training label comprises the following steps: obtaining personalized aesthetic evaluation result samples of user identifications of all samples corresponding to the third image sample; obtaining a personalized aesthetic evaluation mean value sample based on each personalized aesthetic evaluation result sample; and obtaining personalized aesthetic evaluation correction parameter samples corresponding to the user identifications of the samples based on the personalized aesthetic evaluation result samples and the personalized aesthetic evaluation mean value samples, and taking the personalized aesthetic evaluation correction parameter samples as personalized aesthetic evaluation correction parameter training labels corresponding to the third image samples and the user identifications of the samples.
Specifically, a third image sample has an aesthetic evaluation score corresponding to each of its sample user identifiers. For a given third image sample, the mean of its aesthetic evaluation scores is obtained; for a given sample user identifier, the difference between the corresponding aesthetic evaluation score and this mean is the personalized aesthetic evaluation correction parameter of that sample user identifier.
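Concretely, the label construction just described amounts to computing residuals from the per-image mean; a sketch with hypothetical names:

```python
def correction_labels(scores_by_user):
    """Given one image's scores keyed by sample user id, return each
    user's personalized aesthetic evaluation correction parameter:
    that user's score minus the mean score for the image."""
    mean_score = sum(scores_by_user.values()) / len(scores_by_user)
    return {uid: s - mean_score for uid, s in scores_by_user.items()}

# correction_labels({"u1": 5, "u2": 3, "u3": 4})
# -> {"u1": 1.0, "u2": -1.0, "u3": 0.0}
```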
Specifically, the computer equipment inputs the third image sample into an image aesthetic property feature extraction network, and performs feature extraction on the third image sample through the image aesthetic property feature extraction network to obtain an image aesthetic property feature sample. Next, the computer device obtains a sample of the user personality characteristic corresponding to the sample user identification. And then, the computer equipment obtains a personalized aesthetic evaluation correction parameter prediction result corresponding to the sample user identifier according to the fusion result of the image aesthetic attribute feature sample and the user character feature sample through a personalized aesthetic evaluation correction network. And then, the computer device constructs a loss function based on the difference between the personalized aesthetic evaluation correction parameter prediction result and the personalized aesthetic evaluation correction parameter training label, and trains the personalized aesthetic evaluation correction network according to the direction of the minimum loss function.
The loss function of the personalized aesthetic evaluation correction network is specifically represented by the following formula:

$$L_4=\frac{1}{N}\sum_{n=1}^{N}\left(f_n-e_n\right)^{2}$$

where $L_4$ is the loss function of the personalized aesthetic evaluation correction network; $e_n$ is the personalized aesthetic evaluation correction parameter training label; $f_n$ is the personalized aesthetic evaluation correction parameter prediction result; and $N$ is the number of third image samples.
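One plausible realization of the fully connected correction network and this loss is sketched below; the patent does not pin down the fusion operator at this point, so concatenation of the two feature vectors, the hidden width, and all names are assumptions.

```python
import torch
import torch.nn as nn

class CorrectionNetwork(nn.Module):
    """Sketch: maps fused image aesthetic attribute features (11-dim)
    and user character features (5-dim) to a scalar correction parameter."""

    def __init__(self, attr_dim=11, trait_dim=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim + trait_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, attr_feat, trait_feat):
        # Fusion by concatenation is an assumption, not a patent claim.
        fused = torch.cat([attr_feat, trait_feat], dim=-1)
        return self.net(fused).squeeze(-1)

# Training minimizes the squared error between prediction f_n and
# label e_n, matching the L4 loss above:
loss_fn = nn.MSELoss()
```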
In this embodiment, the correction parameter of the personalized aesthetic relative to the generalized aesthetic is modeled from the individual user's aesthetic character features and the image's aesthetic quality, which accords with the objective fact that users with different character features differ in their aesthetics.
As this application involves user character feature samples, a method for acquiring the user character feature sample corresponding to a sample user identifier is provided below.
In one embodiment, obtaining a sample of user personality characteristics corresponding to a sample user identification comprises: acquiring a preference user character feature sample corresponding to the third image sample; obtaining a personalized aesthetic evaluation result sample corresponding to the sample user identifier; and obtaining a user character characteristic sample corresponding to the sample user identification based on the personalized aesthetic evaluation result sample and the preference user character characteristic sample.
In one embodiment, the sample user identification corresponds to at least two third image samples; the personalized aesthetic evaluation result sample is a personalized aesthetic evaluation score sample; obtaining a user character characteristic sample corresponding to the sample user identification based on the personalized aesthetic evaluation result sample and the preference user character characteristic sample, wherein the user character characteristic sample comprises the following steps: obtaining a personalized aesthetic evaluation maximum value sample and a personalized aesthetic evaluation minimum value sample of the at least two third image samples; for one third image sample, obtaining the preference degree of the sample user identifier to the third image sample based on the personalized aesthetic evaluation score sample, the personalized aesthetic evaluation maximum value sample and the personalized aesthetic evaluation minimum value sample which correspond to the sample user identifier; and obtaining the user character feature sample corresponding to the sample user identification based on the preference degree of the sample user identification to each third image sample and the preference user character feature sample corresponding to each third image sample.
In one embodiment, the feature dimension of the preferred user personality feature sample is a plurality; obtaining a user character feature sample corresponding to the sample user identification based on the preference degree and the preference user character feature sample, wherein the user character feature sample comprises: and according to the preference degree, adjusting the characteristic values of the preference user character characteristic samples in all dimensions to obtain the user character characteristic samples corresponding to the sample user identifications.
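A hedged sketch of the preference degree and per-dimension adjustment described above: min–max normalization of the user's scores and a preference-weighted average of the per-image preferred user character features are assumptions consistent with the description, not the patent's exact formulas, and the function and variable names are hypothetical.

```python
import numpy as np

def user_traits(scores, trait_samples):
    """scores: a user's personalized aesthetic evaluation scores for their
    images; trait_samples: matrix of the corresponding preferred user
    character feature samples, one row per image. Preference degree is
    taken as the min-max normalized score, and the user character feature
    as the preference-weighted average over all feature dimensions."""
    scores = np.asarray(scores, dtype=float)
    traits = np.asarray(trait_samples, dtype=float)
    pref = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)
    return (pref[:, None] * traits).sum(axis=0) / (pref.sum() + 1e-8)
```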
The above process of obtaining the user personality characteristic sample corresponding to the sample user identifier may specifically refer to the method for obtaining the user personality characteristic corresponding to the target user identifier, and is not described herein again.
This application intends to perform personalized aesthetic evaluation on images by combining the user's subjective aesthetics with the image's objective aesthetic quality; since user character is an important factor influencing a user's aesthetic preference, it conceives of introducing user-character-related parameters into the personalized aesthetic evaluation task.
There are currently image sample sets, such as the PsychoFlickr image data set, that are simultaneously labeled with preferred user character features and aesthetic evaluation scores. The PsychoFlickr image data set comprises the preference images of 300 sample users, with 200 preference images per sample user, each labeled by the sample user with an aesthetic evaluation score. The character of each sample user is characterized by the Big Five personality traits (openness, conscientiousness, extraversion, agreeableness, and neuroticism), as determined by the BFI-10 (10-Item Big Five Inventory) psychological test.
First, a sample user's aesthetic evaluation score for an image sample is tied to that user's own scoring standard: two sample users may have the same aesthetic feeling about an image sample yet give different aesthetic evaluation scores, so the raw scores may not accurately reflect the aesthetic feelings of different sample users. In this application, for each sample user, the preference degree for each image sample is determined from the aesthetic evaluation scores of the one or more image samples corresponding to that user; because the preference degree removes the differences between the scoring standards of different sample users, it can reflect their aesthetic feelings accurately.
Second, the psychological test itself carries a certain subjectivity. Updating the sample user's general character features according to the preference degree associates the updated aesthetic character features with the user's aesthetic feelings, which reduces the subjective error of the psychological test and improves the accuracy of the user character features at the aesthetic level.
The application also provides an application scene, and the application scene applies the image processing method. The application scene can be an image recommendation scene, an image enhancement scene, an image retrieval scene, a personal album management scene and the like. Specifically, referring to fig. 8, the application of the image processing method to the application scenario is as follows:
step 802, acquiring an image to be processed and a target user identifier.
Step 804, obtaining an image aesthetic property feature extraction network, inputting the image to be processed into the image aesthetic property feature extraction network, and outputting the image aesthetic property feature through the image aesthetic property feature extraction network.
And 806, acquiring a generalized aesthetic evaluation network, inputting the aesthetic attribute characteristics of the image into the generalized aesthetic evaluation network, and outputting a generalized aesthetic evaluation result through the generalized aesthetic evaluation network.
And 808, acquiring a personalized aesthetic evaluation correction network, and acquiring a personalized aesthetic evaluation correction parameter according to the fusion result of the image aesthetic attribute characteristics and the user character characteristics through the personalized aesthetic evaluation correction network.
The obtaining mode of the user character features is as follows: the method comprises the steps that computer equipment obtains more than one reference image corresponding to a target user identifier, and each reference image is marked with an individualized aesthetic evaluation score corresponding to the target user identifier; then, the computer equipment acquires a preferred user character feature extraction network, respectively inputs more than one reference image into the preferred user character feature extraction network, and obtains the preferred user character features corresponding to the more than one reference image through the preferred user character feature extraction network; then, the computer equipment acquires the maximum value and the minimum value of the personalized aesthetic evaluation corresponding to each reference image and the target user identification; then, for one reference image, the computer equipment obtains the preference degree of the target user identifier to the reference image based on the personalized aesthetic evaluation score, the personalized aesthetic evaluation maximum value and the personalized aesthetic evaluation minimum value corresponding to the target user identifier; and then, the computer equipment obtains the user character characteristics corresponding to the target user identification based on the preference degree of the target user identification to each reference image and the corresponding preference user character characteristics of each reference image.
And step 810, determining a personalized aesthetic evaluation result corresponding to the target user identification based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation correction parameter.
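Putting steps 802 through 810 together, a schematic inference pipeline might read as follows; `extractor`, `general_net`, and `correction_net` stand for the sketches given elsewhere in this description (with `general_net` assumed to be a fully connected network with a normalized probability output), `generalized_score` is the expected-score helper sketched earlier, and the single-image batching is likewise an assumption.

```python
def personalized_evaluation(image, trait_feat, extractor, general_net,
                            correction_net):
    """Sketch of steps 802-810: extract aesthetic attribute features,
    predict the generalized score, then add the user's personalized
    aesthetic evaluation correction parameter."""
    attr_feat = extractor(image, head="attr")          # step 804
    prob_dist = general_net(attr_feat)                 # step 806
    base_score = generalized_score(prob_dist.squeeze(0).tolist())
    correction = correction_net(attr_feat, trait_feat) # step 808
    return base_score + correction.item()              # step 810
```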
In one embodiment, referring to fig. 7(a), fig. 7(a) is a schematic diagram of training of an image aesthetic property feature extraction network in one embodiment. For the image aesthetic property feature extraction network and the preferred user character feature extraction network, the first image sample 702 is input into the image aesthetic property feature extraction network, and the image aesthetic property feature prediction result 704 is output through the image aesthetic property feature extraction network. The second image sample 706 is input to the preferred user personality feature extraction network, through which the preferred user personality feature prediction results 708 are output. Constructing a first loss function based on the difference between the image aesthetic attribute feature prediction result 704 and the image aesthetic attribute feature training label, constructing a second loss function based on the difference between the preference user character feature prediction result 708 and the preference user character feature training label, obtaining a first network gradient parameter by minimizing the first loss function, obtaining a second network gradient parameter by minimizing the second loss function, and jointly training an image aesthetic attribute feature extraction network and a preference user character feature extraction network by combining the first network gradient parameter and the second network gradient parameter.
In one embodiment, referring to FIG. 7(b), FIG. 7(b) is a training diagram of a generalized aesthetic evaluation network in one embodiment. The fourth image sample 710 is input into the image aesthetic property feature extraction network, and the image aesthetic property feature sample 712 is output through the image aesthetic property feature extraction network. Next, the image aesthetic property feature sample 712 is input into a generalized aesthetic evaluation network, and an aesthetic evaluation score probability distribution prediction result 714 is obtained through the generalized aesthetic evaluation network. Next, a loss function is constructed based on the difference between the aesthetic evaluation score probability distribution prediction result 714 and the aesthetic evaluation score probability distribution training labels, and the generalized aesthetic evaluation network is trained in the direction of minimizing the loss function.
In one embodiment, referring to FIG. 7(c), FIG. 7(c) is a training diagram of a personalized aesthetic evaluation revision network in one embodiment. The third image sample 716 is input into the image aesthetic property feature extraction network, and the image aesthetic property feature sample 718 is output through the image aesthetic property feature extraction network. Next, the third image sample 716 is input into the preferred user character feature extraction network, the preferred user character feature sample 720 is output through the preferred user character feature extraction network, and a corresponding user character feature sample 722 of the sample user identification is obtained based on the preferred user character feature sample 720. Then, through the personalized aesthetic evaluation correction network, according to the fusion result of the image aesthetic attribute feature sample 718 and the user character feature sample 722, a personalized aesthetic evaluation correction parameter prediction result corresponding to the sample user identifier is obtained. And then constructing a loss function based on the difference between the personalized aesthetic evaluation correction parameter prediction result and the personalized aesthetic evaluation correction parameter training label, and training a personalized aesthetic evaluation correction network according to the direction of the minimized loss function.
It should be understood that, although the steps in the flowcharts of figs. 2, 5, and 8 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, these steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2, 5, and 8 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, an image processing apparatus is provided, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, and specifically includes: an acquisition module 902, an extraction module 904, a generalized aesthetic evaluation module 906, a modification parameter determination module 908, and a personalized aesthetic evaluation module 910, wherein:
an obtaining module 902, configured to obtain an image to be processed and a target user identifier;
an extracting module 904, configured to extract image aesthetic attribute features corresponding to the image to be processed;
a generalized aesthetic evaluation module 906, configured to determine a generalized aesthetic evaluation result of the image to be processed according to the image aesthetic attribute characteristics;
a modification parameter determination module 908, configured to determine, according to the image aesthetic attribute feature and the user personality feature corresponding to the target user identifier, a personalized aesthetic evaluation modification parameter corresponding to the target user identifier;
and the personalized aesthetic evaluation module 910 is configured to determine a personalized aesthetic evaluation result corresponding to the target user identifier based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation modification parameter.
In one embodiment, the extraction module 904 is further configured to: acquiring an image aesthetic attribute feature extraction network; inputting the image to be processed into an image aesthetic property feature extraction network, and outputting the image aesthetic property feature through the image aesthetic property feature extraction network.
In one embodiment, the image aesthetic attribute feature extraction network and the preference user character feature extraction network are trained together; the image aesthetic attribute feature extraction network and the preference user character feature extraction network share a basic sub-network and respectively comprise corresponding output sub-networks; the image processing apparatus further comprises a training module configured to: acquiring a first image sample set, a second image sample set, an image aesthetic attribute feature extraction network and a preference user character feature extraction network; each first image sample in the first image sample set respectively has an image aesthetic attribute feature training label, and each second image sample in the second image sample set respectively has a preference user character feature training label; inputting the first image sample into an image aesthetic property feature extraction network, performing feature extraction on the first image sample through a basic sub-network of the image aesthetic property feature extraction network, and outputting an image aesthetic property feature prediction result through an output sub-network of the image aesthetic property feature extraction network; inputting the second image sample into a preferred user character feature extraction network, performing feature extraction on the second image sample through a basic sub-network of the preferred user character feature extraction network, and outputting a preferred user character feature prediction result through an output sub-network of the preferred user character feature extraction network; and training an image aesthetic attribute feature extraction network and a preference user character feature extraction network together based on the image aesthetic attribute feature prediction result and the image aesthetic attribute feature training label, the preference user character feature prediction result and the preference user character feature training label.
In one embodiment, the training module is further to: constructing a first loss function based on the difference between the image aesthetic attribute feature prediction result and the image aesthetic attribute feature training label, and constructing a second loss function based on the difference between the preference user character feature prediction result and the preference user character feature training label; obtaining a first network gradient parameter by minimizing a first loss function, and obtaining a second network gradient parameter by minimizing a second loss function; and jointly training the image aesthetic attribute feature extraction network and the preference user character feature extraction network by combining the first network gradient parameter and the second network gradient parameter.
In one embodiment, the modification parameter determination module 908 is further configured to: acquiring a personalized aesthetic evaluation correction network; and obtaining a personalized aesthetic evaluation correction parameter according to the fusion result of the image aesthetic attribute characteristics and the user character characteristics through a personalized aesthetic evaluation correction network.
In one embodiment, the training module is further to: acquiring a third image sample corresponding to the sample user identifier and a personalized aesthetic evaluation correction network; the third image sample has an individualized aesthetic evaluation correction parameter training label corresponding to the sample user identification; extracting an image aesthetic property characteristic sample corresponding to the third image sample; obtaining a user character feature sample corresponding to the sample user identification; obtaining a personalized aesthetic evaluation correction parameter prediction result corresponding to the sample user identifier according to the fusion result of the image aesthetic attribute characteristic sample and the user character characteristic sample through a personalized aesthetic evaluation correction network; and training a personalized aesthetic evaluation correction network based on the personalized aesthetic evaluation correction parameter prediction result and the personalized aesthetic evaluation correction parameter training label.
In one embodiment, the third image sample corresponds to more than one sample user identification; the training module is further configured to: obtaining personalized aesthetic evaluation result samples of user identifications of all samples corresponding to the third image sample; obtaining a personalized aesthetic evaluation mean value sample based on each personalized aesthetic evaluation result sample; and obtaining personalized aesthetic evaluation correction parameter samples corresponding to the user identifications of the samples based on the personalized aesthetic evaluation result samples and the personalized aesthetic evaluation mean value samples, and taking the personalized aesthetic evaluation correction parameter samples as personalized aesthetic evaluation correction parameter training labels corresponding to the third image samples and the user identifications of the samples.
In one embodiment, the training module is further to: acquiring a preference user character feature sample corresponding to the third image sample; obtaining a personalized aesthetic evaluation result sample corresponding to the sample user identifier; and obtaining a user character characteristic sample corresponding to the sample user identification based on the personalized aesthetic evaluation result sample and the preference user character characteristic sample.
In one embodiment, the training module is further to: acquiring an image aesthetic attribute feature extraction network and a preference user character feature extraction network; inputting the third image sample into an image aesthetic property feature extraction network, performing feature extraction on the third image sample through a basic sub-network of the image aesthetic property feature extraction network, and outputting the image aesthetic property feature sample through an output sub-network of the image aesthetic property feature extraction network; inputting the third image sample into a preferred user character feature extraction network, performing feature extraction on the third image sample through a basic sub-network of the preferred user character feature extraction network, and outputting a preferred user character feature sample through an output sub-network of the preferred user character feature extraction network.
In one embodiment, the sample user identification corresponds to at least two third image samples; the personalized aesthetic evaluation result sample is a personalized aesthetic evaluation score sample; the training module is further configured to: obtaining a personalized aesthetic evaluation maximum value sample and a personalized aesthetic evaluation minimum value sample of the at least two third image samples; for one third image sample, obtaining the preference degree of the sample user identifier to the third image sample based on the personalized aesthetic evaluation score sample, the personalized aesthetic evaluation maximum value sample and the personalized aesthetic evaluation minimum value sample which correspond to the sample user identifier; and obtaining the user character feature sample corresponding to the sample user identification based on the preference degree of the sample user identification to each third image sample and the preference user character feature sample corresponding to each third image sample.
In one embodiment, the feature dimension of the preferred user personality feature sample is a plurality; the training module is further configured to: and according to the preference degree, adjusting the characteristic values of the preference user character characteristic samples in all dimensions to obtain the user character characteristic samples corresponding to the sample user identifications.
In one embodiment, the image to be processed is plural; the personalized aesthetic evaluation result is a personalized aesthetic evaluation score; the image processing apparatus further comprises an application module for: acquiring the personalized aesthetic evaluation score of the target user identification on each image to be processed; selecting a preset number of target images from the images to be processed according to the personalized aesthetic evaluation scores; and outputting the target image to a terminal where the target user identification is located.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not described herein again. The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In the image processing apparatus, the generalized aesthetic evaluation result is obtained by evaluating the image's aesthetic attribute features and reflects the image's aesthetic quality at an objective level. This generalized result is then corrected by the aesthetic deviation corresponding to the user character features, so that the corrected personalized aesthetic evaluation result approaches the evaluation the user would give based on their own aesthetics. In other words, the image is evaluated by combining the user's subjective aesthetics with the image's objective aesthetic quality, so that the personalized aesthetic evaluation result conforms to the individual user's aesthetic preference, improving its accuracy.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing image processing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other media used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its protection scope. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. An image processing method, characterized in that the method comprises:
acquiring an image to be processed and a target user identifier;
extracting image aesthetic attribute characteristics corresponding to the image to be processed;
determining a generalized aesthetic evaluation result of the image to be processed according to the image aesthetic attribute characteristics;
according to the image aesthetic attribute characteristics and the user character characteristics corresponding to the target user identification, determining personalized aesthetic evaluation correction parameters corresponding to the target user identification;
and determining a personalized aesthetic evaluation result corresponding to the target user identification based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation correction parameter.
2. The method according to claim 1, wherein the extracting of the image aesthetic property features corresponding to the image to be processed comprises:
acquiring an image aesthetic attribute feature extraction network;
and inputting the image to be processed into the image aesthetic property feature extraction network, and outputting the image aesthetic property feature through the image aesthetic property feature extraction network.
3. The method of claim 2, wherein the image aesthetic attribute feature extraction network is trained with a preferred user personality feature extraction network; the image aesthetic attribute feature extraction network and the preference user character feature extraction network share a basic sub-network and respectively comprise corresponding output sub-networks; the step of co-training the image aesthetic attribute feature extraction network and the preferred user personality feature extraction network comprises:
acquiring a first image sample set, a second image sample set, the image aesthetic attribute feature extraction network and the preference user character feature extraction network; each first image sample in the first image sample set respectively has an image aesthetic attribute feature training label, and each second image sample in the second image sample set respectively has a preference user character feature training label;
inputting the first image sample into the image aesthetic property feature extraction network, performing feature extraction on the first image sample through a basic sub-network of the image aesthetic property feature extraction network, and outputting an image aesthetic property feature prediction result through an output sub-network of the image aesthetic property feature extraction network;
inputting the second image sample into the preferred user character feature extraction network, performing feature extraction on the second image sample through a basic sub-network of the preferred user character feature extraction network, and outputting a preferred user character feature prediction result through an output sub-network of the preferred user character feature extraction network;
and jointly training the image aesthetic attribute feature extraction network and the preference user character feature extraction network based on the image aesthetic attribute feature prediction result and the image aesthetic attribute feature training label as well as the preference user character feature prediction result and the preference user character feature training label.
4. The method according to claim 3, wherein the co-training of the image aesthetic property feature extraction network and the preferred user character feature extraction network based on the image aesthetic property feature prediction result and the image aesthetic property feature training label, and the preferred user character feature prediction result and the preferred user character feature training label, comprises:
constructing a first loss function based on the difference between the image aesthetic attribute feature prediction result and the image aesthetic attribute feature training label, and constructing a second loss function based on the difference between the preference user character feature prediction result and the preference user character feature training label;
obtaining a first network gradient parameter by minimizing the first loss function, and obtaining a second network gradient parameter by minimizing the second loss function;
and jointly training the image aesthetic attribute feature extraction network and the preference user character feature extraction network by combining the first network gradient parameter and the second network gradient parameter.
5. The method according to claim 1, wherein the determining of the personalized aesthetic evaluation modification parameter corresponding to the target user identifier according to the image aesthetic attribute feature and the user character feature corresponding to the target user identifier comprises:
acquiring a personalized aesthetic evaluation correction network;
and obtaining the personalized aesthetic evaluation correction parameters according to the fusion result of the image aesthetic attribute characteristics and the user character characteristics through the personalized aesthetic evaluation correction network.
6. The method of claim 5, wherein the step of training the personalized aesthetic evaluation modification network comprises:
acquiring a third image sample corresponding to a sample user identifier and the personalized aesthetic evaluation correction network; the third image sample has an individualized aesthetic evaluation correction parameter training label corresponding to the sample user identification;
extracting an image aesthetic property characteristic sample corresponding to the third image sample;
obtaining a user character feature sample corresponding to the sample user identification;
obtaining a personalized aesthetic evaluation correction parameter prediction result corresponding to the sample user identifier according to the fusion result of the image aesthetic attribute feature sample and the user character feature sample through the personalized aesthetic evaluation correction network;
and training the personalized aesthetic evaluation correction network based on the personalized aesthetic evaluation correction parameter prediction result and the personalized aesthetic evaluation correction parameter training label.
7. The method of claim 6, wherein the third image sample corresponds to more than one sample user identification;
the step of obtaining the personalized aesthetic evaluation correction parameter training label comprises the following steps:
obtaining personalized aesthetic evaluation result samples of the user identifications of the samples corresponding to the third image sample;
obtaining a personalized aesthetic evaluation mean value sample based on each personalized aesthetic evaluation result sample;
and obtaining personalized aesthetic evaluation correction parameter samples corresponding to the user identifications of the samples based on the personalized aesthetic evaluation result samples and the personalized aesthetic evaluation mean value samples, and taking the personalized aesthetic evaluation correction parameter samples as personalized aesthetic evaluation correction parameter training labels corresponding to the third image samples and the user identifications of the samples.
8. The method of claim 6, wherein said obtaining a sample of user personality characteristics corresponding to said sample user identification comprises:
acquiring a preference user character feature sample corresponding to the third image sample;
obtaining a personalized aesthetic evaluation result sample corresponding to the sample user identifier;
and obtaining a user character feature sample corresponding to the sample user identification based on the personalized aesthetic evaluation result sample and the preference user character feature sample.
9. The method according to claim 8, wherein the step of obtaining the corresponding image aesthetic property feature sample and preferred user character feature sample of the third image sample comprises:
acquiring an image aesthetic attribute feature extraction network and a preference user character feature extraction network;
inputting the third image sample into the image aesthetic property feature extraction network, performing feature extraction on the third image sample through a basic sub-network of the image aesthetic property feature extraction network, and outputting the image aesthetic property feature sample through an output sub-network of the image aesthetic property feature extraction network;
inputting the third image sample into the preferred user character feature extraction network, performing feature extraction on the third image sample through a basic sub-network of the preferred user character feature extraction network, and outputting the preferred user character feature sample through an output sub-network of the preferred user character feature extraction network.
10. The method of claim 8, wherein the sample user identification corresponds to at least two third image samples; the personalized aesthetic evaluation result sample is a personalized aesthetic evaluation score sample;
the obtaining a user character feature sample corresponding to the sample user identifier based on the personalized aesthetic evaluation result sample and the preferred user character feature sample comprises:
obtaining a personalized aesthetic evaluation maximum value sample and a personalized aesthetic evaluation minimum value sample of at least two third image samples;
for one third image sample, obtaining the preference degree of the sample user identifier to the third image sample based on the personalized aesthetic evaluation score sample, the personalized aesthetic evaluation maximum value sample and the personalized aesthetic evaluation minimum value sample corresponding to the sample user identifier;
and obtaining a user character feature sample corresponding to the sample user identifier based on the preference degree of the sample user identifier to each third image sample and the corresponding preference user character feature sample of each third image sample.
11. The method of claim 10, wherein the sample of preferred user characteristics has a plurality of characteristic dimensions;
the obtaining a user personality characteristic sample corresponding to the sample user identifier based on the preference degree and the preference user personality characteristic sample comprises:
and adjusting the characteristic value of the preference user character characteristic sample in each dimension according to the preference degree to obtain the user character characteristic sample corresponding to the sample user identifier.
12. The method of claim 1, wherein there are a plurality of images to be processed, and the personalized aesthetic evaluation result is a personalized aesthetic evaluation score;
the method further comprises:
acquiring the personalized aesthetic evaluation score of the target user identifier for each image to be processed;
selecting a preset number of target images from the images to be processed according to the personalized aesthetic evaluation scores;
and outputting the target images to a terminal where the target user identifier is located.
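Claim 12 amounts to selecting the top-scoring images for the target user. A straightforward sketch (function and variable names are hypothetical):

```python
def select_target_images(images, personalized_scores, preset_number):
    """Pick the preset number of images with the highest personalized
    aesthetic evaluation scores, per claim 12."""
    ranked = sorted(zip(personalized_scores, images),
                    key=lambda pair: pair[0], reverse=True)
    return [image for _, image in ranked[:preset_number]]

# Example: recommend the top 2 of 4 candidate images to the target user's terminal.
best = select_target_images(["a.jpg", "b.jpg", "c.jpg", "d.jpg"],
                            [6.1, 8.7, 4.2, 7.9], preset_number=2)
print(best)  # ['b.jpg', 'd.jpg']
```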
13. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire an image to be processed and a target user identifier;
an extraction module, configured to extract an image aesthetic attribute feature corresponding to the image to be processed;
a generalized aesthetic evaluation module, configured to determine a generalized aesthetic evaluation result of the image to be processed according to the image aesthetic attribute feature;
a correction parameter determining module, configured to determine a personalized aesthetic evaluation correction parameter corresponding to the target user identifier according to the image aesthetic attribute feature and a user character feature corresponding to the target user identifier;
and a personalized aesthetic evaluation module, configured to determine a personalized aesthetic evaluation result corresponding to the target user identifier based on the generalized aesthetic evaluation result and the personalized aesthetic evaluation correction parameter.
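Claim 13 specifies module interfaces rather than implementations. One way to wire the modules together, with each module left as an injected callable placeholder:

```python
class ImageProcessingApparatus:
    """Sketch of the module structure in claim 13; every module is a
    caller-supplied callable, since the claim defines roles, not internals."""
    def __init__(self, extract, generalize, correct, personalize):
        self.extract = extract          # extraction module
        self.generalize = generalize    # generalized aesthetic evaluation module
        self.correct = correct          # correction parameter determining module
        self.personalize = personalize  # personalized aesthetic evaluation module

    def evaluate(self, image, user_character_feature):
        features = self.extract(image)
        general_result = self.generalize(features)
        correction = self.correct(features, user_character_feature)
        return self.personalize(general_result, correction)
```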
14. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 12.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.
CN202011015545.1A 2020-09-24 2020-09-24 Image processing method, image processing device, computer equipment and computer readable storage medium Pending CN112102304A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011015545.1A CN112102304A (en) 2020-09-24 2020-09-24 Image processing method, image processing device, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011015545.1A CN112102304A (en) 2020-09-24 2020-09-24 Image processing method, image processing device, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112102304A (en) 2020-12-18

Family

ID=73755539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011015545.1A Pending CN112102304A (en) 2020-09-24 2020-09-24 Image processing method, image processing device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112102304A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7078307B1 (en) 2022-01-14 2022-05-31 望 窪田 Individualization of learning model
JP2023103675A (en) * 2022-01-14 2023-07-27 望 窪田 Individualization of learning model

Similar Documents

Publication Title
US11334628B2 (en) Dressing recommendation method and dressing recommendation apparatus
CN110489582B (en) Method and device for generating personalized display image and electronic equipment
CN112330684B (en) Object segmentation method and device, computer equipment and storage medium
US11966829B2 (en) Convolutional artificial neural network based recognition system in which registration, search, and reproduction of image and video are divided between and performed by mobile device and server
CN110765882A (en) Video tag determination method, device, server and storage medium
CN113641835B (en) Multimedia resource recommendation method and device, electronic equipment and medium
CN116310318B (en) Interactive image segmentation method, device, computer equipment and storage medium
CN116580257A (en) Feature fusion model training and sample retrieval method and device and computer equipment
CN110866469A (en) Human face facial features recognition method, device, equipment and medium
CN113536105A (en) Recommendation model training method and device
CN112131261A (en) Community query method and device based on community network and computer equipment
CN111860484B (en) Region labeling method, device, equipment and storage medium
CN113204659A (en) Label classification method and device for multimedia resources, electronic equipment and storage medium
CN113836303A (en) Text type identification method and device, computer equipment and medium
CN112116589A (en) Method, device and equipment for evaluating virtual image and computer readable storage medium
CN117726884B (en) Training method of object class identification model, object class identification method and device
CN117078790A (en) Image generation method, device, computer equipment and storage medium
CN115984930A (en) Micro expression recognition method and device and micro expression recognition model training method
CN111598000A (en) Face recognition method, device, server and readable storage medium based on multiple tasks
CN111401193A (en) Method and device for obtaining expression recognition model and expression recognition method and device
CN115130493A (en) Face deformation recommendation method, device, equipment and medium based on image recognition
CN112102304A (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN116701706B (en) Data processing method, device, equipment and medium based on artificial intelligence
WO2024051146A1 (en) Methods, systems, and computer-readable media for recommending downstream operator
CN116956183A (en) Multimedia resource recommendation method, model training method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination