CN111062435A - Image analysis method and device and electronic equipment - Google Patents

Image analysis method and device and electronic equipment

Info

Publication number
CN111062435A
Authority
CN
China
Prior art keywords
analyzed
image
role
matched
keywords
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911286023.2A
Other languages
Chinese (zh)
Inventor
金超逸
周悦成
朱佳静
董桐辉
陆祁
周寻
孙斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201911286023.2A priority Critical patent/CN111062435A/en
Publication of CN111062435A publication Critical patent/CN111062435A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Abstract

Embodiments of the application provide an image analysis method and apparatus, and an electronic device. An image to be analyzed and role keywords are acquired for each person to be matched, where the role keywords represent at least one of the image, the quality and the character of a target role; the matching degree between each image to be analyzed and the role keywords is calculated through a pre-trained matching model; and the persons to be matched are ranked according to the matching degrees, so that persons to be matched are recommended for the target role. By analyzing the images to be analyzed of the persons to be matched, determining the matching degree between each image to be analyzed and the role keywords, and ranking the persons to be matched according to the matching degrees, persons to be matched are recommended for the target role, realizing automatic artist recommendation for a specified role. Moreover, because role recommendation is performed according to images of the persons to be matched rather than according to their description texts, the recommendation is more credible and the recommendation result is more intuitive.

Description

Image analysis method and device and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image analysis method and apparatus, and an electronic device.
Background
With the improvement of people's living standards, the entertainment industry has developed rapidly. In the film and television industry, reasonable casting of actors is key to ensuring the quality of a production. In the existing casting process, roles are selected manually by staff such as the director and the producer, but manual casting involves a heavy workload, so it is desirable to automatically recommend artists for specified roles.
Disclosure of Invention
The embodiment of the application aims to provide an image analysis method, an image analysis device and electronic equipment, so that artists can be automatically recommended for specified roles. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides an image analysis method, where the method includes:
acquiring images to be analyzed and role keywords of each person to be matched, wherein the role keywords represent at least one of the image, the quality and the character of a target role;
respectively calculating the matching degree of each image to be analyzed and the role keywords through a pre-trained matching model, wherein the pre-trained matching model is obtained by utilizing a sample image marked with a preset dimension direction for training;
and sequencing the personnel to be matched according to the matching degrees, so as to recommend the personnel to be matched to the target role.
Optionally, the calculating, through a pre-trained matching model, the matching degree between each image to be analyzed and the role keyword respectively includes:
respectively extracting personnel characteristics of the personnel to be matched in each image to be analyzed through a pre-trained matching model;
analyzing the personnel characteristics, determining the direction of each image to be analyzed in each preset dimension, and obtaining the actual direction of each image to be analyzed;
and determining the matching degree of each image to be analyzed and the role keywords according to the actual direction of each image to be analyzed, the role keywords and a preset corresponding relation, wherein the preset corresponding relation represents the corresponding relation between the keywords and the preset dimension direction.
Optionally, the determining, according to the actual direction of each image to be analyzed, the role keyword, and a preset corresponding relationship, a matching degree between each image to be analyzed and the role keyword includes:
determining the direction of the role key words in each preset dimension according to the preset corresponding relation to obtain the reference direction of the role key words;
and aiming at each image to be analyzed, comparing the actual direction of the image to be analyzed with the reference direction of the character keyword, and determining the score of the image to be analyzed as the matching degree of the image to be analyzed and the character keyword.
Optionally, the determining, according to the actual direction of each image to be analyzed, the role keyword, and a preset corresponding relationship, a matching degree between each image to be analyzed and the role keyword includes:
determining keywords corresponding to the images to be analyzed according to the preset corresponding relation and the actual direction of the images to be analyzed;
and matching keywords corresponding to the images to be analyzed with the role keywords respectively to obtain the matching degree of the images to be analyzed and the role keywords.
Optionally, the sorting the persons to be matched according to the matching degrees includes:
and sequencing the personnel to be matched according to the descending order of the maximum matching degree corresponding to the personnel to be matched.
In a second aspect, an embodiment of the present application provides an image analysis apparatus, including:
the information acquisition module is used for acquiring images to be analyzed and role keywords of each person to be matched, wherein the role keywords represent at least one of the image, the quality and the character of a target role;
the matching degree calculation module is used for calculating the matching degree of each image to be analyzed and the role keywords respectively through a pre-trained matching model, wherein the pre-trained matching model is obtained by utilizing a sample image marked with a preset dimension direction through training;
and the personnel sorting module is used for sorting the personnel to be matched according to the matching degrees, so that the personnel to be matched are recommended for the target role.
Optionally, the matching degree calculating module includes:
the personnel feature extraction submodule is used for respectively extracting the personnel features of the personnel to be matched in the images to be analyzed through a pre-trained matching model;
the actual direction determining submodule is used for analyzing the characteristics of the personnel, determining the direction of each image to be analyzed in each preset dimension, and obtaining the actual direction of each image to be analyzed;
and the matching degree determining sub-module is used for determining the matching degree of each image to be analyzed and the role keywords according to the actual direction of each image to be analyzed, the role keywords and a preset corresponding relation, wherein the preset corresponding relation represents the corresponding relation between the keywords and the preset dimension direction.
Optionally, the matching degree determining sub-module is specifically configured to: determining the direction of the role key words in each preset dimension according to the preset corresponding relation to obtain the reference direction of the role key words; and aiming at each image to be analyzed, comparing the actual direction of the image to be analyzed with the reference direction of the character keyword, and determining the score of the image to be analyzed as the matching degree of the image to be analyzed and the character keyword.
Optionally, the matching degree determining sub-module is specifically configured to: determining keywords corresponding to the images to be analyzed according to the preset corresponding relation and the actual direction of the images to be analyzed; and matching keywords corresponding to the images to be analyzed with the role keywords respectively to obtain the matching degree of the images to be analyzed and the role keywords.
Optionally, the people sorting module is specifically configured to: and sequencing the personnel to be matched according to the descending order of the maximum matching degree corresponding to the personnel to be matched, so as to recommend the personnel to be matched aiming at the target role.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the image analysis method according to any one of the first aspect described above when executing the program stored in the memory.
In yet another aspect of this embodiment, there is also provided a computer-readable storage medium having stored therein instructions, which when executed on a computer, cause the computer to perform any of the image analysis methods described above.
In yet another aspect, an embodiment of the present application further provides a computer program product containing instructions which, when run on a computer, cause the computer to execute any of the image analysis methods described above.
The image analysis method and apparatus and the electronic device provided by the embodiments of the application acquire an image to be analyzed and role keywords for each person to be matched, where the role keywords represent at least one of the image, the quality and the character of a target role; calculate the matching degree between each image to be analyzed and the role keywords through a pre-trained matching model; and rank the persons to be matched according to the matching degrees, thereby recommending persons to be matched for the target role. By analyzing the images to be analyzed of the persons to be matched, determining the matching degree between each image to be analyzed and the role keywords, and ranking the persons to be matched according to the matching degrees, persons to be matched are recommended for the target role, realizing automatic artist recommendation for a specified role. Compared with role recommendation based on the description text of a person to be matched, role recommendation based on images is more credible, the matching process and results are more intuitive, and cheating such as modifying the description text of a person to be matched can be reduced. Of course, not all of the advantages described above need to be achieved at the same time in practicing any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a first schematic diagram of an image analysis method according to an embodiment of the present application;
FIG. 2 is a second schematic diagram of an image analysis method according to an embodiment of the present application;
FIG. 3 is a third schematic diagram of an image analysis method according to an embodiment of the present application;
FIG. 4 is a first schematic diagram of an image analysis apparatus according to an embodiment of the present application;
FIG. 5 is a second schematic diagram of an image analysis apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the existing casting process, roles are selected manually by staff such as the director and the producer, but manual casting involves a heavy workload, so it is desirable to automatically recommend artists for a specified role.
In view of this, an embodiment of the present application provides an image analysis method, including:
acquiring images to be analyzed and role keywords of each person to be matched, wherein the role keywords represent one or more of the image, the quality and the character of a target role;
respectively calculating the matching degree of each image to be analyzed and the role keywords through a pre-trained matching model;
and sequencing the personnel to be matched according to the matching degrees.
Compared with text, an image can show an artist's appearance and temperament more directly and intuitively. In the embodiments of the application, the matching degree between each image to be analyzed of a person to be matched and the role keywords is calculated, and the persons to be matched are ranked and recommended according to the matching degrees, so that artists can be automatically recommended for a specified role and the manual workload can be reduced.
In order to more clearly illustrate the present application, a detailed description is given below.
Referring to fig. 1, fig. 1 is a first schematic diagram of an image analysis method according to an embodiment of the present application, where the method includes:
s101, obtaining images to be analyzed and role keywords of each person to be matched, wherein the role keywords represent at least one of the image, the quality and the character of a target role.
The image analysis method of the embodiment of the application can be implemented by electronic equipment, and specifically, the electronic equipment can be an intelligent camera, a personal computer or a server.
The image to be analyzed is an image including the person to be matched, and each person to be matched provides at least one image to be analyzed. The role keywords are keywords of the target role to be cast; there may be a plurality of role keywords, the role keywords represent at least one of the image, the quality and the character of the target role, and their specific content may be set according to the image, the quality and the character of the target role. For example, the role keywords may include reckless, kind, impulsive, clever, and the like.
In one possible approach, the role keywords can be extracted from the character biography of the target role. A character biography is a concise text describing the character's features, in which key words and phrases describing the character's main traits may appear. Key words, phrases and the like can be extracted from the character biography with a bag-of-words model, and invalid descriptions that are not adjectives or that express negated meanings are removed using knowledge of grammatical structure, thereby obtaining the role keywords.
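For illustration only, the following Python sketch shows one possible way to perform such keyword extraction. The candidate trait vocabulary, the negation filter, and the function name extract_role_keywords are assumptions added for the sketch; the embodiment only specifies a bag-of-words model combined with grammatical rules.

```python
import re

# Hypothetical candidate vocabulary of trait adjectives; in practice this would
# come from the bag-of-words model over many character biographies.
TRAIT_VOCAB = {"reckless", "kind", "impulsive", "clever", "melancholy", "gentle"}
NEGATIONS = {"not", "never", "no"}

def extract_role_keywords(biography: str) -> set:
    """Return trait keywords found in the biography, dropping negated ones."""
    tokens = re.findall(r"[\w']+", biography.lower())
    keywords = set()
    for i, tok in enumerate(tokens):
        if tok in TRAIT_VOCAB:
            # Grammar-based filtering is simplified here to a preceding-negation check.
            if i > 0 and tokens[i - 1] in NEGATIONS:
                continue
            keywords.add(tok)
    return keywords

print(extract_role_keywords("She is kind and clever, but not reckless."))
# {'kind', 'clever'}
```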
And S102, respectively calculating the matching degree of each image to be analyzed and the character keywords through a pre-trained matching model, wherein the pre-trained matching model is obtained by utilizing a sample image marked with a preset dimensional direction for training.
The pre-trained matching model may be a machine learning model. The preset dimensions may be set according to actual conditions, and in a possible implementation method, the preset dimensions may include fifteen dimensions, which may specifically be: positive-negative, mature-young, strong-mild, complex-simple, smart-blunt, moving-static, affinity-distance, holding-releasing, threatening-no threat, powerful-weak, fine-rough, convergent-exposed, wide-narrow, stature fit-not, tall-short. Of course, some of the fifteen dimensions may be used, or other dimensions may be added, etc., depending on the actual situation.
The training process of the matching model may include: acquiring sample images marked with preset dimension directions, for example, a sample image whose labeled direction in the "threatening-no threat" dimension points to "no threat". A sample image may be labeled with a direction for every preset dimension, or only for one or more of the dimensions. The preset dimension directions of the sample images may be marked manually. The machine learning model is then trained with the sample images to obtain the pre-trained matching model.
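As an illustration of such labeling, a training sample might be organized as sketched below. The -1/0/+1 direction encoding, the field names, and the file path are assumptions for the sketch, not part of the disclosure.

```python
# One possible layout of a sample image labeled with preset dimension directions.
PRESET_DIMENSIONS = [
    "positive-negative", "mature-young", "strong-mild", "complex-simple",
    "smart-blunt", "moving-static", "affinity-distance", "holding-releasing",
    "threatening-no threat", "powerful-weak", "fine-rough",
    "convergent-exposed", "wide-narrow", "stature fit-not", "tall-short",
]

sample = {
    "image_path": "samples/actor_0001.jpg",   # hypothetical path
    "labels": {
        # +1 = first pole, -1 = second pole, 0 = in between (assumed encoding)
        "threatening-no threat": -1,           # points to "no threat"
        "positive-negative": +1,
        # a sample may be labeled on every dimension or only on some of them
    },
}
```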
Each image to be analyzed is then analyzed with the pre-trained matching model to obtain its direction in each preset dimension, which yields the semantics corresponding to each image to be analyzed; these semantics are matched against the role keywords to obtain the matching degrees.
And S103, sequencing the people to be matched according to the matching degrees, and recommending people to be matched for the target role.
And sequencing and recommending the personnel to be matched according to the matching degrees. For example, the average value of the matching degrees of the persons to be matched is respectively calculated, and the persons to be matched are ranked and recommended according to the descending order of the average values of the matching degrees of the persons to be matched.
In a possible implementation manner, the sorting the persons to be matched according to the matching degrees includes: and sequencing the personnel to be matched according to the descending order of the maximum matching degree corresponding to the personnel to be matched.
One person to be matched may have a plurality of images to be analyzed, so one person to be matched may have a plurality of matching degrees. The maximum matching degree of each person to be matched is selected, and the persons to be matched are sorted in descending order of their maximum matching degrees, so that role recommendation is performed. When the maximum matching degrees of two or more persons to be matched are the same, for the persons to be matched with the same maximum matching degree (hereinafter referred to as target persons to be matched), the average of the top N matching degrees of each target person to be matched may be calculated, and the target persons to be matched are sorted in descending order of these averages, where N is an integer greater than 1.
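A minimal sketch of this ranking rule is given below, assuming each candidate's matching degrees are collected in a list; the value of N and the data layout are illustrative.

```python
def rank_candidates(scores, n=3):
    """Sort persons to be matched by their maximum matching degree,
    breaking ties with the average of their top-N matching degrees."""
    def sort_key(person):
        degrees = sorted(scores[person], reverse=True)
        top_n = degrees[:n]
        return (max(degrees), sum(top_n) / len(top_n))
    return sorted(scores, key=sort_key, reverse=True)

scores = {"A": [0.9, 0.4], "B": [0.9, 0.8, 0.7], "C": [0.6]}
print(rank_candidates(scores))  # ['B', 'A', 'C'] -- the 0.9 tie is broken by the top-N average
```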
In order to make the recommendation more intuitive, in one possible implementation, the images to be analyzed of each person to be matched are displayed, or displayed via links, in the ranked order of the persons to be matched.
One person to be matched may have a plurality of images to be analyzed. For any person to be matched, the images to be analyzed of that person are sorted in descending order of matching degree to obtain an image set of that person. The image set of each person to be matched may be displayed in the ranked order of the persons to be matched; or a link may be added to each person in the ranked list, the link being used to display the image set corresponding to that person; or, of course, part of the images to be analyzed of each person may be displayed directly in the ranked list, for example, the first image in the image set of a person to be matched is displayed and the remaining images are displayed via a link.
In the embodiment of the application, the images to be analyzed of the persons to be matched are analyzed, the matching degree between each image to be analyzed and the role keywords is determined, and the persons to be matched are sorted according to the matching degrees, so that persons to be matched are recommended for the target role and artists are automatically recommended for a specified role. Compared with role recommendation based on the description text of a person to be matched, role recommendation based on images is more credible, the matching process and results are more intuitive, and cheating such as modifying the description text of a person to be matched can be reduced.
In a possible implementation manner, referring to fig. 2, the calculating, by using a pre-trained matching model, a matching degree between each image to be analyzed and the character keyword respectively includes:
and S201, respectively extracting the personnel characteristics of the personnel to be matched in each image to be analyzed through a pre-trained matching model.
The pre-trained matching model can be a neural network model based on deep learning and the like, and the personnel characteristics of the personnel to be matched in each image to be analyzed are respectively extracted through the pre-trained matching model.
S202, analyzing the personnel characteristics, determining the direction of each image to be analyzed in each preset dimension, and obtaining the actual direction of each image to be analyzed.
The preset dimensions may be set according to actual conditions, and in a possible implementation method, the preset dimensions may include fifteen dimensions, which may specifically be: positive-negative, mature-young, strong-mild, complex-simple, smart-blunt, moving-static, affinity-distance, holding-releasing, threatening-no threat, powerful-weak, fine-rough, convergent-exposed, wide-narrow, stature fit-not, tall-short. Of course, some of the fifteen dimensions may be used, or other dimensions may be added, etc., depending on the actual situation.
Each person feature is analyzed by the pre-trained matching model to determine the direction of each image to be analyzed in each preset dimension, that is, the actual direction of each image to be analyzed. The direction of an image to be analyzed in a preset dimension can be understood, taking the positive-negative dimension as an example, as indicating whether the image tends toward positive, toward negative, or lies between positive and negative.
And S203, determining the matching degree of each image to be analyzed and the role keywords according to the actual direction of each image to be analyzed, the role keywords and a preset corresponding relation, wherein the preset corresponding relation represents the corresponding relation between the keywords and a preset dimension direction.
The preset correspondence represents the correspondence between keywords and preset dimension directions, and may be set according to the semantics of the actual keywords and the semantics of the preset dimension directions. For example, in the three dimensions holding-releasing, positive-negative and strong-mild, if the directions point to holding, negative and mild, they may correspond to keywords such as melancholy or depressed.
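The preset correspondence might be stored as a simple lookup table, as sketched below; the three-valued direction encoding and the example entries are assumptions that mirror the melancholy example above.

```python
# keyword -> {dimension: direction}; +1 = first pole, -1 = second pole (assumed encoding)
PRESET_CORRESPONDENCE = {
    "melancholy": {"holding-releasing": +1, "positive-negative": -1, "strong-mild": -1},
    "cheerful":   {"holding-releasing": -1, "positive-negative": +1, "moving-static": +1},
}
```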
Generally, there may be hundreds or even thousands of keywords describing a character, and if a classifier is trained for each keyword, the complexity of the matching model is very high, and it becomes almost impossible to implement, so in the embodiment of the present application, each keyword is represented by using a preset dimension and a preset corresponding relationship, for example, fifteen preset dimensions are taken as an example, and when each dimension includes three directions (three values), there are tens of millions of combinations of fifteen preset dimensions, which are enough to express the keyword of the character. By presetting the dimensions and the corresponding relation, the complexity of the matching model can be greatly reduced, and the usability of the image analysis method of the embodiment of the application is increased.
The keywords corresponding to the image to be analyzed can be determined according to the actual direction of the image to be analyzed, so that the matching degree can be calculated. In one possible embodiment, the determining the matching degree between each image to be analyzed and the character keyword according to the actual direction of each image to be analyzed, the character keyword, and a preset corresponding relationship includes:
step one, determining keywords corresponding to each image to be analyzed according to the preset corresponding relation and the actual direction of each image to be analyzed.
And respectively mapping the actual direction of each image to be analyzed into corresponding keywords according to the actual direction of each image to be analyzed and a preset corresponding relation, so as to obtain the keywords corresponding to each image to be analyzed.
And step two, matching the keywords corresponding to the images to be analyzed with the role keywords respectively to obtain the matching degree of the images to be analyzed and the role keywords.
Specifically, for each image to be analyzed, the number of role keywords included among the keywords corresponding to that image may be counted, and the ratio of this number to the total number of role keywords may be determined as the matching degree between the image to be analyzed and the role keywords. For example, if an image to be analyzed corresponds to 40 keywords, 18 of the role keywords are included among these 40 keywords, and the total number of role keywords is 20, then the matching degree between the image to be analyzed and the role keywords is 18/20 = 0.9.
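A sketch of this keyword-ratio matching degree, under the assumption that the keyword collections are available as Python sets:

```python
def keyword_match_degree(image_keywords, role_keywords):
    """Share of role keywords that also appear among the image's keywords."""
    if not role_keywords:
        return 0.0
    return len(set(image_keywords) & set(role_keywords)) / len(set(role_keywords))

# e.g. 18 of 20 role keywords appear among the image's 40 keywords -> 18 / 20 = 0.9
```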
Taking fifteen preset dimensions as an example, when each dimension includes three directions (three values) there are tens of millions of direction combinations, and different combinations may correspond to different keywords, so mapping actual directions to keywords can require a large amount of calculation. In a possible embodiment, referring to fig. 3, the determining a matching degree of each image to be analyzed and the character keyword according to an actual direction of each image to be analyzed, the character keyword, and a preset corresponding relationship therefore includes:
s301, determining the direction of the character keyword in each preset dimension according to the preset corresponding relation to obtain the reference direction of the character keyword.
The direction of each role keyword in the corresponding preset dimensions is determined according to the preset correspondence and taken as the reference direction of that role keyword.
S302, aiming at each image to be analyzed, comparing the actual direction of the image to be analyzed with the reference direction of the character keyword, and determining the score of the image to be analyzed as the matching degree of the image to be analyzed and the character keyword.
In the same preset dimension, if the actual direction of the image to be analyzed is the same as the reference direction of the character keyword, a high score is obtained; if the actual direction is opposite to the reference direction in that dimension, a low score, no score, or a deduction results. For example, in a given preset dimension, if the actual direction of the image to be analyzed is the same as the reference direction of the character keyword, the score of that dimension is obtained; if the actual direction lies between the same direction and the opposite direction, a score of 0 is obtained for that dimension; and if the actual direction is opposite to the reference direction, the score of that dimension is deducted (a negative score). In this way, the score of each image to be analyzed with respect to the character keywords can be calculated and used as the matching degree between the image to be analyzed and the character keywords.
In a possible implementation, a corresponding weight coefficient may be set for each preset dimension, where the weight coefficient may be set according to the degree to which the target role needs to emphasize that preset dimension. For example, if the target role particularly emphasizes the moving-static dimension, the weight coefficient of the moving-static dimension can be increased appropriately.
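The direction-comparison scoring described in the preceding paragraphs, including the optional per-dimension weights, could be sketched as follows; the -1/0/+1 encoding and the weight handling are assumptions.

```python
def direction_score(actual, reference, weights=None):
    """Compare an image's actual directions with a keyword's reference directions.
    Same direction: +weight; in-between: 0; opposite direction: -weight (deduction)."""
    weights = weights or {}
    score = 0.0
    for dim, ref_dir in reference.items():
        act_dir = actual.get(dim, 0)
        w = weights.get(dim, 1.0)
        if act_dir == ref_dir and ref_dir != 0:
            score += w
        elif act_dir == 0 or ref_dir == 0:
            score += 0.0
        else:
            score -= w
    return score

reference = {"holding-releasing": +1, "positive-negative": -1, "strong-mild": -1}
actual = {"holding-releasing": +1, "positive-negative": 0, "strong-mild": +1}
print(direction_score(actual, reference, weights={"strong-mild": 2.0}))  # 1.0 + 0.0 - 2.0 = -1.0
```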
In the embodiment of the application, the directions of the role keywords in the preset dimensions are determined, and compared with the method for determining the keywords according to the actual direction of the image to be analyzed, the calculation amount can be greatly reduced, the complexity of the matching model is reduced, and the practicability of the image analysis method of the embodiment of the application is improved.
In one possible embodiment, the step of pre-training the matching model comprises:
step one, obtaining a sample image marked with a preset dimension direction.
The sample images may be labeled manually. The semantics of the directions of each preset dimension are obtained in advance, and the preset correspondence is obtained, where the preset correspondence represents the correspondence between keywords and preset dimension directions. The direction of each sample image in the preset dimensions is then labeled according to the keywords corresponding to that sample image.
And step two, inputting each sample image into a preset deep learning model for training.
In the preset deep learning model, one classifier can be trained for each preset dimension, and all classifiers can share one feature extraction network, which is used to extract the person features in the sample images; the parameters for each preset dimension can be adjusted independently, so that the training accuracy of each preset dimension is ensured.
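For illustration, a minimal PyTorch sketch of such a model is given below, assuming a ResNet-18 backbone as the shared feature-extraction network and a three-way (first pole / in-between / second pole) classifier head per preset dimension; the backbone choice, layer sizes, and class count are assumptions, not part of the disclosure.

```python
import torch
import torch.nn as nn
from torchvision import models

class MatchingModel(nn.Module):
    def __init__(self, dimensions, num_directions=3):
        super().__init__()
        backbone = models.resnet18(weights=None)      # shared feature extraction network
        feature_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # one independently adjustable classifier head per preset dimension
        self.heads = nn.ModuleDict(
            {dim: nn.Linear(feature_dim, num_directions) for dim in dimensions}
        )

    def forward(self, images):
        features = self.backbone(images)              # person features
        return {dim: head(features) for dim, head in self.heads.items()}

model = MatchingModel(["positive-negative", "threatening-no threat"])
logits = model(torch.randn(2, 3, 224, 224))           # per-dimension direction logits
```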
And step three, obtaining a pre-trained matching model when the preset condition is met.
The preset condition may be set according to the actual situation; for example, the preset condition is determined to be satisfied when a set number of training iterations is reached, when the classification results of the preset dimensions converge, or when the loss function of the whole model converges.
An embodiment of the present application further provides an image analysis apparatus, referring to fig. 4, the apparatus including:
the information acquisition module 401 is configured to acquire an image to be analyzed and a role keyword of each person to be matched, where the role keyword represents at least one of an image, a quality, and a character of a target role;
a matching degree calculation module 402, configured to calculate, through a pre-trained matching model, matching degrees between the images to be analyzed and the role keywords, respectively, where the pre-trained matching model is obtained by training a sample image labeled with a preset dimensional direction;
and a personnel sorting module 403, configured to sort the personnel to be matched according to the matching degrees, so as to recommend the personnel to be matched for the target role.
In a possible implementation, referring to fig. 5, the matching degree calculating module 402 includes:
a person feature extraction submodule 501, configured to extract, through a pre-trained matching model, person features of persons to be matched in each image to be analyzed;
an actual direction determining submodule 502, configured to analyze each person feature, determine a direction of each image to be analyzed in each preset dimension, and obtain an actual direction of each image to be analyzed;
the matching degree determining sub-module 503 is configured to determine a matching degree between each image to be analyzed and the role keyword according to the actual direction of each image to be analyzed, the role keyword, and a preset corresponding relationship, where the preset corresponding relationship represents a corresponding relationship between a keyword and a preset dimension direction.
In a possible implementation manner, the matching degree determining sub-module 503 is specifically configured to: determining the direction of the character keywords in each preset dimension according to the preset corresponding relation to obtain the reference direction of the character keywords; and aiming at each image to be analyzed, comparing the actual direction of the image to be analyzed with the reference direction of the character keyword, and determining the score of the image to be analyzed as the matching degree of the image to be analyzed and the character keyword.
In a possible implementation manner, the matching degree determining sub-module 503 is specifically configured to: determining keywords corresponding to the images to be analyzed according to the preset corresponding relation and the actual direction of the images to be analyzed; and matching the keywords corresponding to the images to be analyzed with the role keywords respectively to obtain the matching degree of the images to be analyzed and the role keywords.
In a possible implementation manner, the people sorting module 403 is specifically configured to: and sequencing the personnel to be matched according to the descending order of the maximum matching degree corresponding to the personnel to be matched, so as to recommend the personnel to be matched according to the target role.
An embodiment of the present application further provides an electronic device, including: a processor and a memory;
the memory is used for storing computer programs;
when the processor is used for executing the computer program stored in the memory, the following steps are realized:
acquiring images to be analyzed and role keywords of each person to be matched, wherein the role keywords represent at least one of the image, the quality and the character of a target role;
respectively calculating the matching degree of each image to be analyzed and the character keywords through a pre-trained matching model, wherein the pre-trained matching model is obtained by utilizing a sample image marked with a preset dimension direction for training;
and sequencing the people to be matched according to the matching degrees, so as to recommend the people to be matched to the target role.
In the embodiment of the application, the images to be analyzed of the persons to be matched are analyzed, the matching degree between each image to be analyzed and the role keywords is determined, and the persons to be matched are sorted according to the matching degrees, so that persons to be matched are recommended for the target role and artists are automatically recommended for a specified role. Compared with role recommendation based on the description text of a person to be matched, role recommendation based on images is more credible, the matching process and results are more intuitive, and cheating such as modifying the description text of a person to be matched can be reduced.
Optionally, referring to fig. 6, the electronic device according to the embodiment of the present application further includes a communication interface 602 and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 complete communication with each other through the communication bus 604.
Optionally, the processor is configured to implement any of the image analysis methods when the processor is used to execute the computer program stored in the memory.
The communication bus mentioned in the electronic device may be a PCI (Peripheral component interconnect) bus, an EISA (Extended Industry standard architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the following steps:
acquiring images to be analyzed and role keywords of each person to be matched, wherein the role keywords represent at least one of the image, the quality and the character of a target role;
respectively calculating the matching degree of each image to be analyzed and the character keywords through a pre-trained matching model, wherein the pre-trained matching model is obtained by utilizing a sample image marked with a preset dimension direction for training;
and sequencing the people to be matched according to the matching degrees, so as to recommend the people to be matched to the target role.
Optionally, the computer program, when executed by a processor, is further capable of implementing any of the image analysis methods described above.
In the embodiment of the application, the images to be analyzed of the persons to be matched are analyzed, the matching degree between each image to be analyzed and the role keywords is determined, and the persons to be matched are sorted according to the matching degrees, so that persons to be matched are recommended for the target role and artists are automatically recommended for a specified role. Compared with role recommendation based on the description text of a person to be matched, role recommendation based on images is more credible, the matching process and results are more intuitive, and cheating such as modifying the description text of a person to be matched can be reduced.
In a further embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the image analysis method as described in any of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber, DSL (digital subscriber line)) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., SSD (Solid State Disk)), among others.
It should be noted that, in this document, the technical features in the various alternatives can be combined to form the scheme as long as the technical features are not contradictory, and the scheme is within the scope of the disclosure of the present application. Relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the electronic device, and the storage medium, since they are substantially similar to the method embodiments, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (11)

1. A method of image analysis, the method comprising:
acquiring images to be analyzed and role keywords of each person to be matched, wherein the role keywords represent at least one of the image, the quality and the character of a target role;
respectively calculating the matching degree of each image to be analyzed and the role keywords through a pre-trained matching model, wherein the pre-trained matching model is obtained by utilizing a sample image marked with a preset dimension direction for training;
and sequencing the personnel to be matched according to the matching degrees, so as to recommend the personnel to be matched to the target role.
2. The method according to claim 1, wherein the calculating, through a pre-trained matching model, the degree of matching between each image to be analyzed and the character keyword respectively comprises:
respectively extracting personnel characteristics of the personnel to be matched in each image to be analyzed through a pre-trained matching model;
analyzing the personnel characteristics, determining the direction of each image to be analyzed in each preset dimension, and obtaining the actual direction of each image to be analyzed;
and determining the matching degree of each image to be analyzed and the role keywords according to the actual direction of each image to be analyzed, the role keywords and a preset corresponding relation, wherein the preset corresponding relation represents the corresponding relation between the keywords and the preset dimension direction.
3. The method according to claim 2, wherein the determining the matching degree between each image to be analyzed and the character keyword according to the actual direction of each image to be analyzed, the character keyword and a preset corresponding relationship comprises:
determining the direction of the role key words in each preset dimension according to the preset corresponding relation to obtain the reference direction of the role key words;
and aiming at each image to be analyzed, comparing the actual direction of the image to be analyzed with the reference direction of the character keyword, and determining the score of the image to be analyzed as the matching degree of the image to be analyzed and the character keyword.
4. The method according to claim 2, wherein the determining the matching degree between each image to be analyzed and the character keyword according to the actual direction of each image to be analyzed, the character keyword and a preset corresponding relationship comprises:
determining keywords corresponding to the images to be analyzed according to the preset corresponding relation and the actual direction of the images to be analyzed;
and matching keywords corresponding to the images to be analyzed with the role keywords respectively to obtain the matching degree of the images to be analyzed and the role keywords.
5. The method according to claim 1, wherein said ranking each of said people to be matched according to each of said degrees of match comprises:
and sequencing the personnel to be matched according to the descending order of the maximum matching degree corresponding to the personnel to be matched.
6. An image analysis apparatus, characterized in that the apparatus comprises:
the information acquisition module is used for acquiring images to be analyzed and role keywords of each person to be matched, wherein the role keywords represent at least one of the image, the quality and the character of a target role;
the matching degree calculation module is used for calculating the matching degree of each image to be analyzed and the role keywords respectively through a pre-trained matching model, wherein the pre-trained matching model is obtained by utilizing a sample image marked with a preset dimension direction through training;
and the personnel sorting module is used for sorting the personnel to be matched according to the matching degrees, so that the personnel to be matched are recommended for the target role.
7. The apparatus of claim 6, wherein the matching degree calculating module comprises:
the personnel feature extraction submodule is used for respectively extracting the personnel features of the personnel to be matched in the images to be analyzed through a pre-trained matching model;
the actual direction determining submodule is used for analyzing the characteristics of the personnel, determining the direction of each image to be analyzed in each preset dimension, and obtaining the actual direction of each image to be analyzed;
and the matching degree determining sub-module is used for determining the matching degree of each image to be analyzed and the role keywords according to the actual direction of each image to be analyzed, the role keywords and a preset corresponding relation, wherein the preset corresponding relation represents the corresponding relation between the keywords and the preset dimension direction.
8. The apparatus according to claim 7, wherein the match-degree determination submodule is specifically configured to: determining the direction of the role key words in each preset dimension according to the preset corresponding relation to obtain the reference direction of the role key words; and aiming at each image to be analyzed, comparing the actual direction of the image to be analyzed with the reference direction of the character keyword, and determining the score of the image to be analyzed as the matching degree of the image to be analyzed and the character keyword.
9. The apparatus according to claim 7, wherein the match-degree determination submodule is specifically configured to: determining keywords corresponding to the images to be analyzed according to the preset corresponding relation and the actual direction of the images to be analyzed; and matching keywords corresponding to the images to be analyzed with the role keywords respectively to obtain the matching degree of the images to be analyzed and the role keywords.
10. The apparatus of claim 6, wherein the people ranking module is specifically configured to: and sequencing the personnel to be matched according to the descending order of the maximum matching degree corresponding to the personnel to be matched, so as to recommend the personnel to be matched aiming at the target role.
11. An electronic device comprising a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the image analysis method according to any one of claims 1 to 5 when executing the program stored in the memory.
CN201911286023.2A 2019-12-13 2019-12-13 Image analysis method and device and electronic equipment Pending CN111062435A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911286023.2A CN111062435A (en) 2019-12-13 2019-12-13 Image analysis method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911286023.2A CN111062435A (en) 2019-12-13 2019-12-13 Image analysis method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111062435A true CN111062435A (en) 2020-04-24

Family

ID=70301583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911286023.2A Pending CN111062435A (en) 2019-12-13 2019-12-13 Image analysis method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111062435A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105303149A (en) * 2014-05-29 2016-02-03 腾讯科技(深圳)有限公司 Figure image display method and apparatus
US20180365518A1 (en) * 2016-03-29 2018-12-20 Tencent Technology (Shenzhen) Company Limited Target object presentation method and apparatus
WO2019085330A1 (en) * 2017-11-02 2019-05-09 平安科技(深圳)有限公司 Personal character analysis method, device, and storage medium
CN109325115A (en) * 2018-08-16 2019-02-12 中国传媒大学 A kind of role analysis method and analysis system
CN109409196A (en) * 2018-08-30 2019-03-01 深圳壹账通智能科技有限公司 Personality prediction technique based on face, device, electronic equipment
CN109657542A (en) * 2018-11-09 2019-04-19 深圳壹账通智能科技有限公司 Personality prediction technique, device, computer equipment and the computer storage medium of interviewee
CN110390254A (en) * 2019-05-24 2019-10-29 平安科技(深圳)有限公司 Character analysis method, apparatus, computer equipment and storage medium based on face

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966850A (en) * 2020-07-21 2020-11-20 珠海格力电器股份有限公司 Picture screening method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US20220156464A1 (en) Intelligently summarizing and presenting textual responses with machine learning
US10671851B2 (en) Determining recommended object
CN112347244B (en) Yellow-based and gambling-based website detection method based on mixed feature analysis
US20170200066A1 (en) Semantic Natural Language Vector Space
US10503738B2 (en) Generating recommendations for media assets to be displayed with related text content
AU2015310494A1 (en) Sentiment rating system and method
CN107562939B (en) Vertical domain news recommendation method and device and readable storage medium
CN110991187A (en) Entity linking method, device, electronic equipment and medium
CN105630975B (en) Information processing method and electronic equipment
CN108959329B (en) Text classification method, device, medium and equipment
CN111767713A (en) Keyword extraction method and device, electronic equipment and storage medium
CN110688452A (en) Text semantic similarity evaluation method, system, medium and device
CN107766316B (en) Evaluation data analysis method, device and system
CN112732974A (en) Data processing method, electronic equipment and storage medium
CN116109373A (en) Recommendation method and device for financial products, electronic equipment and medium
CN107908649B (en) Text classification control method
CN113869063A (en) Data recommendation method and device, electronic equipment and storage medium
CN112163415A (en) User intention identification method and device for feedback content and electronic equipment
CN111062435A (en) Image analysis method and device and electronic equipment
CN112182451A (en) Webpage content abstract generation method, equipment, storage medium and device
CN110019813B (en) Life insurance case searching method, searching device, server and readable storage medium
CN110837732A (en) Method and device for identifying intimacy between target people, electronic equipment and storage medium
Wijewickrema Impact of an ontology for automatic text classification
CN111050194B (en) Video sequence processing method, video sequence processing device, electronic equipment and computer readable storage medium
CN110990709A (en) Role automatic recommendation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination