CN110555374A - resource sharing method and device, computer equipment and storage medium - Google Patents
resource sharing method and device, computer equipment and storage medium Download PDFInfo
- Publication number
- CN110555374A CN110555374A CN201910677578.3A CN201910677578A CN110555374A CN 110555374 A CN110555374 A CN 110555374A CN 201910677578 A CN201910677578 A CN 201910677578A CN 110555374 A CN110555374 A CN 110555374A
- Authority
- CN
- China
- Prior art keywords
- resource
- emotion
- resource request
- micro
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/214—Pattern recognition; analysing; design or setup of recognition systems; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/22—Pattern recognition; analysing; matching criteria, e.g. proximity measures
- G06V40/168—Recognition of human faces in image or video data; feature extraction; face representation
- G06V40/174—Recognition of human faces in image or video data; facial expression recognition
- H04L67/06—Network arrangements or protocols for supporting network services or applications; protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
Abstract
The application relates to machine learning within artificial intelligence and provides a resource sharing method, apparatus, computer device, and storage medium. The method comprises the following steps: acquiring resource-request face pictures uploaded by at least two resource requesters, each shot according to a reference face picture published by a resource publisher; extracting features from the resource-request face pictures to obtain resource-request micro-expression features, inputting these features into a trained micro-expression analysis model for analysis, and obtaining each requester's resource-request emotion score under each candidate emotional state type; and extracting features from the reference face picture to obtain reference micro-expression features, inputting these into the trained micro-expression analysis model for analysis to obtain the reference emotional state type corresponding to the resource publisher, and obtaining each requester's target resource-request emotion score under that reference type, so that the publisher's resource to be shared is split accordingly and each requester's resource sharing result is determined.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for resource sharing, a computer device, and a storage medium.
Background
With the development of computer technology, people live, work, and learn through a variety of computer applications. Resources can be shared not only offline but also over a network. Shared resources include, but are not limited to, virtual red packets, electronic coupons, loyalty coupons, points, electronic vouchers, game currency, and virtual items.
In conventional resource sharing methods, a resource requester obtains a resource only when it fully matches the resource sharing conditions, which limits the number of resource sharing recipients and prevents finer-grained distribution of resources.
Disclosure of Invention
Therefore, to address the above technical problem, it is necessary to provide a micro-expression-based resource sharing method, apparatus, computer device, and storage medium that increase a resource requester's likelihood of receiving an allocation and improve the granularity of resource sharing.
A method of resource sharing, the method comprising:
Acquiring resource-request face pictures uploaded by resource requesters, wherein each resource-request face picture is a face picture shot according to a reference face picture published by a resource publisher, and there are at least two resource requesters;
extracting features from the resource-request face pictures to obtain resource-request micro-expression features, inputting these features into a trained micro-expression analysis model for analysis, and obtaining each requester's resource-request emotion score under each candidate emotional state type;
extracting features from the reference face picture to obtain reference micro-expression features, inputting these into the trained micro-expression analysis model for analysis, and obtaining the reference emotional state type corresponding to the resource publisher, the reference emotional state type being the emotional state type selected from the candidate emotional state types according to the emotion score of the reference face picture;
and acquiring each requester's target resource-request emotion score under the reference emotional state type, splitting the publisher's resource to be shared according to those target scores, and determining the resource sharing result matched to each requester.
In one embodiment, splitting the publisher's resource to be shared according to the target resource-request emotion scores and determining each requester's matched resource sharing result includes:
Summing the target resource-request emotion scores of all requesters under the reference emotional state type to obtain a total emotion score, and calculating each requester's resource-request emotion similarity as its own target score relative to that total;
Sorting the requesters by target resource-request emotion score in descending order, and deriving each requester's allocation position from the sorted result;
Computing, in allocation order, the resource sharing result matched to each requester from its resource-request emotion similarity, and accumulating these results to obtain the currently allocated amount;
Determining the resource sharing result of the last requester as the difference between the total resource amount to be shared and the currently allocated amount;
When the amount allocated to the last requester is greater than the amount allocated to the second-to-last requester, adjusting the second-to-last requester's amount so that it is greater than or equal to the last requester's amount.
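The allocation steps of this embodiment can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function and variable names are assumptions, and the scores stand for the target resource-request emotion scores under the reference emotional state type.

```python
# Minimal sketch of the score-proportional split described above.
# Function and variable names are illustrative, not from the patent.

def allocate_shares(scores, total_amount):
    """Split total_amount among requesters in proportion to their
    target emotion scores under the reference emotional state type."""
    total_score = sum(scores.values())
    # Sort requesters by score, highest first, to fix the allocation order.
    order = sorted(scores, key=scores.get, reverse=True)
    shares, allocated = {}, 0.0
    for requester in order[:-1]:
        # Emotion similarity = this requester's score over the total score.
        share = round(total_amount * scores[requester] / total_score, 2)
        shares[requester] = share
        allocated += share
    # The last requester receives the remainder of the total amount.
    last = order[-1]
    shares[last] = round(total_amount - allocated, 2)
    # If rounding left the last share above the second-to-last one,
    # swap them so the second-to-last share is >= the last share.
    if len(order) >= 2 and shares[last] > shares[order[-2]]:
        shares[order[-2]], shares[last] = shares[last], shares[order[-2]]
    return shares

print(allocate_shares({"A": 80, "B": 15, "C": 5}, 10.0))
# {'A': 8.0, 'B': 1.5, 'C': 0.5}
```

Giving the last requester the remainder guarantees the shares sum exactly to the amount to be shared despite rounding, which is why the final adjustment step in the text is needed.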
In one embodiment, the method further comprises:
inputting the resource-request face pictures into a trained face analysis model for analysis to obtain first facial feature point attribute information for each resource requester;
Inputting the first facial feature point attribute information into a trained personality analysis model for analysis to obtain each requester's first personality score under each candidate personality trait, and outputting each requester's target personality type according to those first scores;
Inputting the reference face picture into the trained face analysis model for analysis to obtain second facial feature point attribute information for the resource publisher;
Inputting the second facial feature point attribute information into the trained personality analysis model for analysis to obtain the publisher's second personality score under each candidate personality trait, and outputting the publisher's reference personality type according to those second scores;
screening the requesters by their target personality types to keep those matching the reference personality type as target requesters;
The obtaining of each requester's target resource-request emotion score under the reference emotional state type then includes:
acquiring the target resource-request emotion score of each target requester under the reference emotional state type.
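A minimal sketch of this screening step, under assumed names (the function names and trait labels are illustrative, not from the patent): each party's personality type is taken as its highest-scoring candidate trait, and only requesters whose type matches the publisher's reference type pass.

```python
# Illustrative sketch of the personality-screening step; all names
# and trait labels are assumptions, not taken from the patent.

def personality_type(trait_scores):
    """The candidate personality trait with the highest score."""
    return max(trait_scores, key=trait_scores.get)

def screen_requesters(requester_traits, publisher_traits):
    """Keep requesters whose personality type matches the publisher's
    reference personality type."""
    reference_type = personality_type(publisher_traits)
    return [r for r, scores in requester_traits.items()
            if personality_type(scores) == reference_type]

requesters = {
    "A": {"calm": 0.7, "lively": 0.3},
    "B": {"calm": 0.2, "lively": 0.8},
}
print(screen_requesters(requesters, {"calm": 0.9, "lively": 0.1}))  # ['A']
```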
In one embodiment, the method further comprises:
Acquiring each resource requester's first personality score under each candidate personality trait;
acquiring the resource publisher's second personality score under each candidate personality trait, and taking the trait with the highest second score as the publisher's reference personality type;
Acquiring each requester's target personality score under the reference personality type, and calculating each requester's composite score from its target personality score and its target resource-request emotion score;
Splitting the publisher's resource to be shared according to the composite scores, and determining the resource sharing result matched to each requester.
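The text does not fix how the two scores are combined into the composite score, so the sketch below simply assumes an equal-weight sum; the function name and the weight `w` are illustrative assumptions.

```python
# Hedged sketch: composite score as a weighted sum of the target
# personality score and the target resource-request emotion score.
# The weight w is an assumption; the patent only says they are combined.

def composite_score(personality_score, emotion_score, w=0.5):
    return w * personality_score + (1 - w) * emotion_score

print(composite_score(80, 60))  # 70.0 with equal weights
```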
In one embodiment, the training step of the micro-expression analysis model includes:
Acquiring training face pictures of users with different personality traits together with their micro-expression labels, each label comprising several candidate emotional state types and the corresponding standard emotion scores;
extracting features from the training face pictures to obtain training micro-expression features, and inputting these into the micro-expression analysis model to obtain the model's predicted emotion score for each candidate emotional state type;
And calculating a prediction loss from the standard and predicted emotion scores, adjusting the parameters of the micro-expression analysis model according to that loss, and obtaining the trained micro-expression analysis model once the loss satisfies the convergence condition.
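The training loop above can be sketched as follows. The patent does not specify the model architecture, so a tiny linear model mapping a micro-expression feature vector to one score per candidate emotional state type stands in for the micro-expression analysis model; all names, the emotion list, and the synthetic data are assumptions for illustration only.

```python
import random

# Hedged sketch of the training step: predict emotion scores, compute a
# squared-error loss against the standard scores, adjust parameters,
# and stop when the loss change falls below a convergence tolerance.

EMOTIONS = ["happiness", "sadness", "anger", "surprise"]  # assumed types

def predict(weights, features):
    # One linear emotion score per candidate emotional state type.
    return [sum(w * x for w, x in zip(ws, features)) for ws in weights]

def mean_loss(weights, samples):
    # Mean squared error between predicted and standard emotion scores.
    return sum(
        sum((p - s) ** 2 for p, s in zip(predict(weights, f), y))
        for f, y in samples
    ) / len(samples)

def train(samples, n_features, lr=0.05, max_steps=2000, tol=1e-9):
    """samples: list of (feature vector, standard emotion scores) pairs."""
    weights = [[0.0] * n_features for _ in EMOTIONS]
    prev_loss = float("inf")
    for _ in range(max_steps):
        for features, standard in samples:
            predicted = predict(weights, features)
            for e, (p, s) in enumerate(zip(predicted, standard)):
                err = p - s
                # Adjust parameters along the squared-error gradient.
                for j in range(n_features):
                    weights[e][j] -= lr * 2 * err * features[j] / len(samples)
        loss = mean_loss(weights, samples)
        if abs(prev_loss - loss) < tol:  # convergence condition reached
            return weights, loss
        prev_loss = loss
    return weights, prev_loss

random.seed(0)
samples = [([random.random() for _ in range(6)],
            [random.random() for _ in EMOTIONS]) for _ in range(32)]
weights, final_loss = train(samples, n_features=6)
```

The same loop shape applies whatever model replaces the linear stand-in; only `predict` and the parameter update would change.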
In one embodiment, the method further comprises:
Acquiring the emotional state type with the highest resource-request emotion score for a resource-request face picture as its current emotional state type;
Acquiring the facial feature point attribute information corresponding to that resource-request face picture;
And returning the current emotional state type and the facial feature point attribute information to the terminal of the corresponding resource requester, so that the terminal displays them over the resource-request face picture.
An apparatus for resource sharing, the apparatus comprising:
an acquisition module, configured to acquire resource-request face pictures uploaded by resource requesters, wherein each resource-request face picture is a face picture shot according to a reference face picture published by a resource publisher, and there are at least two resource requesters;
a resource-request emotion score determining module, configured to extract features from the resource-request face pictures to obtain resource-request micro-expression features, input these features into a trained micro-expression analysis model for analysis, and obtain each requester's resource-request emotion score under each candidate emotional state type;
a reference emotional state type determining module, configured to extract features from the reference face picture to obtain reference micro-expression features, input these into the trained micro-expression analysis model for analysis, and obtain the reference emotional state type corresponding to the resource publisher, the reference emotional state type being the emotional state type selected from the candidate emotional state types according to the emotion score of the reference face picture;
and a resource sharing module, configured to acquire each requester's target resource-request emotion score under the reference emotional state type, split the publisher's resource to be shared according to those scores, and determine the resource sharing result matched to each requester.
In one embodiment, the resource sharing module is further configured to: sum the target resource-request emotion scores of all requesters under the reference emotional state type to obtain a total emotion score, and calculate each requester's resource-request emotion similarity from its own target score and that total; sort the requesters by target resource-request emotion score in descending order and derive each requester's allocation position from the sorted result; compute, in allocation order, the resource sharing result matched to each requester from its resource-request emotion similarity, accumulating the results to obtain the currently allocated amount; determine the last requester's sharing result as the difference between the total amount to be shared and the currently allocated amount; and, when the last requester's amount is greater than the second-to-last requester's, adjust the second-to-last requester's amount so that it is greater than or equal to the last requester's.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring resource-request face pictures uploaded by resource requesters, wherein each resource-request face picture is a face picture shot according to a reference face picture published by a resource publisher, and there are at least two resource requesters;
extracting features from the resource-request face pictures to obtain resource-request micro-expression features, inputting these features into a trained micro-expression analysis model for analysis, and obtaining each requester's resource-request emotion score under each candidate emotional state type;
extracting features from the reference face picture to obtain reference micro-expression features, inputting these into the trained micro-expression analysis model for analysis, and obtaining the reference emotional state type corresponding to the resource publisher, the reference emotional state type being the emotional state type selected from the candidate emotional state types according to the emotion score of the reference face picture;
and acquiring each requester's target resource-request emotion score under the reference emotional state type, splitting the publisher's resource to be shared according to those target scores, and determining the resource sharing result matched to each requester.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring resource-request face pictures uploaded by resource requesters, wherein each resource-request face picture is a face picture shot according to a reference face picture published by a resource publisher, and there are at least two resource requesters;
extracting features from the resource-request face pictures to obtain resource-request micro-expression features, inputting these features into a trained micro-expression analysis model for analysis, and obtaining each requester's resource-request emotion score under each candidate emotional state type;
extracting features from the reference face picture to obtain reference micro-expression features, inputting these into the trained micro-expression analysis model for analysis, and obtaining the reference emotional state type corresponding to the resource publisher, the reference emotional state type being the emotional state type selected from the candidate emotional state types according to the emotion score of the reference face picture;
and acquiring each requester's target resource-request emotion score under the reference emotional state type, splitting the publisher's resource to be shared according to those target scores, and determining the resource sharing result matched to each requester.
According to the resource sharing method, apparatus, computer device, and storage medium above, resource-request face pictures uploaded by at least two resource requesters, each shot according to the reference face picture published by the resource publisher, are acquired; features are extracted from them to obtain resource-request micro-expression features, which a trained micro-expression analysis model analyzes into each requester's resource-request emotion score under every candidate emotional state type. Features are likewise extracted from the reference face picture and analyzed into the publisher's reference emotional state type, which is selected from the candidate types according to the reference picture's emotion scores; each requester's target emotion score under that reference type is then read, the publisher's resource to be shared is split according to those scores, and each requester's matched sharing result is determined. Even when the emotion a requester presents does not match the publisher's reference emotional state, the requester still has a score under the reference type and can still receive a corresponding resource. This increases a requester's likelihood of receiving an allocation, improves the granularity of resource sharing, raises requesters' enthusiasm for participating, and improves the application's user coverage and virality.
Drawings
FIG. 1 is a diagram of an application environment in which a method for resource sharing is implemented, according to an embodiment;
FIG. 2 is a flow diagram illustrating a method for resource sharing in one embodiment;
FIG. 3 is a diagram of a resource publisher terminal interface in one embodiment;
FIG. 4 is a diagram illustrating a resource requestor terminal interface in one embodiment;
FIG. 5 is a diagram of a resource sharing results interface, according to an embodiment;
FIG. 6 is a block diagram of an apparatus for resource sharing in one embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The resource sharing method provided by this application can be applied in the environment shown in FIG. 1, which includes a terminal 110, a server 120, a terminal 130, and a terminal 140. Terminals 110, 130, and 140 communicate with the server 120 over a network, which may be wireless or wired, such as an IP network or a cellular mobile communication network; the numbers of terminals and servers are not limited.
The resource sharing method may be executed on the server 120, with terminal 110 belonging to the resource publisher and terminals 130 and 140 to resource requesters; there are at least two requesters, and there may be more. When the publisher at terminal 110 publishes the resource, it provides a reference face picture. Each requester shoots a resource-request face picture according to the reference picture and sends a resource request carrying it to the server. The server 120 extracts features from each resource-request face picture to obtain resource-request micro-expression features, and inputs them into a trained micro-expression analysis model to obtain the emotion scores of requesters 130 and 140 under each candidate emotional state type. The server 120 likewise extracts features from the reference face picture, obtains the publisher's reference emotional state type (selected from the candidate types according to the reference picture's emotion scores), reads each requester's target emotion score under that reference type, splits the resource to be shared of publisher 110 accordingly, and determines the sharing results matched to requesters 130 and 140. Because emotion scores are computed under every candidate emotional state type, a requester whose presented emotion does not match the publisher's reference emotional state still has a score under the reference type and can still receive a corresponding resource, which increases requesters' chances of obtaining resources, improves sharing granularity, raises participation, and improves the application's user coverage and virality.
In one embodiment, as shown in FIG. 2, a resource sharing method is provided. Taking its application to the server in FIG. 1 as an example, the method includes the following steps:
Step 210: obtain resource-request face pictures uploaded by resource requesters, wherein each picture is a face picture shot according to the reference face picture published by a resource publisher, and there are at least two resource requesters.
A resource requester sends a resource request to the server carrying its requester identifier and its resource-request face picture. The request asks to share a resource published by the resource publisher; shared resources include, but are not limited to, virtual red packets, electronic tickets, points, electronic vouchers, game currency, and virtual items. The reference face picture is a face image shot by the publisher that serves as the benchmark for resource sharing: the closer the emotion in a requester's uploaded picture is to that of the reference picture, the larger the share of the resource that requester receives.
Specifically, a resource requester may receive the publisher's resource sharing notification through a group, a two-party chat interface, a video call interface, or a voice call interface, and obtain the displayed reference face picture.
Step 220: extract features from the resource-request face pictures to obtain resource-request micro-expression features, input them into a trained micro-expression analysis model for analysis, and obtain each requester's resource-request emotion score under each candidate emotional state type.
The micro-expression features reflect features of the face in the picture, including upper facial features, lower facial features, mouth features, and the like. Each face key point can be determined by an image feature key point extraction algorithm; the key facial positions are then obtained from the locations of the face key points, and the resource request micro-expression features are derived from the feature information of each key position. Micro-expression features can be represented by key point vector information; for example, the position information of the key points at the mouth corners serves as the mouth-corner micro-expression feature, and if the micro-expression analysis model determines from this position information that the mouth corners are raised, a higher emotion score is given to the emotional state type of happiness.
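The mouth-corner example above can be sketched as follows. The key-point names and the image-style coordinate convention (y grows downward) are illustrative assumptions, not the patent's actual extraction algorithm:

```python
def mouth_corner_feature(keypoints):
    """Return a signed 'corner lift' value: positive when the mouth corners
    sit above the mouth centre (suggesting a smile), negative otherwise."""
    left = keypoints["mouth_corner_left"]     # (x, y), with y growing downward
    right = keypoints["mouth_corner_right"]
    centre = keypoints["mouth_centre"]
    # Lift is how far each corner is raised relative to the mouth centre.
    return ((centre[1] - left[1]) + (centre[1] - right[1])) / 2.0

kps = {
    "mouth_corner_left": (0.40, 0.70),
    "mouth_corner_right": (0.60, 0.70),
    "mouth_centre": (0.50, 0.74),
}
lift = mouth_corner_feature(kps)   # positive: corners raised above the centre
```

A model consuming this feature could then map a larger positive lift to a higher emotion score under the happiness type.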
Specifically, the micro-expression analysis model may be obtained by training on a large number of face pictures with standard micro-expression labels, and its output is a resource request emotion score for each candidate emotional state type. The scores may also be calculated by other model-solving algorithms, which is not limited here. When there are multiple resource request face pictures, the resource request emotion score of each picture under each candidate emotional state type is obtained from the output of the micro-expression analysis model. The candidate emotional state types can be the types defined during model training and include multiple types, such as happiness, sadness, anger, surprise, and the like.
And 230, extracting the features of the reference face picture to obtain reference micro-expression features, inputting the reference micro-expression features into the trained micro-expression analysis model for analysis to obtain a reference emotional state type corresponding to the resource publisher, wherein the reference emotional state type is determined from the candidate emotional state types according to the emotional score corresponding to the reference face picture.
Specifically, the feature extraction algorithms used for the reference face picture and for the resource request face pictures can be the same or different; if they are the same, the consistency of micro-expression feature extraction is ensured, which further improves the reliability of the subsequent comparison. The reference micro-expression features are input into a trained micro-expression analysis model that is the same as the micro-expression analysis model in step 220, ensuring the comparability of the emotion scores. The emotional state type with the highest emotional score corresponding to the reference face picture may be used as the reference emotional state type; it represents the emotional state that the resource publisher expects resource requesters to present when sharing the resource.
and 240, acquiring target resource request emotion scores corresponding to the resource requesters under the reference emotion state types, performing resource segmentation on the resources to be shared of the resource publishers according to the target resource request emotion scores, and determining resource sharing results matched with the resource requesters.
Specifically, the higher the target resource request emotion score of a resource requester under the reference emotional state type, the closer the emotional state provided by that requester is to the emotional state of the reference face picture provided when the resource publisher published the resource, so more resources can be divided to it. Meanwhile, even if the target emotion provided by the resource requester does not match the reference emotional state of the resource publisher, a score is still obtained under the reference emotional state type and the corresponding resources can still be divided, which improves the possibility of the resource requester obtaining resources; here, the target emotion provided by the resource requester refers to the emotional state type in which the resource requester's emotion score is highest. The resource sharing method can sum the target resource request emotion scores to obtain a total resource request emotion score, and divide the resource to be shared according to the proportion of each requester's target resource request emotion score in the total. During the division, the target resource request emotion scores may be rounded, which improves the reasonableness of the allocation of the resource to be shared.
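The proportional division described above reduces to the following minimal sketch; the two-decimal rounding of each share is an assumption, since the exact formula is left open:

```python
def split_by_emotion_score(total_amount, target_scores):
    """Divide total_amount among requesters in proportion to their target
    resource request emotion scores under the reference emotional state type."""
    total_score = sum(target_scores.values())
    return {
        requester: round(total_amount * score / total_score, 2)
        for requester, score in target_scores.items()
    }

# Hypothetical scores for three requesters under the reference type.
shares = split_by_emotion_score(10.0, {"A": 80, "B": 60, "C": 60})
# A's higher score under the reference emotional state type earns a larger share.
```

A pure proportional split like this can leave rounding residue; the embodiment of step 240 below handles that by giving the remainder to the lowest scorer and adjusting.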
In one embodiment, it is judged whether the number of resource requesters has reached a preset number set by the resource publisher; if so, the resource to be shared of the resource publisher is divided, and if not, the server continues to wait for more resource requesters. When a preset waiting period times out, the resource to be shared is divided even if the number of resource requesters has not reached the preset number set by the resource publisher.
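The gate in this embodiment — split once the preset number of requesters is reached, or once the waiting period expires — can be sketched as below; the function and parameter names are illustrative:

```python
import time

def ready_to_split(requester_count, preset_count, start_time, wait_seconds):
    """True when enough resource requesters have joined, or when the
    preset waiting period has timed out."""
    enough = requester_count >= preset_count
    timed_out = time.monotonic() - start_time >= wait_seconds
    return enough or timed_out
```

For example, `ready_to_split(3, 5, time.monotonic(), 30.0)` stays `False` until either a fourth and fifth requester arrive or 30 seconds elapse.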
The resource sharing method obtains resource request face pictures uploaded by resource requesting parties, where each resource request face picture is a face picture shot according to a reference face picture issued by a resource issuing party and there are at least two resource requesting parties. Feature extraction is performed on the resource request face pictures to obtain resource request micro-expression features, which are input into a trained micro-expression analysis model for analysis to obtain the resource request emotion scores of each resource requester under each candidate emotional state type. Feature extraction is performed on the reference face picture to obtain reference micro-expression features, which are input into the trained micro-expression analysis model for analysis to obtain the reference emotional state type corresponding to the resource publisher, the reference emotional state type being determined from the candidate emotional state types according to the emotional scores corresponding to the reference face picture. The target resource request emotion score of each resource requester under the reference emotional state type is then obtained, the resource to be shared of the resource publisher is divided according to the target resource request emotion scores, and a resource sharing result matched with each resource requester is determined. Even if the target emotion provided by a resource requester does not match the reference emotional state of the resource publisher, a score is still obtained under the reference emotional state type and the corresponding resources can still be divided. The method thus improves the possibility of the resource requester obtaining resources, makes resource sharing finer-grained, raises the participation enthusiasm of resource requesters, and improves the user coverage and spreading performance of the application.
In one embodiment, step 240 comprises: summing the target resource request emotion scores of the resource requesters under the reference emotional state type to obtain a total resource request emotion score; calculating the resource request emotion similarity of each resource requester from its target resource request emotion score and the total resource request emotion score; sequencing the resource requesters in descending order of target resource request emotion score and obtaining the resource allocation order of each resource requester from the sequencing result; calculating, in the allocation order, the resource sharing result matched with each resource requester according to its resource request emotion similarity, and accumulating the resource sharing results to obtain the currently allocated resource amount; determining the resource sharing result of the last resource requester as the difference between the resource amount to be shared and the currently allocated resource amount; and, when the resource quota of the last resource requester is greater than that of the second-to-last resource requester, adjusting the quotas so that the quota of the second-to-last resource requester is greater than or equal to that of the last resource requester.
Specifically, if the resource requesters are A, B and C and the reference emotional state type is r, the target resource request emotion scores of the resource requesters under the reference emotional state type are summed to obtain the total resource request emotion score Sum(r.configValue) = A.r.configValue + B.r.configValue + C.r.configValue. The resource request emotion similarity of each resource requester is then calculated from its target resource request emotion score and the total; the specific formula may be customized. In one embodiment, the resource request emotion similarity of resource requester P is P.similar = (P.r.configValue × 1000)/(Sum(r.configValue) × 1000). The resource requesters are sequenced in descending order of target resource request emotion score, and the resource allocation order of each requester is obtained from the sequencing result; for example, if A.r.configValue > B.r.configValue > C.r.configValue, the allocation order is A, B, C. The resource sharing result matched with each resource requester is then calculated in that order from its resource request emotion similarity, for example as P.similar × sumMoney, where sumMoney is the total amount of the resource issued by the resource issuer. It can be understood that the formula can be modified and customized.
Because rounding may occur during calculation, the resource requesters with high scores are allocated first, the last remaining quota is allocated to the resource requester with the lowest score, and the resource quota of each resource requester is adjusted according to its score, so that the allocation is more reasonable. When the quota of the second-to-last resource requester is adjusted to be greater than or equal to that of the last resource requester, the adjustment algorithm may be customized; for example, the quotas of the second-to-last and the last resource requesters are added and averaged, and the average is used as the target quota for both. After the adjustment, the quota of the last resource requester no longer exceeds that of the requester before it.
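A sketch of this whole allocation step — descending sort, rounded proportional shares for all but the last requester, remainder to the lowest scorer, then the averaging adjustment when rounding leaves the last share larger than the second-to-last. Function names and the two-decimal rounding are illustrative assumptions:

```python
def allocate_shares(sum_money, scores):
    """Divide sum_money among requesters by their target resource request
    emotion scores, giving the rounding remainder to the lowest scorer."""
    order = sorted(scores, key=scores.get, reverse=True)   # highest score first
    total = sum(scores.values())
    result = {}
    allocated = 0.0
    for requester in order[:-1]:
        share = round(sum_money * scores[requester] / total, 2)
        result[requester] = share
        allocated = round(allocated + share, 2)
    # The lowest scorer receives whatever quota remains after rounding.
    last = order[-1]
    result[last] = round(sum_money - allocated, 2)
    # If rounding made the last share exceed the second-to-last, average the two.
    if len(order) >= 2 and result[last] > result[order[-2]]:
        avg = round((result[last] + result[order[-2]]) / 2, 2)
        result[order[-2]] = avg
        result[last] = avg
    return result

shares = allocate_shares(10.0, {"A": 50, "B": 30, "C": 20})
# A, the highest scorer, receives the largest share; C receives the remainder.
```

Note that, as in the text, the averaging adjustment may slightly change the grand total; it trades exact conservation for the ordering guarantee.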
In this embodiment, resources are allocated according to the target resource request emotion scores of the resource requesters under the reference emotional state type, and the allocation result is adjusted, further improving the rationality and reliability of the resource allocation.
In one embodiment, the method further comprises: inputting the resource request face pictures into a trained facial analysis model for analysis to obtain first facial feature point attribute information corresponding to each resource requester; inputting the first facial feature point attribute information into a trained personality characteristic analysis model for analysis to obtain a first personality characteristic score for each resource requester under each candidate personality characteristic, and outputting the target personality type of each resource requester according to the first personality characteristic scores; inputting the reference face picture into the trained facial analysis model for analysis to obtain second facial feature point attribute information corresponding to the resource publisher; inputting the second facial feature point attribute information into the trained personality characteristic analysis model for analysis to obtain a second personality characteristic score for the resource publisher under each candidate personality characteristic, and outputting the reference personality type of the resource publisher according to the second personality characteristic scores; and screening out, according to the target personality types, the resource requesters matching the reference personality type as target resource requesters. The step in 240 of obtaining the target resource request emotion score of each resource requester under the reference emotional state type then comprises: obtaining the target resource request emotion score of each target resource requester under the reference emotional state type.
Specifically, the facial feature point attribute information is feature information calculated from the parts of the face, and its types can be customized as required. In one embodiment, the facial analysis model takes as input a base64 character string converted from a single picture and outputs 22 types of facial feature point attribute information, including eyebrow-to-eye distance, eyebrow size, distance between the eyes, eye size, eye shape, nose size, mouth size, lip thickness, face length, hairline height, nostril exposure, nose bridge width, nose wing thickness, eyebrow shape, nose root width, comparison of eyebrow head and eyebrow tail thickness, eyebrow tail shape, chin shape, lower-face shape, left and right face size, and the like.
The personality characteristic analysis model determines the personality characteristic scores of a resource request face picture under the candidate personality characteristics from the facial feature point attribute information, where the types of candidate personality characteristics can be customized. In one embodiment, the personality characteristic analysis model takes as input a base64 string converted from a single picture and outputs 16 personality traits with corresponding scores, including gregariousness, intelligence, emotional stability, dominance, excitability, perseverance, boldness, sensitivity, suspiciousness, practicality, sophistication, tranquility, experimentation, independence, self-control, and calmness.
The target personality type of each resource requester is output according to its first personality characteristic scores under the candidate personality characteristics; in one embodiment, the candidate personality characteristic with the highest first personality characteristic score is used as the target personality type of the resource requester. The reference personality type of the resource publisher is calculated in the same way; it represents the personality type, reflected in a resource request face picture, that the resource publisher expects of parties sharing the resource. The resource sharing method screens the resource requesters according to the reference personality type of the resource publisher, so that only matching resource requesters can share resources. This raises the difficulty coefficient of resource sharing and the sense of honor of those who obtain a share; meanwhile, because part of the resource requesters are filtered out, the quota available to each remaining requester increases, which further raises the enthusiasm of high-quality resource requesters for completing resource sharing tasks and the reasonableness of resource allocation.
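With illustrative names, the screening above reduces to taking the arg-max trait for each party and keeping only requesters whose top trait matches the publisher's reference personality type:

```python
def top_trait(trait_scores):
    """Return the candidate personality characteristic with the highest score."""
    return max(trait_scores, key=trait_scores.get)

def screen_requesters(requester_traits, publisher_traits):
    """Keep only requesters whose target personality type matches the
    publisher's reference personality type."""
    reference_type = top_trait(publisher_traits)
    return [
        requester
        for requester, traits in requester_traits.items()
        if top_trait(traits) == reference_type
    ]

# Hypothetical scores for a publisher and two requesters.
publisher = {"dominance": 0.9, "sensitivity": 0.4}
requesters = {
    "A": {"dominance": 0.8, "sensitivity": 0.3},
    "B": {"dominance": 0.2, "sensitivity": 0.7},
}
targets = screen_requesters(requesters, publisher)   # only A's top trait matches
```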
In one embodiment, the method further comprises: obtaining the first personality characteristic scores of each resource requester under the candidate personality characteristics; obtaining the second personality characteristic scores of the resource publisher under the candidate personality characteristics, and using the candidate personality characteristic with the highest second personality characteristic score as the reference personality type of the resource publisher; obtaining the target personality characteristic score of each resource requester under the reference personality type, and calculating a comprehensive score for each resource requester from its target personality characteristic score and target resource request emotion score; and dividing the resource to be shared of the resource publisher according to the comprehensive scores to determine the resource sharing result matched with each resource requester.
Specifically, the personality characteristic scores may be calculated by a trained personality characteristic analysis model, which analyzes a face picture or features extracted from it. In one embodiment, the input of the personality characteristic analysis model is the facial feature point attribute information of the face picture, obtained by inputting the face picture into a trained facial analysis model for analysis.
If the reference personality type of the resource publisher is dominance, the personality characteristic score of each resource requester in dominance is obtained, a comprehensive score is calculated for each resource requester from its target personality characteristic score and target resource request emotion score, and the resource to be shared of the resource publisher is divided according to the comprehensive scores. The personality characteristic score reflects fixed attribute information of the resource requester, since facial structure cannot be changed, while the emotion score reflects dynamic information, since making different expressions changes the emotion score; combining the two makes the resource allocation result more reasonable.
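A sketch of the comprehensive score; the equal weighting of the fixed (personality) and dynamic (emotion) components is an assumption, since the combination formula is left open:

```python
def comprehensive_score(personality_score, emotion_score, weight=0.5):
    """Combine the fixed personality characteristic score with the dynamic
    target resource request emotion score; `weight` is an assumed parameter."""
    return weight * personality_score + (1 - weight) * emotion_score

# Comprehensive scores for two hypothetical requesters; the resource to be
# shared would then be divided in proportion to these values.
scores = {
    "A": comprehensive_score(0.8, 0.6),   # strong trait, moderate emotion
    "B": comprehensive_score(0.4, 0.9),   # weak trait, strong emotion
}
```

Tuning `weight` shifts how much the allocation rewards the unchangeable facial attributes versus the expression the requester actually produces.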
In one embodiment, the training step of the micro-expression analysis model comprises: obtaining training face pictures of users with different personality characteristics and corresponding micro-expression labels, where a micro-expression label comprises multiple candidate emotional state types and corresponding standard emotion scores; performing feature extraction on the training face pictures to obtain training micro-expression features; inputting the training micro-expression features into the micro-expression analysis model to obtain the predicted emotion score for each candidate emotional state type; calculating a prediction loss value from the standard emotion scores and the predicted emotion scores; and adjusting the parameters of the micro-expression analysis model according to the prediction loss value, the trained micro-expression analysis model being obtained when the prediction loss value reaches a convergence condition.
Specifically, face pictures of different users can be collected and each input into the personality characteristic analysis model to determine the corresponding personality characteristic; face pictures of users with different personality characteristics are then used as training face pictures, ensuring the completeness of the training data. The micro-expression labels are the standard emotion scores of a training face picture under the different candidate emotional state types, and can be obtained through manual labeling or determined against a standard. After the training micro-expression features are extracted from a training face picture, they are input into the micro-expression analysis model to obtain the predicted emotion score for each candidate emotional state type. Each standard emotion score is compared with the matching predicted emotion score and their difference is calculated (the specific calculation can be customized); the differences over all candidate emotional state types are aggregated into the prediction loss value. The parameters of the micro-expression analysis model are adjusted continuously until the prediction loss value reaches a convergence condition, which can also be customized; for example, the convergence condition may be that the prediction loss value is smaller than a preset threshold, at which point the trained micro-expression analysis model is obtained.
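The training procedure reduces to the familiar loop below. The one-parameter linear scorer, the synthetic data, and the squared-error loss are illustrative assumptions — the text only requires some prediction loss driven below a convergence threshold:

```python
import random

random.seed(0)
# Synthetic (feature, standard emotion score) pairs with true relation
# score = 2 * feature; stand-ins for (training micro-expression feature,
# standard emotion score from the micro-expression label).
data = [(x, 2.0 * x) for x in (random.uniform(-1, 1) for _ in range(20))]

w = 0.0                      # model parameter to be adjusted
lr, threshold = 0.1, 1e-6    # learning rate and convergence threshold
loss = float("inf")
while loss >= threshold:     # convergence condition: loss below threshold
    # Prediction loss: mean squared difference between predicted and
    # standard emotion scores over the training data.
    loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
    # Adjust the parameter against the loss gradient.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
# On exit, w has been driven close to the true coefficient 2.0.
```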
In one embodiment, the method further comprises: obtaining the emotional state type with the highest resource request emotion score for a resource request face picture as the current emotional state type; obtaining the facial feature point attribute information of the resource request face picture; and returning the current emotional state type and the facial feature point attribute information to the terminal of the corresponding resource requester, so that the terminal displays them on the resource request face picture.
Specifically, the emotional state type with the highest resource request emotion score for the currently acquired resource request face picture can be calculated in real time to obtain the current emotional state type, and the facial feature point attribute information of the picture can likewise be calculated in real time. Both are returned to the terminal of the corresponding resource requester, which displays them in real time, so that the resource requester can adjust its facial expression according to the displayed information and bring the resource request face picture closer to the reference emotional state of the reference face picture. The reference emotional state type of the reference face picture can also be displayed on the resource requester's interface, making it convenient for the resource requester to adjust its facial expression accordingly.
In this embodiment, the current emotional state type and the facial feature point attribute information are displayed on the resource request face picture, so that the resource requester can adjust its facial expression in time, bringing the current emotional state type closer to the reference emotional state type and increasing the share of the resource obtained.
In a specific embodiment, as shown in fig. 3, a resource publishing interface is displayed on the terminal of the resource publisher. Through the resource publishing interface, the server receives the resource quota to be shared and the preset resource partition number set by the resource publisher. The resource publisher takes a picture as the reference face picture and sends it to the server; the server performs feature extraction on the reference face picture to obtain the reference micro-expression features and inputs them into the trained micro-expression analysis model for analysis, obtaining the reference emotional state type and related information for the resource publisher. Information corresponding to the reference face picture, including expression feature information, facial feature information and emotional state information, is displayed on the resource publishing interface. The resource publisher sends the resource publishing information to each friend or displays it on a platform, where the platform includes groups, dynamic-information interfaces, and the like.
As shown in fig. 4, a resource requester can view the information of other resource requesters who have already uploaded resource request face pictures, and trigger picture shooting and generate a resource request through a resource request key. The terminal of the resource requester sends the resource request to the server, and the server calculates, from the resource request face picture in the request, the resource request emotion score of the requester under each candidate emotional state type. When the total number of resource requesters reaches the preset resource partition number, the server obtains the target resource request emotion score of each resource requester under the reference emotional state type, divides the resource to be shared of the resource publisher according to the target resource request emotion scores, and determines the resource sharing result matched with each resource requester. The information of the resource requester, including its expression feature information, is displayed on the resource request interface of each resource requester, and the resource sharing results of the resource requesters are displayed on the resource request interface, as shown in fig. 5, making it convenient for resource requesters to view the resource sharing results of other users.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least a portion of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided an apparatus for resource sharing, including: an obtaining module 310, a resource request emotion score determining module 320, a reference emotion state type determining module 330, and a first resource sharing module 340, wherein:
The obtaining module 310 is configured to obtain resource request face pictures uploaded by resource requesting parties, where each resource request face picture is a face picture shot according to a reference face picture issued by a resource issuing party, and there are at least two resource requesting parties.
and the resource request emotion score determining module 320 is used for extracting the characteristics of the resource request face picture to obtain resource request micro-expression characteristics, inputting the resource request micro-expression characteristics into the trained micro-expression analysis model for analysis, and obtaining the resource request emotion scores corresponding to the resource requesting party in each candidate emotion state type.
and the reference emotional state type determining module 330 is configured to perform feature extraction on the reference face picture to obtain a reference micro-expression feature, and input the reference micro-expression feature into the trained micro-expression analysis model for analysis to obtain a reference emotional state type corresponding to the resource publisher, where the reference emotional state type is an emotional state type determined according to the emotional score corresponding to the reference face picture from the candidate emotional state types.
The first resource sharing module 340 is configured to obtain a target resource request emotion score corresponding to each resource requester in the reference emotion state type, perform resource segmentation on the resource to be shared of the resource publisher according to the target resource request emotion score, and determine a resource sharing result matched with each resource requester.
in an embodiment, the first resource sharing module 340 is further configured to count target resource request emotion scores corresponding to the resource requesters in the reference emotion state type to obtain a total resource request emotion score, and calculate, according to the target resource request emotion scores corresponding to the resource requesters in the reference emotion state type and the total resource request emotion score, a resource request emotion similarity corresponding to the resource requesters; sequencing the resource requesters according to the sequence of the emotion scores of the target resource requests from large to small, and obtaining the resource allocation sequence of each resource requester according to the sequencing result; and sequentially calculating resource sharing results matched with each resource requester according to the resource allocation sequence and the resource request emotion similarity corresponding to each resource requester, accumulating the resource sharing results to obtain the current allocated resource, determining the resource sharing result corresponding to the last resource requester as the difference value between the resource quota to be shared and the current allocated resource quota, and adjusting the resource quota corresponding to the second last resource requester when the resource quota corresponding to the last resource requester is greater than the resource quota corresponding to the second last resource requester so that the resource quota corresponding to the second last resource requester is greater than or equal to the resource quota corresponding to the last resource requester.
In one embodiment, the apparatus further comprises: a target resource requester screening module 350, configured to input the resource request face pictures into a trained facial analysis model for analysis to obtain first facial feature point attribute information corresponding to each resource requester; input the first facial feature point attribute information into a trained personality characteristic analysis model for analysis to obtain a first personality characteristic score for each resource requester under each candidate personality characteristic, and output the target personality type of each resource requester according to the first personality characteristic scores; input the reference face picture into the trained facial analysis model for analysis to obtain second facial feature point attribute information corresponding to the resource publisher; input the second facial feature point attribute information into the trained personality characteristic analysis model for analysis to obtain a second personality characteristic score for the resource publisher under each candidate personality characteristic, and output the reference personality type of the resource publisher according to the second personality characteristic scores; and screen out, according to the target personality type of each resource requester, the resource requesters matching the reference personality type as target resource requesters.
The first resource sharing module 340 is further configured to obtain the target resource request emotion score corresponding to each target resource requester under the reference emotional state type.
In one embodiment, the apparatus further comprises a second resource sharing module 360, configured to: obtain the first personality characteristic score corresponding to each resource requester under each candidate personality characteristic; obtain the second personality characteristic score corresponding to the resource publisher under each candidate personality characteristic, and take the candidate personality characteristic with the highest second personality characteristic score as the reference personality type corresponding to the resource publisher; obtain the target personality characteristic score corresponding to each resource requester under the reference personality type, and calculate the comprehensive score corresponding to each resource requester according to the target personality characteristic score and the target resource request emotion score; and perform resource segmentation on the resource to be shared of the resource publisher according to the comprehensive scores, determining the resource sharing result matched with each resource requester.
In one embodiment, the apparatus further comprises a training module 370, configured to: obtain training face pictures of users with different personality characteristics and the corresponding micro-expression labels, where the micro-expression labels include multiple candidate emotional state types and the corresponding standard emotion scores; perform feature extraction on the training face pictures to obtain training micro-expression features; input the training micro-expression features into the micro-expression analysis model to obtain the predicted emotion score corresponding to each candidate emotional state type; calculate a prediction loss value according to the standard emotion scores and the predicted emotion scores; adjust the parameters of the micro-expression analysis model according to the prediction loss value; and obtain the trained micro-expression analysis model when the prediction loss value satisfies a convergence condition.
In one embodiment, the apparatus further comprises an information returning module 380, configured to: take the emotional state type with the highest resource request emotion score corresponding to a resource request face picture as the current emotional state type; obtain the facial feature point attribute information corresponding to the resource request face picture; and return the current emotional state type and the facial feature point attribute information to the terminal of the corresponding resource requester, so that the terminal displays the corresponding current emotional state type and facial feature point attribute information on the resource request face picture.
For specific limitations of the resource sharing apparatus, reference may be made to the limitations of the resource sharing method above; details are not repeated here. Each module in the resource sharing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores data such as face pictures. The network interface of the computer device communicates with external terminals over a network connection. The computer program, when executed by the processor, implements a resource sharing method.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, there is provided a computer device comprising a memory storing a computer program and a processor that, when executing the computer program, implements the following steps: acquiring resource request face pictures uploaded by resource requesters, where the resource request face pictures are face pictures shot according to a reference face picture issued by a resource publisher, and there are at least two resource requesters; performing feature extraction on the resource request face pictures to obtain resource request micro-expression features, and inputting the resource request micro-expression features into a trained micro-expression analysis model for analysis to obtain the resource request emotion score corresponding to each resource requester under each candidate emotional state type; performing feature extraction on the reference face picture to obtain reference micro-expression features, and inputting the reference micro-expression features into the trained micro-expression analysis model for analysis to obtain the reference emotional state type corresponding to the resource publisher, the reference emotional state type being the emotional state type determined from the candidate emotional state types according to the emotion scores corresponding to the reference face picture; and obtaining the target resource request emotion score corresponding to each resource requester under the reference emotional state type, performing resource segmentation on the resource to be shared of the resource publisher according to the target resource request emotion scores, and determining the resource sharing result matched with each resource requester.
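The end-to-end flow of these steps can be illustrated with a minimal Python sketch. All names here are illustrative; `extract_features` and `model` are assumed callables standing in for the trained feature extractor and micro-expression analysis model described above, not an actual implementation:

```python
def share_resources(requester_pics, reference_pic, quota, extract_features, model):
    """Sketch of the claimed flow: score each requester picture under every
    candidate emotional state, pick the publisher's reference emotional state
    type, then split the quota by each requester's score under that state."""
    # Per-requester emotion scores for each candidate emotional state type.
    requester_scores = {r: model(extract_features(pic))
                        for r, pic in requester_pics.items()}
    # Reference emotional state type: the state with the highest emotion
    # score for the publisher's reference face picture.
    ref_scores = model(extract_features(reference_pic))
    ref_state = max(ref_scores, key=ref_scores.get)
    # Target resource request emotion score: each requester's score under
    # the reference emotional state type.
    target = {r: scores[ref_state] for r, scores in requester_scores.items()}
    total = sum(target.values())
    # Resource segmentation in proportion to the target scores.
    return {r: quota * t / total for r, t in target.items()}
```

With stub callables that pass score dictionaries through unchanged, a publisher whose reference picture scores highest on "happy" would split the quota in proportion to each requester's "happy" score.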
In an embodiment, performing resource segmentation on the resource to be shared of the resource publisher according to the target resource request emotion scores and determining the resource sharing result matched with each resource requester includes: counting the target resource request emotion scores corresponding to the resource requesters under the reference emotional state type to obtain a total resource request emotion score, and calculating the resource request emotion similarity corresponding to each resource requester according to that requester's target resource request emotion score and the total resource request emotion score; sorting the resource requesters in descending order of target resource request emotion score, and obtaining the resource allocation order of each resource requester from the sorting result; sequentially calculating the resource sharing result matched with each resource requester according to the resource allocation order and the corresponding resource request emotion similarity, and accumulating the resource sharing results to obtain the currently allocated resource quota; determining the resource sharing result corresponding to the last resource requester as the difference between the resource quota to be shared and the currently allocated resource quota; and, when the resource quota corresponding to the last resource requester is greater than that of the second-to-last resource requester, adjusting the second-to-last requester's quota so that it is greater than or equal to the last requester's quota.
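The allocation rule above can be sketched as follows. Function and variable names are illustrative, and the averaging adjustment at the end is one assumed way to satisfy the "greater than or equal to" condition while keeping the total unchanged; the patent does not fix the exact arithmetic:

```python
def allocate_quota(total_quota, emotion_scores):
    """Allocate total_quota among resource requesters (at least two) in
    proportion to their target resource request emotion scores."""
    total_score = sum(emotion_scores.values())
    # Resource allocation order: descending target emotion score.
    order = sorted(emotion_scores, key=emotion_scores.get, reverse=True)
    shares, allocated = {}, 0.0
    # All but the last requester receive a proportional share
    # (emotion similarity = score / total_score).
    for requester in order[:-1]:
        share = total_quota * emotion_scores[requester] / total_score
        shares[requester] = share
        allocated += share
    # The last requester receives the remainder of the quota.
    last, second_last = order[-1], order[-2]
    shares[last] = total_quota - allocated
    # Adjustment: the second-to-last share must be >= the last share;
    # averaging the two preserves the total while restoring the order.
    if shares[last] > shares[second_last]:
        mean = (shares[last] + shares[second_last]) / 2.0
        shares[second_last] = shares[last] = mean
    return shares
```

Giving the last requester the remainder rather than its own proportional share guarantees the shares always sum exactly to the quota, which is why the final ordering adjustment is needed.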
In one embodiment, the processor, when executing the computer program, further performs the steps of: inputting the resource request face pictures into a trained facial analysis model for analysis to obtain the first facial feature point attribute information corresponding to each resource requester; inputting the first facial feature point attribute information into a trained personality characteristic analysis model for analysis to obtain the first personality characteristic score corresponding to each resource requester under each candidate personality characteristic, and outputting the target personality type corresponding to each resource requester according to the first personality characteristic scores; inputting the reference face picture into the trained facial analysis model for analysis to obtain the second facial feature point attribute information corresponding to the resource publisher; inputting the second facial feature point attribute information into the trained personality characteristic analysis model for analysis to obtain the second personality characteristic score corresponding to the resource publisher under each candidate personality characteristic, and outputting the reference personality type corresponding to the resource publisher according to the second personality characteristic scores; and screening out the resource requesters matching the reference personality type as target resource requesters according to the target personality type corresponding to each resource requester.
Obtaining the target resource request emotion score corresponding to each resource requester under the reference emotional state type then includes: obtaining the target resource request emotion score corresponding to each target resource requester under the reference emotional state type.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring the first personality characteristic score corresponding to each resource requester under each candidate personality characteristic; acquiring the second personality characteristic score corresponding to the resource publisher under each candidate personality characteristic, and taking the candidate personality characteristic with the highest second personality characteristic score as the reference personality type corresponding to the resource publisher; acquiring the target personality characteristic score corresponding to each resource requester under the reference personality type, and calculating the comprehensive score corresponding to each resource requester according to the target personality characteristic score and the target resource request emotion score; and performing resource segmentation on the resource to be shared of the resource publisher according to the comprehensive scores, determining the resource sharing result matched with each resource requester.
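The patent does not state how the personality characteristic score and the emotion score are combined into a comprehensive score; a weighted sum is one plausible reading, sketched here with assumed names and an assumed equal weight:

```python
def comprehensive_scores(personality_scores, emotion_scores, weight=0.5):
    """Combine each requester's target personality characteristic score and
    target resource request emotion score into one comprehensive score
    (weighted-sum form is an assumption, not taken from the patent)."""
    return {r: weight * personality_scores[r] + (1.0 - weight) * emotion_scores[r]
            for r in personality_scores}

def split_by_score(total_quota, scores):
    """Resource segmentation in proportion to the comprehensive scores."""
    total = sum(scores.values())
    return {r: total_quota * s / total for r, s in scores.items()}
```

For example, with equal weighting, a requester with personality score 80 and emotion score 40 receives a comprehensive score of 60, and the quota is then split proportionally to those comprehensive scores.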
In one embodiment, the training of the micro-expression analysis model comprises: acquiring training face pictures of users with different personality characteristics and the corresponding micro-expression labels, where the micro-expression labels include multiple candidate emotional state types and the corresponding standard emotion scores; performing feature extraction on the training face pictures to obtain training micro-expression features, and inputting the training micro-expression features into the micro-expression analysis model to obtain the predicted emotion score corresponding to each candidate emotional state type; and calculating a prediction loss value according to the standard emotion scores and the predicted emotion scores, adjusting the parameters of the micro-expression analysis model according to the prediction loss value, and obtaining the trained micro-expression analysis model when the prediction loss value satisfies a convergence condition.
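As a toy stand-in for this training step, the loop below uses a linear model and squared-error loss as assumed concrete choices (the patent names neither the model family nor the loss function), stopping when the decrease in the prediction loss falls below a tolerance, i.e. the convergence condition:

```python
import numpy as np

def train_micro_expression_model(features, standard_scores,
                                 lr=0.1, tol=1e-6, max_iter=10000):
    """features: (n, d) training micro-expression features.
    standard_scores: (n, k) standard emotion scores, one column per
    candidate emotional state type. Returns weights and final loss."""
    n, d = features.shape
    k = standard_scores.shape[1]
    W = np.zeros((d, k))                        # model parameters
    prev_loss = float("inf")
    for _ in range(max_iter):
        predicted = features @ W                # predicted emotion scores
        err = predicted - standard_scores
        loss = float(np.mean(err ** 2))         # prediction loss value
        if prev_loss - loss < tol:              # convergence condition reached
            break
        W -= lr * (2.0 / n) * features.T @ err  # adjust model parameters
        prev_loss = loss
    return W, loss
```

A real micro-expression model would be a deep network trained with an optimizer, but the structure — predict, compute loss against the standard scores, adjust parameters, stop at convergence — matches the steps described above.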
In one embodiment, the processor, when executing the computer program, further performs the steps of: taking the emotional state type with the highest resource request emotion score corresponding to a resource request face picture as the current emotional state type; acquiring the facial feature point attribute information corresponding to the resource request face picture; and returning the current emotional state type and the facial feature point attribute information to the terminal of the corresponding resource requester, so that the terminal displays the corresponding current emotional state type and facial feature point attribute information on the resource request face picture.
In one embodiment, a computer-readable storage medium is provided on which a computer program is stored; when executed by a processor, the computer program performs the steps of: acquiring resource request face pictures uploaded by resource requesters, where the resource request face pictures are face pictures shot according to a reference face picture issued by a resource publisher, and there are at least two resource requesters; performing feature extraction on the resource request face pictures to obtain resource request micro-expression features, and inputting the resource request micro-expression features into a trained micro-expression analysis model for analysis to obtain the resource request emotion score corresponding to each resource requester under each candidate emotional state type; performing feature extraction on the reference face picture to obtain reference micro-expression features, and inputting the reference micro-expression features into the trained micro-expression analysis model for analysis to obtain the reference emotional state type corresponding to the resource publisher, the reference emotional state type being the emotional state type determined from the candidate emotional state types according to the emotion scores corresponding to the reference face picture; and obtaining the target resource request emotion score corresponding to each resource requester under the reference emotional state type, performing resource segmentation on the resource to be shared of the resource publisher according to the target resource request emotion scores, and determining the resource sharing result matched with each resource requester.
In an embodiment, performing resource segmentation on the resource to be shared of the resource publisher according to the target resource request emotion scores and determining the resource sharing result matched with each resource requester includes: counting the target resource request emotion scores corresponding to the resource requesters under the reference emotional state type to obtain a total resource request emotion score, and calculating the resource request emotion similarity corresponding to each resource requester according to that requester's target resource request emotion score and the total resource request emotion score; sorting the resource requesters in descending order of target resource request emotion score, and obtaining the resource allocation order of each resource requester from the sorting result; sequentially calculating the resource sharing result matched with each resource requester according to the resource allocation order and the corresponding resource request emotion similarity, and accumulating the resource sharing results to obtain the currently allocated resource quota; determining the resource sharing result corresponding to the last resource requester as the difference between the resource quota to be shared and the currently allocated resource quota; and, when the resource quota corresponding to the last resource requester is greater than that of the second-to-last resource requester, adjusting the second-to-last requester's quota so that it is greater than or equal to the last requester's quota.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: inputting the resource request face pictures into a trained facial analysis model for analysis to obtain the first facial feature point attribute information corresponding to each resource requester; inputting the first facial feature point attribute information into a trained personality characteristic analysis model for analysis to obtain the first personality characteristic score corresponding to each resource requester under each candidate personality characteristic, and outputting the target personality type corresponding to each resource requester according to the first personality characteristic scores; inputting the reference face picture into the trained facial analysis model for analysis to obtain the second facial feature point attribute information corresponding to the resource publisher; inputting the second facial feature point attribute information into the trained personality characteristic analysis model for analysis to obtain the second personality characteristic score corresponding to the resource publisher under each candidate personality characteristic, and outputting the reference personality type corresponding to the resource publisher according to the second personality characteristic scores; and screening out the resource requesters matching the reference personality type as target resource requesters according to the target personality type corresponding to each resource requester.
Obtaining the target resource request emotion score corresponding to each resource requester under the reference emotional state type then includes: obtaining the target resource request emotion score corresponding to each target resource requester under the reference emotional state type.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring the first personality characteristic score corresponding to each resource requester under each candidate personality characteristic; acquiring the second personality characteristic score corresponding to the resource publisher under each candidate personality characteristic, and taking the candidate personality characteristic with the highest second personality characteristic score as the reference personality type corresponding to the resource publisher; acquiring the target personality characteristic score corresponding to each resource requester under the reference personality type, and calculating the comprehensive score corresponding to each resource requester according to the target personality characteristic score and the target resource request emotion score; and performing resource segmentation on the resource to be shared of the resource publisher according to the comprehensive scores, determining the resource sharing result matched with each resource requester.
In one embodiment, the training of the micro-expression analysis model comprises: acquiring training face pictures of users with different personality characteristics and the corresponding micro-expression labels, where the micro-expression labels include multiple candidate emotional state types and the corresponding standard emotion scores; performing feature extraction on the training face pictures to obtain training micro-expression features, and inputting the training micro-expression features into the micro-expression analysis model to obtain the predicted emotion score corresponding to each candidate emotional state type; and calculating a prediction loss value according to the standard emotion scores and the predicted emotion scores, adjusting the parameters of the micro-expression analysis model according to the prediction loss value, and obtaining the trained micro-expression analysis model when the prediction loss value satisfies a convergence condition.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: taking the emotional state type with the highest resource request emotion score corresponding to a resource request face picture as the current emotional state type; acquiring the facial feature point attribute information corresponding to the resource request face picture; and returning the current emotional state type and the facial feature point attribute information to the terminal of the corresponding resource requester, so that the terminal displays the corresponding current emotional state type and facial feature point attribute information on the resource request face picture.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application and are described in relative detail, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A resource sharing method, the method comprising:
acquiring resource request face pictures uploaded by resource requesting parties, wherein the resource request face pictures are face pictures shot according to a reference face picture issued by a resource issuing party, and there are at least two resource requesting parties;
extracting the characteristics of the resource request face picture to obtain resource request micro-expression characteristics, inputting the resource request micro-expression characteristics into a trained micro-expression analysis model for analysis, and obtaining resource request emotion scores corresponding to the resource requester under each candidate emotion state type;
Extracting features of the reference face picture to obtain reference micro-expression features, inputting the reference micro-expression features into the trained micro-expression analysis model for analysis to obtain a reference emotional state type corresponding to the resource publisher, wherein the reference emotional state type is an emotional state type determined according to an emotional score corresponding to the reference face picture from candidate emotional state types;
And acquiring a target resource request emotion score corresponding to each resource request party under the reference emotion state type, performing resource segmentation on the resource to be shared of the resource issuing party according to the target resource request emotion score, and determining a resource sharing result matched with each resource request party.
2. The method according to claim 1, wherein performing resource segmentation on the resource to be shared of the resource publisher according to the target resource request emotion scores and determining the resource sharing result matched with each resource requesting party comprises:
Counting target resource request emotion scores corresponding to the resource requesters in the reference emotion state type to obtain total resource request emotion scores, and calculating resource request emotion similarity corresponding to the resource requesters according to the target resource request emotion scores corresponding to the resource requesters in the reference emotion state type and the total resource request emotion scores;
Sequencing the resource requesters according to the sequence of the emotion scores of the target resource requests from large to small, and obtaining the resource allocation sequence of each resource requester according to the sequencing result;
Sequentially calculating resource sharing results matched with each resource requester according to the resource allocation sequence and the resource request emotion similarity corresponding to each resource requester, and accumulating the resource sharing results to obtain the current allocated resources;
determining the resource sharing result corresponding to the last resource requesting party as the difference between the resource quota to be shared and the currently allocated resource quota;
when the resource limit corresponding to the last resource requester is greater than the resource limit corresponding to the second last resource requester, the resource limit corresponding to the second last resource requester is adjusted to make the resource limit corresponding to the second last resource requester greater than or equal to the resource limit corresponding to the last resource requester.
3. The method of claim 1, further comprising:
Inputting the resource request face picture into a trained face analysis model for analysis to obtain first face feature point attribute information corresponding to each resource request party;
inputting the first facial feature point attribute information into a trained personality characteristic analysis model for analysis to obtain first personality characteristic scores corresponding to the resource requesters under the candidate personality characteristics, and outputting target personality types corresponding to the resource requesters according to the first personality characteristic scores;
inputting the reference human face picture into the trained facial analysis model for analysis to obtain second facial feature point attribute information corresponding to the resource publisher;
inputting the second facial feature point attribute information into the trained personality characteristic analysis model for analysis to obtain second personality characteristic scores corresponding to the resource publisher under each candidate personality characteristic, and outputting a reference personality type corresponding to the resource publisher according to the second personality characteristic scores;
Screening to obtain a resource requester matched with the reference personality type as a target resource requester according to the target personality type corresponding to each resource requester;
the obtaining of the target resource request emotion score corresponding to each resource requester under the reference emotion state type includes:
and acquiring a target resource request emotion score corresponding to each target resource request party under the reference emotion state type.
4. the method of claim 1, further comprising:
Acquiring a first personality characteristic score corresponding to each candidate personality characteristic of each resource request party;
acquiring a second personality characteristic score corresponding to the resource publisher under each candidate personality characteristic, and taking the candidate personality characteristic with the highest second personality characteristic score as a reference personality type corresponding to the resource publisher;
acquiring target personality characteristic scores corresponding to the resource requesters under the reference personality type, and calculating comprehensive scores corresponding to the resource requesters according to the target personality characteristic scores and the target resource request emotion scores;
and performing resource segmentation on the resource to be shared of the resource publisher according to the comprehensive scores, and determining the resource sharing result matched with each resource requester.
5. The method of claim 1, wherein the training step of the micro-expression analysis model comprises:
acquiring training face pictures of users with different personality characteristics and corresponding micro-expression labels, wherein the micro-expression labels comprise a plurality of candidate emotion state types and corresponding standard emotion scores;
extracting the features of the training face picture to obtain training micro-expression features, and inputting the training micro-expression features into a micro-expression analysis model to obtain predicted emotion scores corresponding to each candidate emotion state type predicted by the micro-expression analysis model;
and calculating a prediction loss value according to the standard emotion scores and the predicted emotion scores, adjusting parameters of the micro-expression analysis model according to the prediction loss value, and obtaining the trained micro-expression analysis model when the prediction loss value reaches a convergence condition.
6. the method of claim 1, further comprising:
acquiring an emotional state type with the highest resource request emotional score corresponding to the resource request face picture as a current emotional state type;
acquiring facial feature point attribute information corresponding to the resource request face picture;
and returning the current emotional state type and the facial feature point attribute information corresponding to the resource request face picture to a terminal corresponding to the resource requesting party, so that the terminal displays the corresponding current emotional state type and facial feature point attribute information on the resource request face picture.
7. An apparatus for resource sharing, the apparatus comprising:
an acquisition module, configured to acquire resource request face pictures uploaded by resource requesting parties, wherein the resource request face pictures are face pictures shot according to a reference face picture issued by a resource issuing party, and there are at least two resource requesting parties;
the resource request emotion score determining module is used for extracting features of the resource request face picture to obtain resource request micro-expression features, inputting the resource request micro-expression features into a trained micro-expression analysis model for analysis, and obtaining a resource request emotion score corresponding to each resource requester under each candidate emotional state type;
the reference emotional state type determining module is used for extracting features of the reference face picture to obtain reference micro-expression features, inputting the reference micro-expression features into the trained micro-expression analysis model for analysis, and obtaining a reference emotional state type corresponding to the resource publisher, wherein the reference emotional state type is an emotional state type determined from the candidate emotional state types according to the emotion score corresponding to the reference face picture;
and the resource sharing module is used for acquiring a target resource request emotion score corresponding to each resource requester under the reference emotion state type, performing resource segmentation on the resource to be shared of the resource publisher according to the target resource request emotion score, and determining a resource sharing result matched with each resource requester.
8. The apparatus according to claim 7, wherein the resource sharing module is further configured to: sum the target resource request emotion scores corresponding to the resource requesters under the reference emotional state type to obtain a total resource request emotion score, and calculate the resource request emotion similarity corresponding to each resource requester according to that resource requester's target resource request emotion score under the reference emotional state type and the total resource request emotion score; sort the resource requesters in descending order of target resource request emotion score, and obtain the resource allocation order of the resource requesters from the sorting result; sequentially calculate, in the resource allocation order, the resource sharing result matched with each resource requester according to its resource request emotion similarity, and accumulate the resource sharing results to obtain a currently allocated resource quota; determine the resource sharing result corresponding to the last resource requester as the difference between the resource quota to be shared and the currently allocated resource quota; and, when the resource quota corresponding to the last resource requester is greater than the resource quota corresponding to the second-to-last resource requester, adjust the resource quota corresponding to the second-to-last resource requester so that it is greater than or equal to the resource quota corresponding to the last resource requester.
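The allocation rule of claim 8 (proportional shares by emotion-score similarity, allocated in descending score order, remainder to the last requester, with a final adjustment keeping the second-to-last share at least as large as the last) can be sketched as below. The requester names, score values, and the flooring of shares to whole resource units are illustrative assumptions; the equalization step is one possible way to satisfy the claim's "greater than or equal to" adjustment while conserving the quota.

```python
def share_resources(quota, scores):
    """Sketch of the claim-8 allocation rule, not the patented implementation.
    `scores` maps each resource requester to its target resource request
    emotion score; the resource request emotion similarity is score / total."""
    total = sum(scores.values())                          # total resource request emotion score
    order = sorted(scores, key=scores.get, reverse=True)  # resource allocation order (descending)
    shares, allocated = {}, 0
    for requester in order[:-1]:
        # proportional share, floored to whole resource units (assumption)
        share = int(quota * scores[requester] / total)
        shares[requester] = share
        allocated += share
    shares[order[-1]] = quota - allocated                 # last requester takes the remainder
    if len(order) >= 2 and shares[order[-1]] > shares[order[-2]]:
        # equalize the last two shares so the second-to-last is >= the last,
        # while keeping the total equal to the quota to be shared
        combined = shares[order[-2]] + shares[order[-1]]
        shares[order[-2]] = shares[order[-1]] = combined / 2
    return shares

shares = share_resources(10, {"A": 0.34, "B": 0.33, "C": 0.33})
# A keeps its proportional floor (3); the remainder (4) would exceed B's
# share (3), so B and C are equalized at 3.5 and the total stays 10.
```

With exact proportional shares the remainder can never exceed the second-to-last share, so the adjustment step only matters once shares are rounded, as in this sketch.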
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910677578.3A CN110555374A (en) | 2019-07-25 | 2019-07-25 | resource sharing method and device, computer equipment and storage medium |
PCT/CN2020/087538 WO2021012742A1 (en) | 2019-07-25 | 2020-04-28 | Resource sharing method and apparatus, computer device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910677578.3A CN110555374A (en) | 2019-07-25 | 2019-07-25 | resource sharing method and device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110555374A true CN110555374A (en) | 2019-12-10 |
Family
ID=68736427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910677578.3A Pending CN110555374A (en) | 2019-07-25 | 2019-07-25 | resource sharing method and device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110555374A (en) |
WO (1) | WO2021012742A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021012742A1 (en) * | 2019-07-25 | 2021-01-28 | 深圳壹账通智能科技有限公司 | Resource sharing method and apparatus, computer device, and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9269374B1 (en) * | 2014-10-27 | 2016-02-23 | Mattersight Corporation | Predictive video analytics system and methods |
CN107895146A (en) * | 2017-11-01 | 2018-04-10 | 深圳市科迈爱康科技有限公司 | Micro- expression recognition method, device, system and computer-readable recording medium |
CN109166616A (en) * | 2018-09-04 | 2019-01-08 | 中国平安人寿保险股份有限公司 | Service resource allocation method, device, computer equipment and storage medium |
CN109684978A (en) * | 2018-12-18 | 2019-04-26 | 深圳壹账通智能科技有限公司 | Employees'Emotions monitoring method, device, computer equipment and storage medium |
CN109766917A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | Interview video data handling procedure, device, computer equipment and storage medium |
CN109766766A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | Employee work condition monitoring method, device, computer equipment and storage medium |
CN109766774A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | User information collection method, apparatus, computer equipment and storage medium |
CN109858215A (en) * | 2017-11-30 | 2019-06-07 | 腾讯科技(深圳)有限公司 | Resource acquisition, sharing, processing method, device, storage medium and equipment |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090125806A1 (en) * | 2007-11-13 | 2009-05-14 | Inventec Corporation | Instant message system with personalized object and method thereof |
CN109614783A (en) * | 2018-12-20 | 2019-04-12 | 惠州Tcl移动通信有限公司 | Terminal safety protection method, device, mobile terminal and storage medium |
CN109754329B (en) * | 2019-01-31 | 2022-12-20 | 腾讯科技(深圳)有限公司 | Electronic resource processing method, terminal, server and storage medium |
CN110555374A (en) * | 2019-07-25 | 2019-12-10 | 深圳壹账通智能科技有限公司 | resource sharing method and device, computer equipment and storage medium |
- 2019-07-25: CN CN201910677578.3A, published as CN110555374A (status: Pending)
- 2020-04-28: WO PCT/CN2020/087538, published as WO2021012742A1 (Application Filing)
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9269374B1 (en) * | 2014-10-27 | 2016-02-23 | Mattersight Corporation | Predictive video analytics system and methods |
CN107895146A (en) * | 2017-11-01 | 2018-04-10 | 深圳市科迈爱康科技有限公司 | Micro- expression recognition method, device, system and computer-readable recording medium |
WO2019085495A1 (en) * | 2017-11-01 | 2019-05-09 | 深圳市科迈爱康科技有限公司 | Micro-expression recognition method, apparatus and system, and computer-readable storage medium |
CN109858215A (en) * | 2017-11-30 | 2019-06-07 | 腾讯科技(深圳)有限公司 | Resource acquisition, sharing, processing method, device, storage medium and equipment |
CN109166616A (en) * | 2018-09-04 | 2019-01-08 | 中国平安人寿保险股份有限公司 | Service resource allocation method, device, computer equipment and storage medium |
CN109684978A (en) * | 2018-12-18 | 2019-04-26 | 深圳壹账通智能科技有限公司 | Employees'Emotions monitoring method, device, computer equipment and storage medium |
CN109766917A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | Interview video data handling procedure, device, computer equipment and storage medium |
CN109766766A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | Employee work condition monitoring method, device, computer equipment and storage medium |
CN109766774A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | User information collection method, apparatus, computer equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
冯满堂; 马青玉; 王瑞杰: "Research on an Intelligent Network Teaching System Based on Facial Expression Recognition", 计算机技术与发展 (Computer Technology and Development), no. 06 *
李广鹏; 刘波; 李坤; 黄思琦: "A Machine-Learning-Based Facial Emotion Recognition Method", 计算机技术与发展 (Computer Technology and Development), no. 05 *
Also Published As
Publication number | Publication date |
---|---|
WO2021012742A1 (en) | 2021-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021068487A1 (en) | Face recognition model construction method, apparatus, computer device, and storage medium | |
CN108133330B (en) | Social crowdsourcing task allocation method and system | |
CN109902546A (en) | Face identification method, device and computer-readable medium | |
WO2020228384A1 (en) | Virtual head portrait generation method and device, and storage medium | |
US20180204094A1 (en) | Image recognition method and apparatus | |
CN110147729A (en) | User emotion recognition methods, device, computer equipment and storage medium | |
CN106295476A (en) | Face key point localization method and device | |
CN107341435A (en) | Processing method, device and the terminal device of video image | |
CN109523344A (en) | Product information recommended method, device, computer equipment and storage medium | |
JP2023531264A (en) | Systems and methods for improved facial attribute classification and its use | |
CN108776904A (en) | A kind of methods of exhibiting and its equipment of advertisement information | |
US20200176019A1 (en) | Method and system for recognizing emotion during call and utilizing recognized emotion | |
CN109614990A (en) | A kind of object detecting device | |
CN112818227B (en) | Content recommendation method and device, electronic equipment and storage medium | |
CN113240778A (en) | Virtual image generation method and device, electronic equipment and storage medium | |
Solomon et al. | Interactive evolutionary generation of facial composites for locating suspects in criminal investigations | |
CN118673210A (en) | Systems and methods for providing personalized product recommendations using deep learning | |
TW202221638A (en) | Method and apparatus for processing face image, electronic device and storage medium | |
CN107633196A (en) | A kind of eyeball moving projection scheme based on convolutional neural networks | |
CN110147740B (en) | Face recognition method, device, equipment and storage medium | |
US20220305365A1 (en) | Field Rating and Course Adjusted Strokes Gained for Global Golf Analysis | |
WO2023192531A1 (en) | Facial emotion recognition system | |
CN110555374A (en) | resource sharing method and device, computer equipment and storage medium | |
CN110399818A (en) | A kind of method and apparatus of risk profile | |
CN113076778A (en) | Method, system, readable storage medium and apparatus for reshaping analog image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20191210 |