CN109858377A - Donation method and device based on micro-expression recognition, storage medium and electronic device - Google Patents

Donation method and device based on micro-expression recognition, storage medium and electronic device

Info

Publication number
CN109858377A
Authority
CN
China
Prior art keywords
expression
micro
smile
fidelity
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910003249.0A
Other languages
Chinese (zh)
Inventor
梁炳强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
Original Assignee
OneConnect Smart Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Smart Technology Co Ltd filed Critical OneConnect Smart Technology Co Ltd
Priority to CN201910003249.0A priority Critical patent/CN109858377A/en
Publication of CN109858377A publication Critical patent/CN109858377A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

This disclosure relates to the technical field of micro-expression recognition, and discloses a donation method and device based on micro-expression recognition, a storage medium and an electronic device. The donation method based on micro-expression recognition includes: collecting a micro-expression of a user; obtaining a smile sincerity corresponding to the micro-expression, and obtaining a target donation amount according to the smile sincerity; and executing a corresponding donation operation according to the target donation amount. The disclosure obtains the smile sincerity based on recognition of the user's micro-expression and associates the smile sincerity with the donation amount, thereby combining micro-expression recognition technology with donation behavior, increasing the interest of donation activities and improving user participation.

Description

Donation method and device based on micro-expression recognition, storage medium and electronic equipment
Technical Field
The disclosure relates to the technical field of micro-expression recognition, and more particularly discloses a donation method based on micro-expression recognition, a donation device based on micro-expression recognition, a storage medium and an electronic device.
Background
With the rapid development of modern society, more and more people are beginning to invest in public welfare activities. However, many people have low awareness of and participation in such activities because they have not found suitable activity forms or appealing activity ideas. With the development of the Internet, public welfare and technology have become closely linked, making public welfare activities more convenient and attractive.
At present, many platforms have begun to combine public welfare with technology, for example by allowing a donation operation to be performed on a user's mobile phone. However, because a user cannot confirm the specific destination of the donated money, the public remains skeptical of such activities, which reduces users' enthusiasm for participation.
Therefore, there is a need in the art to provide a new method of donation.
It is to be noted that the information disclosed in the background section above is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
The present disclosure aims to provide a donation method and apparatus based on micro-expression recognition, a storage medium and an electronic device, so as to overcome, at least to a certain extent, the problems of low participation in and poor enthusiasm for public welfare activities caused by people's low awareness of such activities. To achieve this, the following technical scheme is adopted in the disclosure.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to one aspect of the present disclosure, there is provided a donation method based on micro-expression recognition, including:
collecting the micro-expression of a user;
obtaining a smile sincerity corresponding to the micro-expression, and obtaining a target donation amount according to the smile sincerity;
and executing a corresponding donation operation according to the target donation amount.
In an exemplary embodiment of the present disclosure, before collecting the micro-expression of the user, the method further includes:
extracting expression feature information of a plurality of micro-expression samples, and scoring the micro-expression samples according to the expression feature information to obtain the smile sincerity corresponding to the micro-expression samples;
and inputting the expression feature information as an input vector and the smile sincerity as an output vector into a machine learning model, training the machine learning model, and using the trained machine learning model as a mapping model between the expression feature information and the smile sincerity.
In an exemplary embodiment of the present disclosure, the expression feature information includes one or more of a mouth-corner uplift angle, an eye bending angle, a cheek convex height and a chin opening;
extracting the expression feature information of the plurality of micro-expression samples and scoring the micro-expression samples according to the expression feature information to obtain the smile sincerity corresponding to the micro-expression samples includes:
acquiring the number of pieces of expression feature information in the micro-expression sample that are larger than a preset threshold;
matching the number against a plurality of preset intervals, and if the number falls within a target preset interval, determining the smile degree value corresponding to the target preset interval as a target smile degree value;
acquiring weight values corresponding to the pieces of expression feature information larger than the preset threshold;
and scoring the micro-expression according to the target smile degree value and the weight values to obtain the smile sincerity corresponding to the micro-expression sample.
In an exemplary embodiment of the present disclosure, scoring the micro-expression according to the target smile degree value and the weight values to obtain the smile sincerity corresponding to the micro-expression sample includes:
acquiring the sum of the weight values corresponding to the expression feature information;
and multiplying the sum of the weight values by the target smile degree value to obtain the smile sincerity corresponding to the micro-expression.
In an exemplary embodiment of the present disclosure, obtaining the smile sincerity corresponding to the micro-expression and obtaining the target donation amount according to the smile sincerity includes:
obtaining the correspondence between the smile sincerity corresponding to the micro-expression samples and the donation amount;
and acquiring the target donation amount according to the smile sincerity corresponding to the micro-expression and the correspondence.
In an exemplary embodiment of the disclosure, obtaining the target donation amount according to the smile sincerity corresponding to the micro-expression and the correspondence includes:
obtaining the expression feature information of the micro-expression;
inputting the obtained expression feature information into the mapping model to determine the smile sincerity corresponding to the expression feature information;
and acquiring the target donation amount based on the correspondence between the smile sincerity and the donation amount.
In an exemplary embodiment of the present disclosure, collecting the micro-expression of the user includes:
receiving a micro-expression uploaded by the user on an activity platform; or
calling a camera of a user terminal device to take a picture so as to obtain the micro-expression of the user; or
acquiring the micro-expression of the user through micro-expression collection equipment provided by the event host.
According to an aspect of the present disclosure, there is provided a donation device based on micro-expression recognition, including:
an expression collection module, configured to collect the micro-expression of the user;
an obtaining module, configured to obtain the smile sincerity corresponding to the micro-expression and obtain a target donation amount according to the smile sincerity;
and an operation execution module, configured to execute a corresponding donation operation according to the target donation amount.
According to an aspect of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the donation method based on micro-expression recognition according to any one of the above.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute any one of the above-mentioned donation methods based on micro-expression recognition via execution of the executable instructions.
In the donation method based on micro-expression recognition, the smile sincerity corresponding to the user's micro-expression is obtained through micro-expression recognition technology, and the platform makes a corresponding donation according to the donation amount corresponding to the user's smile sincerity. On one hand, obtaining the user's smile sincerity through micro-expression recognition makes it possible to distinguish the micro-expressions of different users and increases the interest of the activity; at the same time, combining micro-expression recognition technology with public welfare donation behavior improves users' participation in public welfare activities. On the other hand, the user's smile sincerity is associated with the donation amount, and the platform donates accordingly, so that users bring more public welfare contributions simply by experiencing a sincere smile, which improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 schematically illustrates a flow diagram of a donation method based on micro-expression recognition according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flowchart of establishing a mapping model between expression feature information and smile sincerity, according to an embodiment of the disclosure;
fig. 3 schematically illustrates a flowchart of scoring a micro-expression sample according to expression feature information to obtain the smile sincerity corresponding to the micro-expression sample, according to an embodiment of the disclosure;
FIG. 4 schematically illustrates a schematic diagram of a donation device based on micro-expression recognition according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a schematic diagram of a storage medium according to an embodiment of the present disclosure;
fig. 6 schematically shows a block diagram of an electronic device according to an embodiment of the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. The exemplary embodiments, however, may be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of exemplary embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus their detailed description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known structures, methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in the form of software, in one or more software and/or hardware modules, or in different networks and/or processor devices and/or microcontroller devices.
In the related art, public welfare donations take many forms: individuals collect donations at an activity site; users make Internet donations through terminal devices; a business organization or group initiates an activity and collects funds for donation; and so on. However, such forms of public welfare donation are not only limited in variety but also struggle to attract participants; at the same time, various fraudulent behaviors have reduced the public's trust in public welfare donations, further reducing users' enthusiasm for participating in public welfare activities.
Based on this, in an exemplary embodiment of the present disclosure, a donation method based on micro-expression recognition is first provided. Fig. 1 shows a flow chart of a donation method based on micro-expression recognition, and referring to fig. 1, the donation method based on micro-expression recognition may include the following steps:
step S110: collecting the micro-expression of a user;
step S120: obtaining a smile sincerity corresponding to the micro-expression, and obtaining a target donation amount according to the smile sincerity;
step S130: executing a corresponding donation operation according to the target donation amount.
According to the donation method based on micro-expression recognition in this embodiment, on one hand, obtaining the user's smile sincerity through micro-expression recognition makes it possible to distinguish the micro-expressions of different users and increases the interest of the activity; at the same time, combining micro-expression recognition technology with public welfare donation behavior improves users' participation in public welfare activities. On the other hand, the user's smile sincerity is associated with the donation amount, and the platform donates accordingly, so that users bring more public welfare contributions by experiencing a sincere smile, which improves the user experience.
For ease of understanding, the disclosure will take public welfare donation activities as an example, and further describe the donation method based on micro-expression recognition in this example embodiment.
In step S110, the micro-expression of the user is collected.
In this example embodiment, micro-expressions of users accessing the public welfare donation activity platform may be collected. The user may access the activity platform by scanning a two-dimensional code provided at the activity site with a terminal device that has a camera, such as a smartphone or a tablet; by entering a website address provided by the platform; or by logging in to the platform directly through micro-expression collection equipment provided at the activity site. The micro-expression may be collected by receiving a micro-expression from the photo album of the user's terminal device uploaded by the user on the activity platform, by calling the camera of the user's terminal device to take a photo, or through micro-expression collection equipment specially arranged by the event host. In the embodiment of the disclosure, micro-expressions may be collected for each individual user, or the micro-expressions of all people within a certain range of the event venue may be collected collectively by the event sponsor's micro-expression collection equipment, with each micro-expression numbered to facilitate distinguishing and identification. It should be noted that other manners of collecting the user's micro-expression may be adopted according to the actual situation, and the disclosure places no particular limitation on this.
In addition, before the user's micro-expression is collected, a micro-expression database may be established. The micro-expression samples in the database may be obtained by the above methods of collecting users' micro-expressions, or may be micro-expressions obtained from network platforms (Baidu, Fox, Google and the like), which expands the collection range of micro-expression samples in the database and facilitates subsequent micro-expression recognition.
In step S120, the smile sincerity corresponding to the micro-expression is obtained, and a target donation amount is obtained according to the smile sincerity.
In this example embodiment, the smile sincerity corresponding to the micro-expression may be obtained by extracting the expression feature information from the micro-expression and evaluating it according to that information. Before obtaining the smile sincerity corresponding to the micro-expression, a mapping model between expression feature information and smile sincerity may first be established. The mapping model reflects the correspondence between expression feature information and smile sincerity. Fig. 2 shows a flowchart of establishing the mapping model between expression feature information and smile sincerity; as shown in fig. 2, establishing the mapping model includes the following steps:
step S210: and extracting expression characteristic information of various micro expression samples, and grading the micro expression samples according to the expression characteristic information to obtain the fidelity of smile corresponding to the micro expression samples.
In the present exemplary embodiment, feature extraction is performed on a plurality of micro expression samples based on the micro expression database, the main regions of the feature extraction include a mouth corner region, an eye corner region, a cheek region, a chin region, and the like, and accordingly, the obtained expression feature information mainly includes a mouth corner rising angle, an eye bending angle, whether a cheek is high, a chin opening, and the like. The micro-expression feature extraction method can be LBP-TOP (local Binary Pattern from Three organic planes) feature extraction algorithm, optical flow method and filter extraction method (Gabor). Further, the micro expression sample is scored according to the expression feature information to obtain the smile integrity corresponding to the micro expression sample, and fig. 3 shows a flowchart of scoring the micro expression sample according to the expression feature information to obtain the smile integrity corresponding to the micro expression sample, where as shown in fig. 3, obtaining the smile integrity corresponding to the micro expression sample includes steps S310 to S340, which are specifically as follows:
step S310: and acquiring the quantity of the expression characteristic information in the micro expression sample larger than a preset threshold value.
In this exemplary embodiment, before the extracted expression feature information is compared with preset thresholds, a standard value, that is, a preset threshold, is first set for each piece of expression feature information according to the actual situation; for example, the expression feature values of a typical smile may be used as the preset thresholds. For the same smile, different expression features change to different degrees: at a smile, for example, the eye bending angle is smaller than the mouth-corner bending angle, so different pieces of expression feature information may be given different preset thresholds. Of course, a fixed preset threshold may also be set for each piece of expression feature information, which is not particularly limited by this disclosure. Table 1 shows each expression feature and its corresponding preset threshold. When obtaining the mouth-corner uplift angle, the horizontal line passing through the midpoint of the lower lip is taken as the zero-degree reference, and the angle between this horizontal line and the line connecting the mouth corner to the lower-lip midpoint is the mouth-corner uplift angle. Similarly, the eye bending angle is the angle between the line connecting the eye corner to the midpoint of the upper eyelid line and the horizontal line passing through that midpoint, the chin opening may be obtained by measuring the distance between the two cheeks, and so on. Based on the preset threshold corresponding to each piece of expression feature information, the number of pieces of expression feature information in the micro-expression sample that are larger than their corresponding preset thresholds is acquired.
It should be noted that the preset threshold for each piece of expression feature information may also be set according to other criteria, which are not limited to the threshold-setting method described in this disclosure.
TABLE 1
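As a rough illustration of the geometric measurement and the threshold-counting step described above, the following Python sketch computes a mouth-corner uplift angle from landmark coordinates and counts how many features exceed their thresholds. All coordinate, feature and threshold values here are invented for illustration; they are not the values of Table 1, which is not reproduced in this text.

```python
import math

def mouth_corner_angle(corner, lower_lip_mid):
    """Angle (degrees) between the corner-to-lip-midpoint line and the
    horizontal line through the lower-lip midpoint, as described above."""
    dx = corner[0] - lower_lip_mid[0]
    dy = lower_lip_mid[1] - corner[1]  # image y-axis points down
    return math.degrees(math.atan2(dy, dx))

def count_features_above_threshold(features, thresholds):
    """Number of expression features exceeding their preset thresholds."""
    return sum(1 for name, value in features.items()
               if value > thresholds.get(name, float("inf")))

# Illustrative feature values for one micro-expression sample.
features = {
    "mouth_corner_angle": mouth_corner_angle((120, 200), (100, 210)),  # ~26.6 deg
    "eye_bend_angle": 8.0,
    "cheek_height": 2.0,
    "chin_opening": 1.0,
}
thresholds = {"mouth_corner_angle": 15.0, "eye_bend_angle": 10.0,
              "cheek_height": 3.0, "chin_opening": 2.0}

n = count_features_above_threshold(features, thresholds)  # only the mouth angle exceeds
```

With these made-up thresholds only one feature exceeds its threshold, so the count passed on to step S320 would be 1.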
Step S320: matching the number against a plurality of preset intervals, and if the number falls within a target preset interval, determining the smile degree value corresponding to the target preset interval as the target smile degree value.
In this exemplary embodiment, before the number of pieces of expression feature information larger than the preset thresholds is matched against the preset intervals, the range of possible counts is divided into a plurality of preset intervals, and a corresponding smile degree value is set for each preset interval. For example, table 2 shows a smile degree value table: as shown in table 2, when the number of expression features of the micro-expression sample larger than the preset thresholds is 2, the corresponding smile degree value 40 is the target smile degree value.
TABLE 2
Preset interval      [0, 3)   [3, 6)   [6, 9]
Smile degree value     40       70      100
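The interval matching of step S320 can be sketched directly from Table 2; the function below hard-codes the three intervals and their smile degree values as listed.

```python
def target_smile_degree(count):
    """Map the number of features above threshold to a smile degree value
    using the preset intervals of Table 2 ([0,3) -> 40, [3,6) -> 70,
    [6,9] -> 100)."""
    if 0 <= count < 3:
        return 40
    if 3 <= count < 6:
        return 70
    if 6 <= count <= 9:
        return 100
    raise ValueError("count outside the preset intervals")

degree = target_smile_degree(2)  # falls in [0, 3), so the target value is 40
```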
Step S330: acquiring the weight values corresponding to the pieces of expression feature information larger than the preset threshold.
In this exemplary embodiment, with continued reference to table 1, the pieces of expression feature information carry different weights when calculating the smile sincerity; for example, the mouth-corner bending degree reflects the sincerity of a smile better than the eye bending degree, so the mouth-corner bending degree carries a higher weight in the calculation. In this example embodiment, the weight values of the pieces of expression feature information that are greater than the preset threshold may be obtained.
Step S340: scoring the micro-expression according to the target smile degree value and the weight values to obtain the smile sincerity corresponding to the micro-expression sample.
In this exemplary embodiment, the sum of the weight values corresponding to the expression feature information is first obtained, and the smile sincerity corresponding to the micro-expression is obtained by multiplying the target smile degree value of the micro-expression by this sum of weight values. Specifically, with continued reference to tables 1 and 2, when the mouth-corner uplift angle and the eye bending angle are greater than their preset thresholds, the target smile degree value is determined to be 40 according to the smile degree value table; meanwhile, the weight values of the mouth-corner uplift angle and the eye bending angle are 1/2 and 1/4, respectively, so the score of the micro-expression is 40 × (1/2 + 1/4) = 30, that is, the smile sincerity corresponding to the micro-expression sample is 30.
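The scoring rule of step S340 condenses to a one-line product. The weight values 1/2 and 1/4 come from the worked example above; the remaining weights of Table 1 are not reproduced in this text.

```python
from fractions import Fraction

def smile_sincerity_score(target_degree, weights):
    """Score = target smile degree value x sum of the weight values of the
    features that exceeded their preset thresholds (step S340)."""
    return target_degree * sum(weights)

# Worked example from the text: mouth-corner uplift angle (weight 1/2) and
# eye bending angle (weight 1/4) exceed their thresholds, degree value 40.
score = smile_sincerity_score(40, [Fraction(1, 2), Fraction(1, 4)])  # 40 * 3/4 = 30
```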
Step S220: inputting the expression feature information as an input vector and the smile sincerity as an output vector into a machine learning model, training the machine learning model, and using the trained machine learning model as the mapping model between the expression feature information and the smile sincerity.
In this example embodiment, the mapping model may be a convolutional neural network or a deep residual network; a person skilled in the art may use an appropriate machine learning model as needed, which is not specifically limited by this disclosure. When the machine learning model is trained, the expression feature information and the corresponding smile sincerity may be input into the model, and after training parameters such as the learning rate, the number of training iterations, the loss function and the optimization target are set, the mapping model may be obtained through automatic training.
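The text names convolutional and deep residual networks as candidate mapping models without fixing an architecture. As a self-contained stand-in for step S220, the sketch below fits a single linear layer by stochastic gradient descent on synthetic (feature vector, sincerity) pairs; the data, hyperparameters and linear form are illustrative assumptions, not the patent's model.

```python
def train_linear_mapping(samples, lr=0.01, epochs=2000):
    """Fit weights w so that sum(w[i] * x[i]) approximates the sincerity
    label, by plain stochastic gradient descent on squared error."""
    n_features = len(samples[0][0])
    w = [0.0] * n_features
    for _ in range(epochs):
        for x, y in samples:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    """Predicted smile sincerity for a feature vector x."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Illustrative training pairs: binary "feature exceeded threshold" vectors
# paired with hand-assigned sincerity scores.
data = [((1.0, 1.0), 30.0), ((1.0, 0.0), 20.0), ((0.0, 1.0), 10.0)]
weights = train_linear_mapping(data)
```

A trained network would replace both `train_linear_mapping` and `predict`; the point is only the input/output contract of the mapping model (feature vector in, sincerity out).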
Step S130: executing a corresponding donation operation according to the target donation amount.
In this example embodiment, before a corresponding donation operation is performed according to the target donation amount, the correspondence between the smile sincerity corresponding to the micro-expression samples and the donation amount may first be obtained. The correspondence may be a relationship table containing the smile sincerity of a large number of micro-expression samples and the corresponding donation amounts, and the target donation amount may then be obtained according to the smile sincerity corresponding to the micro-expression and this table. For example, table 3 shows a correspondence table between smile sincerity and donation amount for micro-expressions. If the smile sincerity of the collected micro-expression is 30, the target donation amount is determined to be 300 yuan according to the correspondence table. It should be noted that, in practice, smile sincerity and the corresponding donation amount may be divided into more correspondences than those listed in table 3, but the method of obtaining the target donation amount based on the correspondence table is the same, so the details are not repeated. In addition, the correspondence between smile sincerity and donation amount may also be a machine learning model trained on the smile sincerity of micro-expression samples and the corresponding donation amounts; this machine learning model may be a convolutional neural network, a deep residual network, or the like, which is not specifically limited by this disclosure.
TABLE 3
Smile sincerity         20    30    40    50
Donation amount (yuan)  200   300   400   500
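A minimal lookup of the donation amount from the correspondence table above might look like the following. How scores falling between the listed sincerity values are treated is not specified in the text, so the round-down-to-nearest-entry rule here is an assumption.

```python
import bisect

# Correspondence table between smile sincerity and donation amount (yuan).
TABLE_3 = [(20, 200), (30, 300), (40, 400), (50, 500)]

def donation_amount(sincerity):
    """Return the donation amount for the largest listed sincerity that
    does not exceed the given score (assumed rounding rule)."""
    scores = [s for s, _ in TABLE_3]
    i = bisect.bisect_right(scores, sincerity) - 1
    if i < 0:
        return 0  # below the lowest listed sincerity: assumed no donation
    return TABLE_3[i][1]

amount = donation_amount(30)  # 300 yuan, matching the worked example
```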
In this exemplary embodiment, the expression feature information of the micro-expression may first be acquired; the expression feature information is then input into the mapping model trained on expression feature information and smile sincerity to determine the smile sincerity corresponding to the expression feature information; finally, the target donation amount is acquired based on the correspondence between smile sincerity and donation amount. Furthermore, while the user's smile sincerity is being acquired from the user's expression feature information and the corresponding donation amount is being acquired from the smile sincerity, marketing information of the corresponding host may be pushed to the user through the activity platform, and the user may also improve awareness of the host by participating in the activity. Marketing thus reaches users unobtrusively, micro-expression recognition technology is combined with public welfare activities and marketing, and both user participation in public welfare activities and the public welfare marketing effect are improved.
In addition, in this example embodiment, a donation device based on micro-expression recognition is also provided. Referring to fig. 4, the donation device 400 based on micro-expression recognition may include: an expression collection module 410, an obtaining module 420 and an operation execution module 430. Specifically:
the expression collection module 410 is configured to collect the micro-expression of the user;
the obtaining module 420 is configured to obtain the smile sincerity corresponding to the micro-expression, and obtain a target donation amount according to the smile sincerity;
and the operation execution module 430 is configured to execute a corresponding donation operation according to the target donation amount.
Since each functional module of the donation device based on micro expression recognition in the embodiment of the present disclosure is the same as that in the embodiment of the donation method based on micro expression recognition, further description is omitted here.
Further, in an exemplary embodiment of the present disclosure, a computer storage medium capable of implementing the above method is also provided, on which a program product implementing the above-described method of the present specification is stored. In some possible embodiments, aspects of the present disclosure may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
Referring to fig. 5, a program product 500 for implementing the above method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided. As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 600 according to such an embodiment of the present disclosure is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: the at least one processing unit 610, the at least one memory unit 620, a bus 630 connecting different system components (including the memory unit 620 and the processing unit 610), and a display unit 640.
Wherein the storage unit stores program code that is executable by the processing unit 610 to cause the processing unit 610 to perform steps according to various exemplary embodiments of the present disclosure as described in the above section "exemplary methods" of this specification.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6206, such program modules 6206 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. The electronic device 600 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 660. As shown, the network adapter 660 communicates with the other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. A donation method based on micro-expression recognition, characterized by comprising the following steps:
collecting a micro-expression of a user;
obtaining a smile fidelity corresponding to the micro-expression, and obtaining a target donation amount according to the smile fidelity;
and executing a corresponding donation operation according to the target donation amount.
2. The donation method based on micro-expression recognition according to claim 1, wherein before collecting the micro-expression of the user, the method further comprises:
extracting expression feature information of a plurality of micro-expression samples, and scoring the micro-expression samples according to the expression feature information to obtain the smile fidelity corresponding to each micro-expression sample;
and inputting the expression feature information as an input vector and the smile fidelity as an output vector into a machine learning model, training the machine learning model, and using the trained machine learning model as a mapping model between expression feature information and smile fidelity.
3. The donation method based on micro-expression recognition according to claim 2, wherein the expression feature information includes one or more of a mouth-corner lift angle, an eye-bend angle, a cheek-bulge height, and a chin-opening degree;
and wherein extracting the expression feature information of the plurality of micro-expression samples and scoring the micro-expression samples according to the expression feature information to obtain the smile fidelity corresponding to each micro-expression sample comprises:
acquiring the number of expression feature items in the micro-expression sample that exceed a preset threshold;
matching the number against a plurality of preset intervals, and if the number falls within a target preset interval, taking the smile-degree value corresponding to the target preset interval as a target smile-degree value;
acquiring the weight values corresponding to the expression feature items that exceed the preset threshold;
and scoring the micro-expression according to the target smile-degree value and the weight values to obtain the smile fidelity corresponding to the micro-expression sample.
4. The donation method based on micro-expression recognition according to claim 3, wherein scoring the micro-expression according to the target smile-degree value and the weight values to obtain the smile fidelity corresponding to the micro-expression sample comprises:
acquiring the sum of the weight values corresponding to the expression feature items;
and multiplying the sum of the weight values by the target smile-degree value to obtain the smile fidelity corresponding to the micro-expression.
5. The donation method based on micro-expression recognition according to claim 2, wherein obtaining the smile fidelity corresponding to the micro-expression and obtaining the target donation amount according to the smile fidelity comprises:
obtaining the correspondence between the smile fidelity of the micro-expression samples and the donation amount;
and acquiring the target donation amount according to the smile fidelity corresponding to the micro-expression and the correspondence.
6. The donation method based on micro-expression recognition according to claim 5, wherein acquiring the target donation amount according to the smile fidelity corresponding to the micro-expression and the correspondence comprises:
obtaining the expression feature information of the micro-expression;
inputting the obtained expression feature information into the mapping model to determine the smile fidelity corresponding to the expression feature information;
and acquiring the target donation amount based on the correspondence between the smile fidelity and the donation amount.
7. The donation method based on micro-expression recognition according to any one of claims 1 to 6, wherein collecting the micro-expression of the user comprises:
receiving the micro-expression uploaded by the user on an activity platform; or,
calling a camera of the user's terminal device to take a picture so as to obtain the micro-expression of the user; or,
acquiring the micro-expression of the user through micro-expression acquisition equipment provided by the event host.
8. A donation device based on micro-expression recognition, characterized in that the device comprises:
an expression acquisition module, configured to collect a micro-expression of a user;
an obtaining module, configured to obtain a smile fidelity corresponding to the micro-expression and obtain a target donation amount according to the smile fidelity;
and an operation execution module, configured to execute a corresponding donation operation according to the target donation amount.
9. A storage medium having stored thereon a computer program which, when executed by a processor, implements a micro expression recognition based donation method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the micro-expression recognition based donation method according to any one of claims 1 to 7 via execution of the executable instructions.
CN201910003249.0A 2019-01-03 2019-01-03 Contributing device, device, storage medium and electronic equipment based on micro- Expression Recognition Pending CN109858377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910003249.0A CN109858377A (en) 2019-01-03 2019-01-03 Contributing device, device, storage medium and electronic equipment based on micro- Expression Recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910003249.0A CN109858377A (en) 2019-01-03 2019-01-03 Contributing device, device, storage medium and electronic equipment based on micro- Expression Recognition

Publications (1)

Publication Number Publication Date
CN109858377A true CN109858377A (en) 2019-06-07

Family

ID=66893834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910003249.0A Pending CN109858377A (en) 2019-01-03 2019-01-03 Contributing device, device, storage medium and electronic equipment based on micro- Expression Recognition

Country Status (1)

Country Link
CN (1) CN109858377A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101360219A (en) * 2008-09-03 2009-02-04 深圳市同洲电子股份有限公司 Method, system and terminal for charity donation using digital television
CN107798318A (en) * 2017-12-05 2018-03-13 四川文理学院 The method and its device of a kind of happy micro- expression of robot identification face

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: Room 201, Building A, No. 1 Qianwan 1st Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong 518000 (Qianhai Commercial Secretary)

Applicant after: Shenzhen OneConnect Smart Technology Co., Ltd.

Address before: Room 201, Building A, No. 1 Qianwan 1st Road, Qianhai Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong 518000

Applicant before: Shenzhen OneConnect Smart Technology Co., Ltd.

SE01 Entry into force of request for substantive examination