WO2021217912A1 - Method and apparatus for generating information on the basis of facial recognition, computer device and storage medium - Google Patents

Method and apparatus for generating information on the basis of facial recognition, computer device and storage medium

Info

Publication number
WO2021217912A1
WO2021217912A1 · PCT/CN2020/103796 · CN2020103796W
Authority
WO
WIPO (PCT)
Prior art keywords
information
user
current user
welcome
face
Prior art date
Application number
PCT/CN2020/103796
Other languages
English (en)
Chinese (zh)
Inventor
郑秦苏
Original Assignee
深圳壹账通智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2021217912A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Definitions

  • This application relates to the field of image recognition technology, and in particular to an information generation method, device, computer equipment, and storage medium based on face recognition.
  • In the prior art, door-front welcoming service equipment mainly includes inkjet banners, voice-broadcast welcoming devices, and screen-broadcast welcoming devices.
  • Inkjet banners take a long time to produce and cannot be recycled or reused.
  • The welcome words in voice-broadcast and screen-broadcast welcoming devices must be edited manually by maintenance personnel.
  • As a result, producing or editing the welcome speech is inefficient, its presentation is monotonous, and the welcomed user's satisfaction with the welcome speech cannot be detected.
  • The embodiments of the present application provide a method, device, computer equipment, and storage medium for information generation based on face recognition, aiming to solve the prior-art problems of inefficient manual editing or production for door-front welcoming service equipment and of a monotonous display mode.
  • an embodiment of the present application provides a method for generating information based on face recognition, which includes:
  • the welcoming information includes welcoming text information, welcoming audio information, or welcoming video information;
  • emotion recognition is performed on the user's facial image to obtain corresponding current user emotion information; and
  • if the current user emotion information is a happy emotion, a retainable tag is automatically added to the welcome information.
  • an information generation device based on face recognition which includes:
  • the first image judging unit is used to judge whether the current user image collected by the camera is received;
  • the user identification unit is configured to, if the current user image collected by the camera is received, perform face recognition on the current user image to obtain corresponding current user identification information;
  • the welcome information judging unit is used to judge whether welcome information corresponding to the current user identification information is stored, wherein the welcome information includes welcome text information, welcome audio information, or welcome video information;
  • the welcome information sending unit is configured to send the welcome information to the corresponding display device for display if the welcome information corresponding to the current user identification information is stored;
  • the second image judging unit is used to judge whether the user's facial image collected by the camera is received;
  • the emotion recognition unit is configured to perform emotion recognition on the user's facial image if the user's facial image collected by the camera is received, to obtain corresponding current user emotion information;
  • the emotion judgment unit is used to judge whether the current user emotion information is a happy emotion or an angry emotion; and
  • the first label setting unit is configured to automatically add a retainable label to the welcome information if the current user emotion information is a happy emotion.
  • an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the information generation method based on face recognition described in the first aspect above.
  • an embodiment of the present application also provides a computer-readable storage medium that stores a computer program which, when executed by a processor, causes the processor to perform the information generation method based on face recognition described in the first aspect above.
  • In summary, the embodiments of the present application provide a method, device, computer equipment, and storage medium for information generation based on face recognition. The method includes: judging whether the current user image collected by the camera is received; if so, performing face recognition on the current user image to obtain corresponding current user identification information; judging whether welcome information corresponding to the current user identification information is stored, wherein the welcome information includes welcome text information, welcome audio information, or welcome video information; if such welcome information is stored, sending it to the corresponding display device for display; judging whether the user's facial image collected by the camera is received; if so, performing emotion recognition on the user's facial image to obtain corresponding current user emotion information; judging whether the current user emotion information is a happy emotion or an angry emotion; and, if it is a happy emotion, automatically adding a retainable tag to the welcome information.
  • This method makes the generation and deployment of welcome information more efficient, and can obtain the user's feedback on the welcome information based on the user's emotion recognition result.
  • FIG. 1 is a schematic diagram of an application scenario of a method for generating information based on face recognition provided by an embodiment of the application;
  • FIG. 2 is a schematic flowchart of a method for generating information based on face recognition provided by an embodiment of the application;
  • FIG. 3 is a schematic diagram of a sub-process of a method for generating information based on face recognition provided by an embodiment of the application;
  • FIG. 4 is a schematic diagram of another sub-flow of the method for generating information based on face recognition provided by an embodiment of the application;
  • FIG. 5 is a schematic block diagram of an information generation device based on face recognition provided by an embodiment of the application.
  • FIG. 6 is a schematic block diagram of subunits of the apparatus for generating information based on face recognition according to an embodiment of the application;
  • FIG. 7 is a schematic block diagram of another subunit of the apparatus for generating information based on face recognition according to an embodiment of the application;
  • FIG. 8 is a schematic block diagram of a computer device provided by an embodiment of the application.
  • the method for generating information based on face recognition is applied to a local terminal, and the method is executed by application software installed in the local terminal.
  • the method includes steps S110 to S180.
  • S110 Determine whether the current user image collected by the camera is received.
  • The welcoming system can be deployed at, for example, exhibition venues, stadiums, athletes' residences, high-end hotels, restaurants, and tourist attractions.
  • The welcoming system includes at least a camera, a display device, and a server.
  • this application describes the technical solution from the perspective of the server.
  • The welcome information stored in the server and the massive face pictures that have been collected can be uploaded to the server by the user terminal (that is, a terminal such as a smartphone or tablet computer used by the user) through the user interaction interface corresponding to the server.
  • The camera is equipped with an infrared sensor for detecting whether the distance between the user and the camera is less than a preset distance threshold (such as 2-10 m); if so, the camera collects the user's current user image and sends it to the server.
  • That is, if the camera detects that the distance between the user and the camera is less than the distance threshold, the camera collects the current user image and sends it to the server.
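  • The distance-triggered capture described above can be sketched as follows (a minimal illustration; the function name and the exact threshold value are assumptions, with 2 m chosen from the 2-10 m range mentioned above):

```python
# Hypothetical sketch of the infrared-sensor trigger: the camera captures a
# current user image only when the measured distance falls below the preset
# distance threshold.
DISTANCE_THRESHOLD_M = 2.0  # assumed value from the 2-10 m range above

def should_capture(distance_m, threshold_m=DISTANCE_THRESHOLD_M):
    """Return True when the user is close enough for image capture."""
    return distance_m < threshold_m
```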
  • The server adopts the 1:N mode of face recognition to judge whether the current user image is a face image corresponding to a user to be welcomed.
  • Face recognition in 1:N mode means that, after the server receives a photo of "me", it searches the massive portrait database for an image that matches the face data of the current user (i.e., "me") and matches against it to find out "who I am".
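  • The 1:N comparison can be illustrated as a similarity search over stored feature templates (a sketch only; the cosine-similarity criterion, the threshold value, and all names are assumptions, since the source does not specify the matching rule):

```python
import numpy as np

def match_1_to_n(query, templates, threshold=0.6):
    """Illustrative 1:N face matching: compare a query face embedding
    against every stored feature template and return the best-matching
    user ID, or None when no template is similar enough."""
    best_id, best_sim = None, threshold
    for user_id, template in templates.items():
        # Cosine similarity between the query vector and the template.
        sim = np.dot(query, template) / (
            np.linalg.norm(query) * np.linalg.norm(template))
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```

  • A template that matches nothing above the threshold yields None, which corresponds to the "target user not recognized" branch described later.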
  • step S120 includes:
  • The specific steps of face recognition in the 1:N mode are the above steps S121-S124.
  • The preprocessing of the current user image is performed on the basis of the face detection result; the image is processed so as ultimately to serve feature extraction. Owing to various conditions and random interference, the original image obtained by the server cannot be used directly; it must be preprocessed in the early stage of image processing, for example by gray-scale correction and noise filtering.
  • The preprocessing process mainly includes light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening of the face image.
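  • As an illustration, the grayscale-conversion, histogram-equalization, and normalization steps can be sketched in plain NumPy (function and variable names are assumptions; a production system would typically use a library such as OpenCV and would also perform light compensation, geometric correction, filtering, and sharpening):

```python
import numpy as np

def preprocess_face(img_rgb):
    """Minimal sketch of the preprocessing stage: grayscale conversion,
    histogram equalization, and normalization to [0, 1]."""
    # Grayscale conversion via the standard luminance weights.
    gray = (0.299 * img_rgb[..., 0] + 0.587 * img_rgb[..., 1]
            + 0.114 * img_rgb[..., 2]).astype(np.uint8)
    # Histogram equalization: map each gray level through the normalized CDF.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    equalized = (cdf[gray] * 255).astype(np.uint8)
    # Normalize to [0, 1] for the downstream feature extractor.
    return equalized.astype(np.float32) / 255.0
```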
  • The feature maps are input to the pooling layer to obtain a one-dimensional vector composed of the maximum value of each feature map.
  • This one-dimensional vector is then input to the fully connected layer to obtain the feature vector corresponding to the preprocessed picture.
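  • The pooling and fully connected stages described above can be sketched as follows (shapes, names, and the use of global max pooling are illustrative assumptions based on the description, not a definitive implementation of the patented network):

```python
import numpy as np

def feature_vector(feature_maps, weights, bias):
    """Sketch of the described head: global max pooling turns each feature
    map into a single scalar, and a fully connected (affine) layer maps
    the resulting one-dimensional vector to the final feature vector."""
    # Global max pooling: one maximum per feature map -> 1-D vector.
    pooled = np.array([fm.max() for fm in feature_maps])
    # Fully connected layer: affine transform of the pooled vector.
    return weights @ pooled + bias
```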
  • The feature templates stored in the face database can be actively uploaded to the server by users, or accumulated during continued use by adding feature templates that did not previously exist in the face database.
  • For example, before a to-be-welcomed user arrives at the welcoming site, the administrator of the server can directly upload that user's head image to the server for feature vector extraction, thereby obtaining the user's feature template, which can then be stored in the face database.
  • The 1:N mode of face recognition means that, after the server receives a photo of "me", it finds an image matching the face data of the current user (i.e., "me") from the massive portrait database and matches against it to find out "who I am"; this is the execution process of the above steps S121-S124.
  • The face database stores the feature vectors corresponding to the massive number of face images that have been collected, that is, each person's face corresponds to a unique feature vector. With this mass of feature templates as a data foundation, the one or more persons corresponding to the preprocessed picture can be determined, thereby realizing face recognition.
  • The obtained user identification information can be the user's ID number; since each citizen's ID number is unique, it can serve as the user's unique identification code.
  • the method further includes:
  • If the feature vector corresponding to the current user image collected by the camera matches none of the feature templates stored in the face database, the user in the current user image is not a user to be welcomed; the subsequent welcoming steps are therefore not performed, and the process jumps directly to the step that ends the flow.
  • In step S110, if the current user image collected by the camera is not received, the server waits for a preset delay waiting time and then returns to step S110.
  • S130 Determine whether the welcome information corresponding to the current user identification information is stored; wherein the welcome information includes welcome text information, welcome audio information, or welcome video information.
  • When it is recognized that the current user image corresponds to a user to be greeted, the server needs to check whether welcome information corresponding to the current user identification information is stored, the welcome information including welcome text information, welcome audio information, or welcome video information. That is, it is judged whether pre-edited welcome information corresponding to the to-be-greeted user is stored in the server; this judgment serves to quickly retrieve the personalized welcome information corresponding to that user.
  • S140 If the welcome information corresponding to the current user identification information is stored, the welcome information is sent to a corresponding display device for display.
  • The welcome information includes welcome text information, welcome audio information, or welcome video information; that is, the specific content of the welcome information is diverse.
  • It may be plain welcome text information (such as "Welcome XXX", displayed on the display device scrolling or statically), welcome audio information (such as "Welcome XXX", played externally through the display device's built-in speaker, where the total number of playbacks, the playback period, and the total playback duration can be user-defined), or welcome video information (such as animated text over various background videos played through the display device, where the total number of playbacks, the playback period, and the total playback duration can likewise be user-defined).
  • Because the welcome information in this application is pushed to the display device on account of the specific identity of the to-be-greeted user, the server can automatically control the welcome information displayed by the display device, and the display device is reused many times rather than used once, avoiding the inefficient production process of customized red silk banners bearing welcome words.
  • After step S140, the method proceeds to the steps for recognizing the user's emotion.
  • If the welcome information corresponding to the current user identification information is not stored in the server, the pre-stored general welcome information is acquired and sent to the corresponding display device for display.
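  • Steps S130/S140 together with this fallback amount to a keyed lookup with a default; a minimal sketch (all names and the dictionary representation are assumptions):

```python
def select_welcome_info(user_id, welcome_store, general_info):
    """Return the personalized welcome information (text, audio, or video)
    stored for this user ID, falling back to the pre-stored general
    welcome information when none exists."""
    return welcome_store.get(user_id, general_info)
```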
  • S150 Determine whether the user's facial image collected by the camera is received.
  • The user's facial image collected by the camera is not the same image as the current user image in step S110, and its collection time also differs: the user's facial image is generally collected later than the current user image (for example, 5-10 s after it). That is, after recognition of the current user image is completed and the corresponding welcome information is displayed, the camera collects an image of the user's face.
  • S160 If the user's facial image collected by the camera is received, perform emotion recognition on the user's facial image to obtain corresponding current user emotion information.
  • The purpose of performing emotion recognition on the user's facial image is to determine the user's emotion type after seeing the welcome information. For example, if the user is happy after seeing the welcome information, the user is satisfied with its content; if the user is angry, the user is dissatisfied with its content; and if the user's mood is neutral, the user holds a neutral attitude toward its content.
  • the current user identification information and the corresponding welcome information are stored in a blockchain network, and the information is shared between different platforms through the blockchain.
  • the information stored in the blockchain network is used to find the welcome information corresponding to the current user identification information.
  • Blockchain is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms.
  • Blockchain is essentially a decentralized database: a chain of data blocks associated by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of that information (anti-counterfeiting) and to generate the next block.
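  • The chained-block idea can be illustrated with a toy hash chain built on SHA-256 from the standard library (a sketch only, not a real blockchain network client; the field names are assumptions):

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """Create a toy block that links a payload (e.g. current user
    identification information and its welcome information) to the
    previous block's hash, so tampering with any block invalidates
    every later hash in the chain."""
    body = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return {"prev": prev_hash, "data": payload,
            "hash": hashlib.sha256(body.encode("utf-8")).hexdigest()}
```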
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • step S160 includes:
  • The face detector in OpenCV (a cross-platform computer vision library implementing many general algorithms of image processing and computer vision) is used to obtain 68 facial feature points in the user's facial image.
  • Each of the mouth feature points numbered 16-20 can be placed in the same Cartesian coordinate system as the face bounding box.
  • The y value of the uppermost mouth feature point in the mouth feature point set is the maximum among mouth feature points 16-20, and the y value of the lowermost mouth feature point is the minimum among them.
  • The current user emotion information is then determined from the relationship between the user's mouth opening degree value and a preset mouth opening degree threshold.
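  • The mouth-opening measure described above can be sketched as follows (the threshold value 0.1 and all names are assumptions; the source only states that a value below the threshold maps to an angry emotion and a value at or above it to a happy emotion):

```python
def classify_emotion(mouth_points, face_box_length, threshold=0.1):
    """Compute the mouth opening degree as the vertical spacing between
    the uppermost and lowermost mouth feature points divided by the face
    bounding box length, then map it to an emotion as described above."""
    ys = [y for _, y in mouth_points]  # points given as (x, y) pairs
    opening = (max(ys) - min(ys)) / face_box_length
    return "happy" if opening >= threshold else "angry"
```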
  • Here the distinction between happy and unhappy emotions is used as a specific example; the method is not limited to identifying only happy and unhappy emotions.
  • step S162 includes:
  • A face bounding box creation instruction is called to correspondingly create the current face bounding box, whose initial bounding box is a preset initial face bounding box.
  • If the facial feature points in the user's facial image are not all located within the area of the current face bounding box, the length and width of the initial face bounding box are expanded according to a preset expansion ratio value to update the current face bounding box, and the process returns to the step of judging whether the facial feature points are all located within the area of the current face bounding box.
  • Once they are, the current face bounding box is taken as the face bounding box enclosing all the facial feature points in the user's facial image.
  • In this way, a face bounding box enclosing all the facial feature points in the user's facial image is obtained.
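  • The expand-until-enclosed loop can be sketched as follows (the center-plus-size box representation, the 1.1 expansion ratio, and the iteration cap are assumptions):

```python
def fit_bounding_box(points, box, scale=1.1, max_iters=50):
    """Sketch of the described loop: start from a preset initial bounding
    box (cx, cy, w, h) and expand its length and width by a fixed ratio
    until every feature point lies inside it."""
    cx, cy, w, h = box
    for _ in range(max_iters):
        inside = all(abs(x - cx) <= w / 2 and abs(y - cy) <= h / 2
                     for x, y in points)
        if inside:
            break
        w *= scale  # expand width by the preset expansion ratio
        h *= scale  # expand height by the same ratio
    return (cx, cy, w, h)
```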
  • S170 Determine whether the current user emotion information is a happy mood or an angry mood.
  • After the current user emotion information of the to-be-welcomed user is obtained through emotion recognition, it is necessary to judge whether the user's current emotion is happy or angry, so as to determine the user's satisfaction with the welcome information.
  • If the user is happy after seeing the welcome information (i.e., a happy emotion), the user is satisfied with its content; if the user is angry after seeing it (i.e., an angry emotion), the user is dissatisfied with its content.
  • In this way, the next time the user is greeted, the retained welcome information corresponding to the user is displayed.
  • step S170 the method further includes:
  • If the current user emotion information is an angry emotion, a to-be-adjusted tag is automatically added to the welcome information.
  • In this way, the next time the same user is greeted, the modified welcome information corresponding to the user can be called up for display.
  • the method further includes:
  • The current to-be-recommended welcome information in the server can then be obtained, and the welcome information tagged to-be-adjusted is updated to that current to-be-recommended welcome information. In this way, welcome information the user is not satisfied with can be adjusted in time.
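  • The tagging and replacement logic of steps S170-S180 and this follow-up can be sketched as (the field names and dictionary representation are illustrative assumptions):

```python
def tag_welcome_info(record, emotion, recommended_pool):
    """Tag the welcome information 'retainable' on a happy emotion; tag it
    'to-be-adjusted' on an angry one and replace its content with the
    current to-be-recommended welcome information, if any."""
    if emotion == "happy":
        record["tag"] = "retainable"
    elif emotion == "angry":
        record["tag"] = "to-be-adjusted"
        if recommended_pool:
            record["content"] = recommended_pool[0]
    return record
```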
  • This method realizes that the generation and deployment of welcome information is more efficient, and can obtain the user's feedback information on the welcome information based on the user's emotion recognition result.
  • the embodiment of the present application also provides an information generation device based on face recognition, and the information generation device based on face recognition is used to execute any embodiment of the aforementioned information generation method based on face recognition.
  • FIG. 5 is a schematic block diagram of an information generating apparatus based on face recognition provided by an embodiment of the present application.
  • The information generation device 100 based on face recognition includes: a first image judging unit 110, a user identification unit 120, a welcome information judging unit 130, a welcome information sending unit 140, a second image judging unit 150, an emotion recognition unit 160, an emotion judging unit 170, and a first label setting unit 180.
  • the first image judging unit 110 is used to judge whether the current user image collected by the camera is received.
  • the user identification unit 120 is configured to, if the current user image collected by the camera is received, perform face recognition on the current user image to obtain corresponding current user identification information.
  • the user identification unit 120 includes:
  • the picture preprocessing unit 121 is configured to perform grayscale correction and noise filtering on the current user image to obtain a preprocessed picture
  • the feature vector obtaining unit 122 is configured to obtain a feature vector corresponding to the preprocessed picture through a convolutional neural network model
  • the vector comparison unit 123 is configured to compare the feature vector corresponding to the current user image with the feature templates stored in the face database, to judge whether the feature templates stored in the face database include a feature template identical to the feature vector corresponding to the current user image;
  • the user information obtaining unit 124 is configured to obtain corresponding current user identification information if there is a feature template that is the same as the feature vector corresponding to the current user image in the feature templates stored in the face database.
  • the user identification unit 120 further includes:
  • the matching failure prompt unit 125 is configured to, if none of the feature templates stored in the face database is identical to the feature vector corresponding to the current user image, issue a prompt that the target user is not recognized, and execute the step of ending the process.
  • the welcoming information judging unit 130 is used to determine whether there is welcoming information corresponding to the current user identification information stored; wherein the welcoming information includes welcoming text information, welcoming audio information, or welcoming video information.
  • the welcome information sending unit 140 is configured to, if the welcome information corresponding to the current user identification information is stored, send the welcome information to the corresponding display device for display.
  • the second image judging unit 150 is used to judge whether the user's facial image collected by the camera is received.
  • the emotion recognition unit 160 is configured to, if the user's facial image collected by the camera is received, perform emotion recognition on the user's facial image to obtain corresponding current user emotion information.
  • the emotion recognition unit 160 includes:
  • the facial feature point acquiring unit 161 is configured to acquire the facial feature points in the user's facial image through a pre-stored face detector
  • the face bounding box obtaining unit 162 is configured to obtain a face bounding box that encloses all the facial feature points in the user's facial image;
  • the mouth feature point set acquiring unit 163 is configured to acquire the mouth feature point set included in the facial feature points in the user's facial image
  • the user mouth opening degree value obtaining unit 164 is used to obtain the vertical coordinate spacing value between the uppermost and lowermost mouth feature points in the mouth feature point set, and to obtain the corresponding user mouth opening degree value from the ratio of the vertical coordinate spacing value to the length of the face bounding box;
  • the mouth opening degree value comparison unit 165 is configured to determine whether the user's mouth opening degree value is less than a preset mouth opening degree threshold
  • An angry emotion recognition unit 166 configured to set the value of the current user's emotion information as an angry emotion if the user's mouth opening degree value is less than the mouth opening degree threshold;
  • the happy emotion recognition unit 167 is configured to set the value of the current user's emotion information as happy emotion if the user's mouth opening degree value is greater than or equal to the mouth opening degree threshold value.
  • the face bounding box acquiring unit 162 includes:
  • the initial bounding box creation unit is used to call the face bounding box creation instruction to correspondingly create the current face bounding box; wherein the initial bounding box of the current face bounding box is a preset initial face bounding box;
  • an in-area judging unit, used to judge whether the facial feature points in the user's facial image are all located within the area of the current face bounding box;
  • the bounding box adjustment unit is configured to, if the facial feature points in the user's facial image are not all located within the area of the current face bounding box, expand the length and width of the initial face bounding box according to a preset expansion ratio value to update the current face bounding box, and return to the step of judging whether the facial feature points in the user's facial image are all located within the area of the current face bounding box;
  • the bounding box selection unit is configured to, if the facial feature points in the user's facial image are all located within the area of the current face bounding box, take the current face bounding box as the face bounding box enclosing all the facial feature points in the user's facial image.
  • the emotion judging unit 170 is used to judge whether the current user emotion information is a happy mood or an angry mood.
  • the first tag setting unit 180 is configured to automatically add a retainable tag to the welcome information if the current user emotion information is a happy emotion.
  • the information generating apparatus 100 based on face recognition further includes:
  • the second label setting unit is configured to automatically add a to-be-adjusted label to the welcome information if the current user emotion information is an angry emotion.
  • the information generating apparatus 100 based on face recognition further includes:
  • the recommended welcome information update unit is used to call the local current to-be-recommended welcome information and update the welcome information labeled to-be-adjusted to that current to-be-recommended welcome information.
  • the above-mentioned information generating apparatus based on face recognition can be implemented in the form of a computer program, and the computer program can be run on a computer device as shown in FIG. 8.
  • FIG. 8 is a schematic block diagram of a computer device according to an embodiment of the present application.
  • the computer device 500 is a server, and the server may be an independent server or a server cluster composed of multiple servers.
  • the computer device 500 includes a processor 502, a memory, and a network interface 505 connected through a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
  • the non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032.
  • when the computer program 5032 is executed, it can cause the processor 502 to execute the information generation method based on face recognition.
  • the processor 502 is used to provide calculation and control capabilities, and support the operation of the entire computer device 500.
  • the internal memory 504 provides an environment for the operation of the computer program 5032 in the non-volatile storage medium 503.
  • when the computer program 5032 is run, it can cause the processor 502 to execute the information generation method based on face recognition.
  • the network interface 505 is used for network communication, such as providing data information transmission.
  • the structure shown in FIG. 8 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device 500 to which the solution of the present application is applied.
  • the specific computer device 500 may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
  • the processor 502 is configured to run a computer program 5032 stored in a memory to implement the information generation method based on face recognition disclosed in the embodiment of the present application.
  • the embodiment of the computer device shown in FIG. 8 does not constitute a limitation on the specific configuration of the computer device.
  • the computer device may include more or fewer components than those shown in the figure, combine certain components, or have a different component arrangement.
  • the computer device may only include a memory and a processor. In such an embodiment, the structures and functions of the memory and the processor are consistent with the embodiment shown in FIG. 8 and will not be repeated here.
  • the processor 502 may be a central processing unit (CPU), and the processor 502 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor.
  • a computer-readable storage medium may be a non-volatile computer-readable storage medium, or may be a volatile computer-readable storage medium.
  • the computer-readable storage medium stores a computer program, where the computer program is executed by a processor to implement the information generation method based on face recognition disclosed in the embodiments of the present application.
  • the disclosed equipment, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division methods, or units with the same function may be combined into one unit. For example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms of connection.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a storage medium.
  • the technical solution of this application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, and other media that can store program code.
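The expand-and-recheck loop performed by the bounding box adjustment and selection units above can be sketched as follows. This is a minimal illustration only; the function name, the centre-based box representation, and the 10% per-iteration expansion ratio are assumptions made for the sketch, not values taken from the application.

```python
def fit_face_bounding_box(feature_points, initial_box, scale=0.1):
    """Expand a face bounding box until all facial feature points fall inside it.

    feature_points: iterable of (x, y) landmark coordinates.
    initial_box: (cx, cy, w, h) with (cx, cy) the box centre (assumed layout).
    scale: preset expansion ratio applied to length and width per iteration.
    """
    cx, cy, w, h = initial_box

    def contains_all(w, h):
        # check whether every feature point lies within the current box
        return all(abs(px - cx) <= w / 2 and abs(py - cy) <= h / 2
                   for px, py in feature_points)

    # if not all points are inside, expand and return to the check step
    while not contains_all(w, h):
        w *= 1 + scale
        h *= 1 + scale
    # the current box now encloses all feature points and is selected
    return (cx, cy, w, h)
```

A box that already contains every landmark is returned unchanged; otherwise the loop grows it geometrically until the containment test passes.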
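The emotion-driven tagging described by the emotion judging unit, the two tag setting units, and the recommended welcome information update unit can likewise be sketched. The tag strings, the dictionary layout, and the way the locally stored recommended welcome message is supplied are illustrative assumptions, not details from the application.

```python
def tag_welcome_message(welcome, current_user_emotion, recommended_text):
    """Attach a feedback tag to a welcome message based on the recognised emotion.

    A happy emotion marks the message as retainable; an angry emotion marks it
    as to-be-adjusted and replaces it with the locally stored welcome message
    currently recommended.
    """
    if current_user_emotion == "happy":
        welcome["tag"] = "retainable"
    elif current_user_emotion == "angry":
        welcome["tag"] = "to_be_adjusted"
        welcome["text"] = recommended_text
    return welcome
```

In this sketch the emotion label is assumed to arrive as a string from the upstream emotion recognition step; any other emotion leaves the message untagged.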

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed are an information generation method and apparatus based on face recognition, a computer device, and a storage medium, belonging to the technical field of image recognition in artificial intelligence. The method comprises the following steps: if a current user image captured by a camera is received, performing face recognition on the current user image to obtain corresponding current user identity recognition information; if welcome information corresponding to the current user identity recognition information is stored, sending the welcome information to a corresponding display device for display; if a user facial image captured by the camera is received, performing emotion recognition on the user facial image to obtain corresponding current user emotion information; and if the current user emotion information indicates a happy emotion, automatically adding, to the welcome information, a tag indicating that the information can be retained. The present invention efficiently generates and deploys welcome information and can acquire user feedback on the welcome information on the basis of user emotion recognition results.
PCT/CN2020/103796 2020-04-28 2020-07-23 Method and apparatus for generating information on the basis of facial recognition, computer device, and storage medium WO2021217912A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010351274.0 2020-04-28
CN202010351274.0A CN111639534A (zh) 2020-04-28 2020-04-28 Information generation method and apparatus based on face recognition, and computer device

Publications (1)

Publication Number Publication Date
WO2021217912A1 true WO2021217912A1 (fr) 2021-11-04

Family

ID=72331889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103796 WO2021217912A1 (fr) 2020-07-23 2020-04-28 Method and apparatus for generating information on the basis of facial recognition, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN111639534A (fr)
WO (1) WO2021217912A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115052193A (zh) * 2022-05-25 2022-09-13 天翼爱音乐文化科技有限公司 Video recommendation method, system and apparatus, and storage medium
WO2023173785A1 (fr) * 2022-03-18 2023-09-21 珠海优特电力科技股份有限公司 Access permission verification method, device and system, and identity authentication terminal
GB2620664A (en) * 2022-03-18 2024-01-17 Zhuhai Unitech Power Tech Co Access permission verification method, device, and system and identity authentication terminal

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112936299A (zh) * 2021-01-25 2021-06-11 浙江合众新能源汽车有限公司 Face recognition-based welcome system for users boarding a vehicle
WO2022257044A1 (fr) * 2021-06-09 2022-12-15 京东方科技集团股份有限公司 Interaction method, interaction system and electronic device
CN113408421B (zh) * 2021-06-21 2023-04-07 湖北央中巨石信息技术有限公司 Blockchain-based face recognition method and system
CN115551139A (zh) * 2022-10-11 2022-12-30 扬州华彩光电有限公司 Artificial intelligence-based LED lamp visual interaction method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9117198B1 (en) * 2010-02-22 2015-08-25 Iheartmedia Management Services, Inc. Listener survey tool with time stamping
CN107845168A (zh) * 2017-10-26 2018-03-27 广州云从信息科技有限公司 VIP identification method based on face recognition authentication
CN208156743U (zh) * 2018-05-11 2018-11-27 江苏腾武信息技术有限公司 Intelligent face recognition welcome system
CN109598827A (zh) * 2018-09-25 2019-04-09 深圳神目信息技术有限公司 Hybrid face welcome system and working method thereof
CN109934705A (zh) * 2019-03-27 2019-06-25 浪潮金融信息技术有限公司 Omni-channel customer welcome method applied to banks

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023173785A1 (fr) * 2022-03-18 2023-09-21 珠海优特电力科技股份有限公司 Access permission verification method, device and system, and identity authentication terminal
GB2620664A (en) * 2022-03-18 2024-01-17 Zhuhai Unitech Power Tech Co Access permission verification method, device, and system and identity authentication terminal
CN115052193A (zh) * 2022-05-25 2022-09-13 天翼爱音乐文化科技有限公司 Video recommendation method, system and apparatus, and storage medium
CN115052193B (zh) * 2022-05-25 2023-07-18 天翼爱音乐文化科技有限公司 Video recommendation method, system and apparatus, and storage medium

Also Published As

Publication number Publication date
CN111639534A (zh) 2020-09-08

Similar Documents

Publication Publication Date Title
WO2021217912A1 (fr) Method and apparatus for generating information on the basis of facial recognition, computer device, and storage medium
US11169675B1 (en) Creator profile user interface
CN115004145B (zh) 针对远程内容源设备的子显示屏指定
US20210405831A1 (en) Updating avatar clothing for a user of a messaging system
EP4128672A1 (fr) Combinaison d'un premier contenu d'interface utilisateur dans une seconde interface utilisateur
JP2019117646A (ja) パーソナル感情アイコンを提供するための方法及びシステム
WO2016026402A2 (fr) Système et procédés de création de bibliothèque d'expressions faciales d'utilisateurs pour applications de messagerie et de réseautage social
US20060018522A1 (en) System and method applying image-based face recognition for online profile browsing
US20210409535A1 (en) Updating an avatar status for a user of a messaging system
US20140281975A1 (en) System for adaptive selection and presentation of context-based media in communications
WO2015054428A1 (fr) Systèmes et procédés d'ajout de métadonnées descriptives à un contenu numérique
WO2020207413A1 (fr) Content transfer method, apparatus and device
US20230410811A1 (en) Augmented reality-based translation of speech in association with travel
US11943283B2 (en) Dynamically assigning storage locations for messaging system data
US11651019B2 (en) Contextual media filter search
US20200265238A1 (en) Methods and Systems for Identification and Augmentation of Video Content
US20200218772A1 (en) Method and apparatus for dynamically identifying a user of an account for posting images
CN114846433A (zh) 指定子显示屏的基于手势的方法和系统
CN114930280A (zh) 子显示屏通知处理
CN104243276A (zh) 一种联系人推荐方法及装置
US20230022826A1 (en) Media content discard notification system
KR20230063772A (ko) 메타버스 개인 맞춤형 콘텐츠 생성 및 인증 방법 및 그를 위한 장치 및 시스템
CN114830076A (zh) 子显示屏指定和共享
US20210367798A1 (en) Group contact lists generation
US20210304754A1 (en) Speech-based selection of augmented reality content

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933078

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21/02/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20933078

Country of ref document: EP

Kind code of ref document: A1