WO2021179706A1 - Conference sign-in method, system, computer device, and computer-readable storage medium - Google Patents

Conference sign-in method, system, computer device, and computer-readable storage medium

Info

Publication number
WO2021179706A1
WO2021179706A1 (PCT/CN2020/134832, CN2020134832W)
Authority
WO
WIPO (PCT)
Prior art keywords
participants
attributes
participant
conference
information
Prior art date
Application number
PCT/CN2020/134832
Other languages
English (en)
French (fr)
Inventor
杨志斌
林亚玲
陈斌
宋晨
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2021179706A1

Classifications

    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/10Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • This application relates to the technical field of conference sign-in, and in particular to a conference sign-in method, system, computer device, and computer-readable storage medium.
  • Conference sign-in methods that can complete sign-in automatically and improve sign-in efficiency already exist. For example, the Chinese patent application with application number CN201811409191.1 discloses a conference sign-in method, device, and sign-in equipment: the declaration information of participants is obtained, where the list information includes the correspondence between participants' names and characteristic images; face images of participants are collected and matched against the characteristic images; and if the matching succeeds, the sign-in status of the participant corresponding to the characteristic image is marked as signed in successfully. However, the inventors realized that the method of CN201811409191.1 lacks scientific rigor and data support, and because it cannot build portraits of the participants, the value conversion of the conference is low and no strong data support can be provided.
  • The purpose of this application is to provide a conference sign-in method, system, computer device, and computer-readable storage medium that can compile statistics on successfully signed-in participants through portrait attribute analysis, so as to obtain true and readable data.
  • To achieve this purpose, this application provides a conference sign-in method that includes the following steps. Step S10: obtain conference information and the declaration information of participants. Step S20: take on-site photos of the participants. Step S30: extract face recognition feature values from the on-site photos, compare whether each participant's face recognition feature values match the participant's declaration information, and obtain the on-site photos and face recognition feature values of the matched participants. Step S40: decompose and analyze the face recognition feature values of the matched participants to obtain the gender, emotion, age, and skin color attributes of the participants; perform clothing feature extraction and analysis on the on-site photos of the matched participants to obtain their clothing attributes. Step S50: compile dimensional information statistics from the gender, emotion, age, skin color, and clothing attributes of the participants, and judge conference participation according to the dimensional information and the number of participants.
  • This application also provides a conference sign-in system, which includes: a face entry module, used to store conference information and the declaration information of participants; a face recognition module, used to take on-site photos of participants, extract face recognition feature values from the on-site photos, compare whether each participant's face recognition feature values match the participant's declaration information, and obtain the on-site photos and face recognition feature values of the matched participants; and a user attribute analysis module, used to decompose and analyze the face recognition feature values of the matched participants to obtain their gender, emotion, age, and skin color attributes, to perform clothing feature extraction and analysis on the on-site photos of the matched participants to obtain their clothing attributes, to compile dimensional information statistics from the gender, emotion, age, skin color, and clothing attributes of the participants, and to judge conference participation according to the dimensional information and the number of participants.
  • The present application also provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processor executes the computer program, the following steps are implemented: obtain conference information and the declaration information of participants; take on-site photos of the participants; extract face recognition feature values from the on-site photos, compare whether each participant's face recognition feature values match the participant's declaration information, and obtain the on-site photos and face recognition feature values of the matched participants; decompose and analyze the face recognition feature values of the matched participants to obtain their gender, emotion, age, and skin color attributes; perform clothing feature extraction and analysis on the on-site photos of the matched participants to obtain their clothing attributes; compile dimensional information statistics from the gender, emotion, age, skin color, and clothing attributes of the participants, and judge conference participation according to the dimensional information and the number of participants.
  • This application also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the following steps are implemented: obtain conference information and the declaration information of participants; take on-site photos of the participants; extract face recognition feature values from the on-site photos, compare whether each participant's face recognition feature values match the participant's declaration information, and obtain the on-site photos and face recognition feature values of the matched participants; decompose and analyze the face recognition feature values of the matched participants to obtain their gender, emotion, age, and skin color attributes; perform clothing feature extraction and analysis on the on-site photos of the matched participants to obtain their clothing attributes; compile dimensional information statistics from the gender, emotion, age, skin color, and clothing attributes of the participants, and judge conference participation according to the dimensional information and the number of participants.
  • This application can count successfully signed-in participants through portrait attribute analysis to obtain real and readable data, which not only provides strong data support but also greatly improves the value conversion of the conference.
  • Fig. 1 is a flowchart of an embodiment of the conference sign-in method of this application.
  • Fig. 2 is a detailed flowchart of step S10 in Fig. 1.
  • Fig. 3 is a flowchart of another embodiment of the conference sign-in method of this application.
  • Fig. 4 is a module diagram of the conference sign-in system of this application.
  • Fig. 5 is a schematic diagram of the hardware structure of a computer device implementing the conference sign-in method of this application.
  • The technical solution of this application can be applied to the fields of artificial intelligence, smart city, blockchain, and/or big data technology. Optionally, the data involved in this application, such as declaration information, feature values, attributes, and/or conference participation, can be stored in a database or in a blockchain, for example through distributed storage on a blockchain; this application does not limit the storage manner.
  • Referring to Fig. 1, this application provides a conference sign-in method that includes the following steps.
  • Step S10 Obtain meeting information and application information of participants.
  • Step S20 Take on-site photos of the participants.
  • Step S30 Extract the facial recognition feature values from the live photos, compare whether the participant’s facial recognition feature values match the participant’s application information, and obtain the matching participants’ live photos and facial recognition feature values .
  • Step S40: Decompose and analyze the face recognition feature values of the matched participants to obtain the gender, emotion, age, and skin color attributes of the participants; perform clothing feature extraction and analysis on the on-site photos of the matched participants to obtain their clothing attributes.
  • The analysis process of the gender attribute: reduce the participant's face recognition feature values from a high-dimensional space to a low-dimensional space, find the most similar gender sample in the low-dimensional space, and obtain the gender from that sample.
  • The analysis process of the emotion attribute: the emotion is obtained by analyzing the mouth position, the degree of mouth opening, and the mouth angle in the face recognition feature values; the emotion attributes include joy, anger, sadness, panic, and disgust.
  • The analysis process of the age attribute: extract the feature values of the forehead, eye-corner, and cheek regions from the participant's face recognition feature values, and use a flexible model that combines the face shape and wrinkles to obtain the age.
  • The analysis process of the skin color attribute: starting from the participant's face recognition feature values, apply image segmentation preprocessing based on skin color information and frontal face verification, cluster the skin color using color-space characteristics, perform a skin color mapping to generate a binary image, and then obtain the skin color according to the shape characteristics of the face.
  • Step S50 Perform statistics on dimensional information according to the gender attributes, emotional attributes, age attributes, skin color attributes, and clothing attributes of the participants, and determine the degree of participation in the meeting according to the dimensional information and the number of participants.
  • As shown in Fig. 2, step S10 further includes: Step S110: pre-establish a preset conference database and load conference information into it; the conference information includes the conference theme, conference time, conference location, and number of participants. Step S120: load the declaration information of participants into the preset conference database. Step S130: check the declaration information of the participants to screen out the participants who meet the participation requirements.
  • The participant declaration information loaded in step S120 includes several required items. The required items can also form part of the dimensional information in the participants' portrait attributes; to keep the registration procedure short, only dimensional information that cannot be obtained from the on-site photos is set as a required item.
  • In step S120, the declaration information of participants can be loaded by single addition, batch addition, or QR code addition. The declaration information of a participant includes the participant's name, ID number, mobile phone number, and photo; among these, the name, ID number, and mobile phone number are required items.
  • a. Single addition: enter one participant's name, ID number, mobile phone number, and photo.
  • b. Batch addition: download the batch-addition Excel template, then enter and upload the names, ID numbers, mobile phone numbers, and photos of the participants in batches.
  • c. QR code addition: a participant obtains and scans the conference QR code, then enters his or her own name, ID number, mobile phone number, and photo.
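As an illustration of the batch-addition path, the sketch below reads such a template with openpyxl and keeps only rows whose required items are filled in. The column order and field names are assumptions made for the example, not part of the application.

```python
# Minimal sketch of batch addition, assuming the template columns are
# name, ID number, mobile phone number, photo path (hypothetical order).
from openpyxl import load_workbook

REQUIRED = ("name", "id_number", "mobile")  # required items per the declaration rules

def load_batch_declarations(xlsx_path):
    """Read the batch-addition Excel template and return one dict per participant."""
    ws = load_workbook(xlsx_path, read_only=True).active
    declarations = []
    for name, id_number, mobile, photo_path in ws.iter_rows(min_row=2, values_only=True):
        record = {"name": name, "id_number": id_number,
                  "mobile": mobile, "photo_path": photo_path}
        # keep only rows whose required items are present
        if all(record[k] for k in REQUIRED):
            declarations.append(record)
    return declarations
```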
  • The checking in step S130 includes photo quality detection and ID number detection.
  • Photo quality detection: liveness detection technology is used to judge whether the quality of a participant's photo meets the requirements of face recognition, for example whether the background of the photo, the orientation of the face, the brightness of the lighting, and the proportion of the frame occupied by the face all meet the requirements of face recognition; the photos that pass the quality detection become the primary screening photos.
  • ID number detection: the public security V3 system is used to further compare each primary screening photo with the ID number associated with it, so as to screen out the declaration information of participants who meet the participation requirements as the declaration information of the expected participants.
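A rough sketch of the local part of the photo quality detection is shown below, using OpenCV for brightness and face-proportion checks. The thresholds are illustrative assumptions, and the background/pose checks and the public security V3 comparison are external steps not reproduced here.

```python
import cv2

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def passes_quality_check(photo_path, min_brightness=60, max_brightness=200, min_face_ratio=0.05):
    """Return True if the photo looks usable for face recognition (assumed thresholds)."""
    img = cv2.imread(photo_path)
    if img is None:
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    if not (min_brightness <= gray.mean() <= max_brightness):
        return False                      # lighting unsuitable for face recognition
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        return False                      # expect exactly one clear face
    x, y, w, h = faces[0]
    face_ratio = (w * h) / (img.shape[0] * img.shape[1])
    return face_ratio >= min_face_ratio   # face must occupy a sufficient share of the frame
```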
  • The analysis process of step S40 includes the following. Portrait attribute analysis of the gender attribute: reduce the participant's face recognition feature values from a high-dimensional space to a low-dimensional space, find the most similar gender sample in the low-dimensional space, and obtain the gender from that sample. Gender is determined because the conference sign-in method of this application can support many types of conferences, such as product launches and marketing events, and can analyze the gender characteristics of the participants in a targeted way, so that precise advertising can be placed and promoted during and after the conference and ineffective promotion is reduced.
  • In step S10, the conference sign-in method of this application reduces the number of required items. For example, gender is not a required item; instead, the on-site photos of the participants are obtained in step S20 and the gender of the participants is identified in step S40, supplementing and improving the participants' portrait attributes.
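The sketch below illustrates one way to realize this step, assuming face feature vectors have already been extracted and a set of gender-labeled samples is available: project both into a low-dimensional space with PCA and assign the gender of the nearest sample. The library choices and the number of components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def fit_gender_model(sample_features, sample_genders, n_components=32):
    """sample_features: (n, d) array of face feature vectors; sample_genders: labels."""
    pca = PCA(n_components=n_components).fit(sample_features)
    knn = KNeighborsClassifier(n_neighbors=1).fit(
        pca.transform(sample_features), sample_genders)
    return pca, knn

def predict_gender(pca, knn, face_feature):
    """Assign the gender of the most similar low-dimensional gender sample."""
    low_dim = pca.transform(np.asarray(face_feature).reshape(1, -1))
    return knn.predict(low_dim)[0]
```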
  • Portrait attribute analysis of the emotion attribute: from the facial feature value algorithm, extracted features such as the mouth position, the degree of mouth opening, and the mouth angle are used to determine whether the user's mood is happy or sad. Traditional conference analysis cannot accurately gauge the participation of attendees; the level of participation is mainly analyzed and judged through the emotion attribute, which can compute expression characteristics such as joy, anger, sadness, panic, and disgust. The user attribute analysis module adds the collection of this emotional dimension of information, letting the organizer understand user participation from data analysis.
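The following sketch shows how mouth openness and mouth-corner angle could be derived from facial landmarks and mapped onto the five emotion labels named above; the 68-point landmark convention, the thresholds, and the decision rule are all illustrative assumptions rather than the application's actual algorithm.

```python
import math
import numpy as np

def mouth_features(landmarks):
    """landmarks: (68, 2) array; returns (openness ratio, mouth-corner angle in degrees)."""
    pts = np.asarray(landmarks, dtype=float)
    top, bottom = pts[62], pts[66]          # inner upper / lower lip
    left, right = pts[48], pts[54]          # mouth corners
    width = np.linalg.norm(right - left)
    openness = np.linalg.norm(bottom - top) / (width + 1e-6)
    # positive angle: corners sit above the mouth centre (image y grows downwards)
    centre_y = (top[1] + bottom[1]) / 2.0
    corner_lift = centre_y - (left[1] + right[1]) / 2.0
    return openness, math.degrees(math.atan2(corner_lift, width))

def classify_emotion(landmarks):
    """Toy mapping from mouth features onto the five labels used in the text."""
    openness, angle = mouth_features(landmarks)
    if angle > 5 and openness > 0.25:
        return "joy"
    if openness > 0.5:
        return "panic"
    if angle < -5:
        return "sadness" if openness < 0.2 else "disgust"
    return "anger" if openness < 0.1 else "disgust"
```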
  • Portrait attribute analysis of the age attribute: when extracting facial feature values, a flexible model is used to organically combine the shape of the face with its wrinkles, fully extracting the feature values of regions such as the forehead, eye corners, and cheeks to infer the age.
  • When the declaration information is obtained in step S10, the participant's age is not a required field; the information is further supplemented and improved through face recognition to provide more specific portrait attributes of the participants.
  • statistics on the dimension of age characteristics can provide a clearer picture of the age distribution of participants.
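As a hedged illustration of this idea, the sketch below measures wrinkle (edge) density in the forehead, eye-corner, and cheek regions and feeds those region features to a generic regressor standing in for the flexible model; the region boxes, edge thresholds, and model choice are assumptions.

```python
import cv2
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def wrinkle_density(gray_face, box):
    """Edge density inside a region box (x, y, w, h) as a crude wrinkle measure."""
    x, y, w, h = box
    edges = cv2.Canny(gray_face[y:y + h, x:x + w], 60, 150)
    return float(np.count_nonzero(edges)) / edges.size

def age_features(gray_face, regions):
    """regions: dict of boxes for forehead, eye corners and cheeks (assumed given by alignment)."""
    keys = ("forehead", "left_eye_corner", "right_eye_corner", "left_cheek", "right_cheek")
    return np.array([wrinkle_density(gray_face, regions[k]) for k in keys])

def fit_age_model(feature_rows, ages):
    """Stand-in for the 'flexible model': a regressor over the region features."""
    return GradientBoostingRegressor().fit(feature_rows, ages)

# predicted_age = fit_age_model(X, y).predict(age_features(gray, regions).reshape(1, -1))[0]
```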
  • Portrait attribute analysis of the skin color attribute: likewise when extracting facial feature values, image segmentation preprocessing based on skin color information and frontal face verification are applied; using color-space characteristics, the skin color is cluster-analyzed, a skin color mapping is performed in the YCbCr space to generate a binary image, and the skin color is then confirmed according to the shape characteristics of the face. For images taken under different lighting, classification and optimization with a neural network are used to distinguish and discriminate skin colors.
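A minimal sketch of the YCbCr mapping step is given below, assuming the face crop is available as a BGR image; the Cb/Cr bounds are a commonly used heuristic range, not values taken from the application.

```python
import cv2
import numpy as np

def skin_binary_mask(face_bgr):
    """Binary image from a skin-color mapping in the YCbCr space."""
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)   # OpenCV channel order: Y, Cr, Cb
    lower = np.array([0, 133, 77], dtype=np.uint8)        # Cr 133-173, Cb 77-127 (heuristic)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)               # 255 where the pixel looks like skin

def mean_skin_tone(face_bgr):
    """Average colour over skin pixels, from which a skin-colour class could be derived."""
    mask = skin_binary_mask(face_bgr)
    if cv2.countNonZero(mask) == 0:
        return None
    return cv2.mean(face_bgr, mask=mask)[:3]              # average B, G, R over the mask
```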
  • Portrait attribute analysis of the clothing attribute: a trained convolutional neural network is used to extract features from the photos for image retrieval; clothes with high similarity values are retrieved, and the clothing type of the person in the photo is distinguished in this way. This application can also provide a fun experience matched to the clothing type, so that participants feel better when they sign in.
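The sketch below illustrates the retrieval idea with a pretrained torchvision ResNet-18 as the feature extractor and cosine similarity against a small labeled gallery; the choice of backbone and the gallery format are assumptions, not the application's trained network.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

_backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
_backbone.fc = torch.nn.Identity()          # keep the 512-d embedding, drop the classifier
_backbone.eval()

_prep = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(image_path):
    """L2-normalized CNN embedding of a (cropped) clothing photo."""
    x = _prep(Image.open(image_path).convert("RGB")).unsqueeze(0)
    return F.normalize(_backbone(x), dim=1).squeeze(0)

def clothing_type(query_path, gallery):
    """gallery: list of (label, embedding) pairs; return the label of the most similar item."""
    q = embed(query_path)
    label, _ = max(gallery, key=lambda item: float(torch.dot(q, item[1])))
    return label
```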
  • After the above attribute analyses are completed, the system automatically compiles the data statistics so that the data becomes readable.
  • The conference sign-in system of this application associates data based on the gender, age, emotion, clothing, and skin color attributes, combined with the venue, the time of appearance, and associated participants, and finally compiles statistics and outputs a complete portrait of each user, so that the using party can accurately judge the multi-dimensional information of the participants on the basis of this information.
  • Related information is obtained by classifying the portrait attributes of the participants who signed in successfully; the related information includes the distribution of age groups and the distribution of Chinese and Western attendees, and the personality of a person can be analyzed according to the clothing attributes.
  • In addition, by comparing the declaration information of the participants who signed in successfully with the declaration information of the expected participants, the ratio of on-site participants to expected participants can be calculated, which the organizer can use as a reference for the actual degree of participation.
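Once each matched participant carries a portrait record, the per-attribute distributions reported after the meeting reduce to simple counting. A minimal sketch, assuming one dictionary per participant with the attribute keys named above, follows.

```python
from collections import Counter

def dimension_statistics(participants):
    """participants: list of dicts with gender / age_band / skin / clothing / emotion keys."""
    dims = ("gender", "age_band", "skin", "clothing", "emotion")
    return {dim: Counter(p[dim] for p in participants) for dim in dims}

# e.g. dimension_statistics(records)["emotion"]
# -> Counter({'joy': 60, 'disgust': 25, 'anger': 15})
```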
  • Referring to Fig. 3, this application provides a conference sign-in method that includes the following steps.
  • Step S10 Obtain meeting information and application information of participants.
  • Step S12: Download the declaration information of the expected participants to serve as the recognition training sample set; the declaration information of a participant includes the participant's name, ID number, mobile phone number, and photo.
  • Step S13 Perform face recognition feature value extraction on the photos of the participants in the recognition training sample set to obtain the first face feature value combination.
  • Step S20 Take on-site photos of the participants.
  • Step S32: Extract face recognition feature values from the on-site photo to obtain a second face feature value.
  • Step S34: Judge whether the second face feature value matches the first face feature value combination. If the second face feature value matches one of the first face features in the first face feature value combination, execute step S36; if the second face feature value matches none of the first face features in the first face feature value combination, execute step S38.
  • Step S36 Prompt that the sign-in is successful, and save the matching on-site photos, and then execute step S40.
  • Step S38 Prompt sign-in failure.
  • Step S40: Decompose and analyze the face recognition feature values of the matched participants to obtain the gender, emotion, age, and skin color portrait attributes of the participants; perform clothing feature extraction and analysis on the on-site photos of the matched participants to obtain their clothing attributes.
  • Step S50 Perform statistics on dimensional information according to the gender attributes, emotional attributes, age attributes, skin color attributes, and clothing attributes of the participants, and determine the degree of participation in the meeting according to the dimensional information and the number of participants.
  • By extracting and comparing face feature values, participants sign in by face recognition, which effectively prevents impersonated signatures; the matched on-site photos are saved using the recognition result, giving continuity to pre-meeting declaration, in-meeting recognition, and post-meeting statistics and excluding irrelevant information. Assuming that 100 matched participants were obtained in total, the post-meeting statistics of the dimensional information are as follows.
  • Step S50 obtains, from the gender attributes of the participants: male: 80 people, female: 20 people.
  • Step S50 obtains, from the age attributes of the participants: 20-30 years old: 20 people, 30-40 years old: 51 people, 40-50 years old: 8 people, over 50 years old: 21 people.
  • Step S50 obtains, from the skin color attributes of the participants: black: 5 people, white: 25 people, yellow: 70 people.
  • Step S50 obtains, from the clothing attributes of the participants: formal suits: 60 people, casual suits: 25 people, jackets: 10 people, windbreakers: 5 people.
  • Step S50 obtains, from the emotion attributes of the participants: joy: 60 people, anger: 15 people, sadness: 0 people, panic: 0 people, disgust: 25 people.
  • Participation can be obtained from the proportion of joyful people in the emotion attribute: the higher the proportion of joyful people, the higher the participation; the lower the proportion, the lower the participation.
  • The level of participation can be obtained by comparison against a set participation threshold. Assuming the participation threshold is 0.5, the emotion attribute statistics give 60/100 > 0.5, indicating high participation. If the dimensional statistics of the emotion attribute were instead joy: 10 people, anger: 25 people, sadness: 0 people, panic: 0 people, disgust: 65 people, the statistics would give 10/100 < 0.5, indicating low participation.
  • Participation can also be reflected by a score calculated by weighting the different emotion attributes. For example, the weight of joy is 5, the weight of anger is 4, the weight of sadness is 3, the weight of panic is 2, and the weight of disgust is 1, and score = (weight of joy × number of joyful people) + (weight of anger × number of angry people) + (weight of sadness × number of sad people) + (weight of panic × number of panicked people) + (weight of disgust × number of disgusted people). The level of participation is then compared against a participation threshold calculated from the number of participants; taking 100 participants as an example, the participation threshold is 350. The higher the score, the higher the participation; the lower the score, the lower the participation.
  • In the example above, the weighted emotion score is 385 > 350, indicating a high degree of participation.
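Both participation measures can be written down directly. The sketch below uses the weights and thresholds quoted in the text and reproduces the 385-point example.

```python
EMOTION_WEIGHTS = {"joy": 5, "anger": 4, "sadness": 3, "panic": 2, "disgust": 1}

def participation_by_ratio(emotion_counts, threshold=0.5):
    """Share-of-joy test: 'high' if joyful people exceed the threshold fraction."""
    total = sum(emotion_counts.values())
    return "high" if total and emotion_counts.get("joy", 0) / total > threshold else "low"

def participation_by_score(emotion_counts, threshold=350):
    """Weighted emotion score compared against a headcount-dependent threshold."""
    score = sum(EMOTION_WEIGHTS[e] * n for e, n in emotion_counts.items())
    return score, ("high" if score > threshold else "low")

# Example from the text (100 matched participants):
# participation_by_score({"joy": 60, "anger": 15, "sadness": 0, "panic": 0, "disgust": 25})
# -> (385, "high")
```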
  • Referring to Fig. 4, this application provides a conference sign-in system 1, which includes: a face entry module 10, used to store conference information and the declaration information of participants; a face recognition module 20, used to take on-site photos of participants, extract face recognition feature values from the on-site photos, compare whether each participant's face recognition feature values match the participant's declaration information, and obtain the on-site photos and face recognition feature values of the matched participants; and a user attribute analysis module 30, used to decompose and analyze the face recognition feature values of the matched participants to obtain their gender, emotion, age, and skin color attributes, and to perform clothing feature extraction and analysis on the on-site photos of the matched participants to obtain their clothing attributes.
  • The analysis process of the gender attribute: reduce the participant's face recognition feature values from a high-dimensional space to a low-dimensional space, find the most similar gender sample in the low-dimensional space, and obtain the gender from that sample.
  • The analysis process of the emotion attribute: the emotion is obtained by analyzing the mouth position, the degree of mouth opening, and the mouth angle in the face recognition feature values; the emotion attributes include joy, anger, sadness, panic, and disgust.
  • The analysis process of the age attribute: extract the feature values of the forehead, eye-corner, and cheek regions from the participant's face recognition feature values, and use a flexible model that combines the face shape and wrinkles to obtain the age.
  • The analysis process of the skin color attribute: starting from the participant's face recognition feature values, apply image segmentation preprocessing based on skin color information and frontal face verification, cluster the skin color using color-space characteristics, perform a skin color mapping to generate a binary image, and then obtain the skin color according to the shape characteristics of the face.
  • The user attribute analysis module 30 also compiles dimensional information statistics from the gender, emotion, age, skin color, and clothing attributes of the participants, and judges conference participation according to the dimensional information and the number of participants.
  • The conference information stored by the face entry module 10 includes the conference theme, conference time, conference location, and number of participants.
  • The participant declaration information stored in the face entry module 10 includes several required items. The required items can also form part of the dimensional information in the participants' portrait attributes; to keep the registration procedure short, only dimensional information that cannot be obtained from the on-site photos is set as a required item. For example, the declaration information of a participant includes the participant's name, ID number, mobile phone number, and photo; among these, the name, ID number, and mobile phone number are required items.
  • The face entry module 10 can load the declaration information of participants by single addition, batch addition, or QR code addition.
  • a. Single addition: enter one participant's name, ID number, mobile phone number, and photo.
  • b. Batch addition: download the batch-addition Excel template, then enter and upload the names, ID numbers, mobile phone numbers, and photos of the participants in batches.
  • c. QR code addition: a participant obtains and scans the conference QR code, then enters his or her own name, ID number, mobile phone number, and photo.
  • The face entry module 10 is also used to check the declaration information of participants to screen out the participants who meet the participation requirements; the checking includes photo quality detection and ID number detection.
  • Photo quality detection: liveness detection technology is used to judge whether the quality of a participant's photo meets the requirements of face recognition, for example whether the background of the photo, the orientation of the face, the brightness of the lighting, and the proportion of the frame occupied by the face all meet the requirements of face recognition; the photos that pass the quality detection become the primary screening photos.
  • ID number detection: the public security V3 system is used to further compare each primary screening photo with the ID number associated with it, so as to screen out the declaration information of participants who meet the participation requirements as the declaration information of the expected participants.
  • The face recognition module 20 is specifically configured to: download the declaration information of the expected participants as the recognition training sample set; extract face recognition feature values from the photos of the participants in the recognition training sample set to obtain a first face feature value combination; extract face recognition feature values from the on-site photo to obtain a second face feature value; and judge whether the second face feature value matches the first face feature value combination. If the second face feature value matches one of the first face features in the combination, a successful sign-in is prompted and the matched on-site photo is saved; if the second face feature value matches none of the first face features in the combination, a sign-in failure is prompted.
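A minimal sketch of that matching decision is shown below, assuming the first and second face feature values are fixed-length embedding vectors compared with cosine similarity; the 0.6 threshold is an illustrative assumption.

```python
import numpy as np

def match_sign_in(second_feature, first_feature_combination, threshold=0.6):
    """first_feature_combination: dict mapping participant id -> declared feature vector."""
    q = np.asarray(second_feature, dtype=float)
    q = q / (np.linalg.norm(q) + 1e-9)
    best_id, best_sim = None, -1.0
    for pid, feat in first_feature_combination.items():
        f = np.asarray(feat, dtype=float)
        sim = float(np.dot(q, f / (np.linalg.norm(f) + 1e-9)))
        if sim > best_sim:
            best_id, best_sim = pid, sim
    if best_sim >= threshold:
        return best_id          # sign-in successful: save the matched on-site photo
    return None                 # no first face feature matches: sign-in fails
```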
  • the user attribute analysis module 30 obtains portrait attributes of participants including at least one of gender attributes, emotional attributes, age attributes, skin color attributes, or clothing attributes.
  • Portrait attribute analysis of the gender attribute: using the facial feature value algorithm, the high-dimensional feature values extracted from a participant's face are reduced to a low-dimensional space; the face recognition algorithm also maps the gender samples it has been trained on into the low-dimensional space, then finds the sample most similar to the extracted feature values and assigns that sample's gender to the participant's image, finally determining the gender. Gender is determined because the conference sign-in system of this application can support many types of conferences, such as product launches and marketing events, and can analyze the gender characteristics of the participants in a targeted way, so that precise advertising can be placed and promoted during and after the conference and ineffective promotion is reduced.
  • In the face entry module provided by the conference sign-in system of this application, the number of required items is reduced. For example, gender is not a required item; the face recognition module obtains the on-site photos of the participants, and the user attribute analysis module 30 identifies the gender of the participants, supplementing and improving the participants' portrait attributes.
  • Portrait attribute analysis of the emotion attribute: from the facial feature value algorithm, extracted features such as the mouth position, the degree of mouth opening, and the mouth angle are used to determine whether the user's mood is happy or sad. Traditional conference analysis cannot accurately gauge the participation of attendees; the level of participation is mainly analyzed and judged through the emotion attribute, which can compute expression characteristics such as joy, anger, sadness, panic, and disgust. The user attribute analysis module adds the collection of this emotional dimension of information, letting the organizer understand user participation from data analysis.
  • Portrait attribute analysis of the age attribute: when extracting facial feature values, a flexible model is used to organically combine the shape of the face with its wrinkles, fully extracting the feature values of regions such as the forehead, eye corners, and cheeks to infer the age.
  • In the declaration information, the participant's age is not a required field; the information is further supplemented and improved through face recognition to provide clearer portrait attributes of the participants.
  • statistics on the dimension of age characteristics can provide a clearer picture of the age distribution of participants.
  • Portrait attribute analysis of the skin color attribute: likewise when extracting facial feature values, image segmentation preprocessing based on skin color information and frontal face verification are applied; using color-space characteristics, the skin color is cluster-analyzed, a skin color mapping is performed in the YCbCr space to generate a binary image, and the skin color is then confirmed according to the shape characteristics of the face. For images taken under different lighting, classification and optimization with a neural network are used to distinguish and discriminate skin colors.
  • Portrait attribute analysis of the clothing attribute: a trained convolutional neural network is used to extract features from the photos for image retrieval; clothes with high similarity values are retrieved, and the clothing type of the person in the photo is distinguished in this way. This application can also provide a fun experience matched to the clothing type, so that participants feel better when they sign in.
  • After the above attribute analyses are completed, the conference sign-in system 1 automatically compiles the dimensional information statistics so that the data becomes readable.
  • The core role of the user attribute analysis module 30 is to give the using party more capability to extend and apply the data. The conference sign-in system of this application associates data based on the gender, age, emotion, clothing, and skin color attributes, combined with the venue, the time of appearance, and associated participants, and finally compiles statistics and outputs a complete portrait of each user, so that the using party can accurately judge the multi-dimensional information of the participants on the basis of this information.
  • Related information is obtained by classifying the portrait attributes of the participants who signed in successfully; the related information includes the distribution of age groups and the distribution of Chinese and Western attendees, and the personality of a person can be analyzed according to the clothing attributes.
  • In addition, by comparing the declaration information of the participants who signed in successfully with the declaration information of the expected participants, the ratio of on-site participants to expected participants can be calculated, which the organizer can use as a reference for the actual degree of participation.
  • In summary, this application can count successfully signed-in participants through portrait attribute analysis to obtain real and readable data, which not only provides strong data support but also greatly improves the value conversion of the conference.
  • Real sign-in data is obtained by matching and comparing the on-site photos of the participants with their declaration information, and the matched participants are profiled by the user attribute analysis module to obtain their portrait attributes, so that dimensional information statistics can be compiled into readable data.
  • The related information obtained from the portrait attribute analysis does not need to be loaded in advance by the registering participants; it is obtained from analysis of the on-site photos, which shortens the registration procedure. In addition, this application provides multiple ways to load the declaration information of participants, increasing the convenience of loading the information, and the declaration information is checked so that participants are screened automatically.
  • Referring to Fig. 5, the present application also provides a computer device 2. The computer device 2 includes a memory 21 for storing executable program code (a computer program) and a processor 22 for calling the executable program code in the memory 21 to execute the steps of the above conference sign-in method. In Fig. 5, one processor 22 is taken as an example.
  • the processor 22 executes various functional applications and data processing of the computer device 2 by running non-volatile software programs, instructions, and modules stored in the memory 21, that is, implements the conference sign-in method in any of the foregoing method embodiments.
  • the memory 21 may include a storage program area and a storage data area.
  • the storage program area may store an operating system and an application program required by at least one function; the storage data area may store historical medical data of the user in the computer device 2.
  • the memory 21 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • The memory 21 may optionally include memories located remotely from the processor 22, and these remote memories may be connected to the conference sign-in system 1 via a network. Examples of such networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • The one or more modules are stored in the memory 21 and, when executed by the one or more processors 22, execute the conference sign-in method in any of the foregoing method embodiments, for example the programs of Figs. 1 and 2 described above.
  • The computer device 2 of the embodiments of this application exists in various forms, including but not limited to: (1) mobile communication devices, which are characterized by mobile communication capability and whose main goal is to provide voice and data communication; such terminals include smart phones (e.g. the iPhone), multimedia phones, feature phones, and low-end phones; (2) ultra-mobile personal computer devices, which belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access; such terminals include PDA, MID, and UMPC devices, e.g. the iPad; (3) portable entertainment devices, which can display and play multimedia content; such devices include audio and video players (e.g. the iPod), handheld game consoles, e-book readers, smart toys, and portable in-car navigation devices; (4) servers, which are devices that provide computing services; a server consists of a processor, hard disk, memory, system bus, and so on, and is similar in architecture to a general-purpose computer, but because it must provide highly reliable services it has higher requirements on processing capacity, stability, reliability, security, scalability, and manageability; (5) other electronic devices with data interaction functions.
  • Another embodiment of the present application further provides a computer-readable storage medium that stores computer-executable instructions which, when executed by one or more processors (for example, one processor 22 in Fig. 5), cause the one or more processors 22 to execute the conference sign-in method in any of the above method embodiments, for example the programs of Figs. 1 to 3 described above.
  • the storage medium involved in this application such as a computer-readable storage medium, may be non-volatile or volatile.
  • The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over at least two network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of this application; those of ordinary skill in the art can understand and implement this without creative work.
  • each implementation manner can be implemented by means of software plus a general hardware platform, and of course, it can also be implemented by hardware.
  • a person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments can be implemented by instructing relevant hardware through a computer program.
  • the program can be stored in a computer readable storage medium. When executed, it may include the procedures of the above-mentioned method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Abstract

A conference sign-in method, system, computer device, and computer-readable storage medium. The conference sign-in method includes the following steps: obtaining conference information and the declaration information of participants (S10); taking on-site photos of the participants (S20); extracting face recognition feature values from the on-site photos and obtaining the on-site photos and face recognition feature values of matched participants (S30); decomposing and analyzing the face recognition feature values of the matched participants to obtain the gender, emotion, age, and skin color attributes of the participants, and performing clothing feature extraction and analysis on the on-site photos of the matched participants to obtain their clothing attributes (S40); and compiling dimensional information statistics and judging conference participation according to the dimensional information and the number of participants (S50). Successfully signed-in participants can be analyzed statistically through portrait attribute analysis so as to obtain true and readable data.

Description

会议签到方法、系统、计算机设备及计算机可读存储介质
本申请要求于2020年3月13日提交中国专利局、申请号为202010174883.3,发明名称为“会议签到方法、系统、计算机设备及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及会议签到技术领域,尤其涉及一种会议签到方法、系统、计算机设备及计算机可读存储介质。
背景技术
为了统计会议出席情况,一般会议开始前会让参会人员进行签到;在现有的会议签到方法里,多采用参会人员刷卡或签字的方式签到。采用刷卡签到的话,不管是什么人,只要有卡就能混入会场,这可能出现冒充的情况;而且如果需要参与会议的人员忘记带卡,又无法进入会场;以及如果卡片丢失还需要挂失补卡,用户体验极差。采用签字签到的话,也可能出现有人冒充签字的情况,或者参会人员可以代人签字,并且签字时需要的时间较长,所以签到效率很低。
虽然已经出现能够实现自动完成签到和提高签到效率的会议签到方法,如中国专利申请,申请号CN201811409191.1揭露一种会议签到方法、装置及签到设备,获取参会人员的申报信息;所述名单信息包括参会人员姓名与特征图像的对应关系;采集参会人员的人脸图像;将所述人脸图像与所述特征图像进行匹配;若匹配成功,则对所述特征图像对应的参会人员的签到状态标记为签到成功。通过在会议开始前获取参会人员的申报信息,并采用人脸识别技术对进行签到的参会人员的身份进行甄别,进而完成签到。但是,发明人意识到,申请号CN201811409191.1的会议签到方法缺乏科学性和数据性的,且由于无法对参会人员进行画像分析,导致会议的价值转化低,且不能提供一个有力的数据支撑。
技术问题
本申请的目的是提供一种会议签到方法、系统、计算机设备及计算机可读存储介质,可通过画像属性分析对签到成功的参会人员进行统计,以便于获得真实且具可读性的数据。
技术解决方案
为实现所述目的,本申请提供一种会议签到方法,其包括以下步骤:步骤S10:获取会议信息及参会人员的申报信息;步骤S20:摄取参会人员的现场照片;步骤S30:提取现场照片中的人脸识别特征值,比较参会人员的人脸识别特征值和参会人员的申报信息是否匹配,并获取匹配的参会人员的现场照片与人脸识别特征值;步骤S40:将匹配的参会人员的人脸识别特征值进行人脸识别特征值拆解并分析获得参会人员的性别属性、情绪属性、年龄属性、肤色属性;将匹配的参会人员的现场照片进行衣着特征提取并分析获得参会人员的衣着属性;步骤S50:根据参会人员的性别属性、情绪属性、年龄属性、肤色属性与衣着属性进行维度信息的统计,根据所述维度信息以及参会人数判断会议参与度。
本申请还提供一种会议签到系统,其包括:人脸录入模块,其用于存储会议信息及参会人员的申报信息;人脸识别模块,其用于摄取参会人员的现场照片,提取现场照片中的人脸识别特征值,并用于比较参会人员的人脸识别特征值和参会人员的申报信息是否匹配,并获取匹配的参会人员的现场照片与人脸识别特征值;用户属性分析模块,其用于将匹配的参会人员的人脸识别特征值进行人脸识别特征值拆解并分析获得参会人员的性别属性、情绪属性、年龄属性、肤色属性;将匹配的参会人员的现场照片进行衣着特征提取并分析获得参会人员的衣着属性;并根据参会人员的性别属性、情绪属性、年龄属性、肤色属性与衣着属性进行维度信息的统计,根据所述维度信息以及参会人数判断会议参与度。
本申请还提供一种计算机设备,所述计算机设备,包括存储器、处理器以及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现以下步骤:获取会议信息及参会人员的申报信息;摄取参会人员的现场照片;提取现场照片中的人脸识别特征值,比较参会人员的人脸识别特征值和参会人员的申报信息是否匹配,并获取匹配的参会人员的现场照片与人脸识别特征值;将匹配的参会人员的人脸识别特征值进行人脸识别特征值拆解并分析获得参会人员的性别属性、情绪属性、年龄属性、肤色属性;将匹配的参会人员的现场照片进行衣着特征提取并分析获得参会人员的衣着属性;根据参会人员的性别属性、情绪属性、年龄属性、肤色属性与衣着属性进行维度信息的统计,根据所述维度信息以及参会人数判断会议参与度。
本申请又提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现以下步骤:获取会议信息及参会人员的申报信息;摄取参会人员的现场照片;提取现场照片中的人脸识别特征值,比较参会人员的人脸识别特征值和参会人员的申报信息是否匹配,并获取匹配的参会人员的现场照片与人脸识别特征值;将匹配的参会人员的人脸识别特征值进行人脸识别特征值拆解并分析获得参会人员的性别属性、情绪属性、年龄属性、肤色属性;将匹配的参会人员的现场照片进行衣着特征提取并分析获得参会人员的衣着属性;根据参会人员的性别属性、情绪属性、年龄属性、肤色属性与衣着属性进行维度信息的统计,根据所述维度信息以及参会人数判断会议参与度。
有益效果
本申请可通过画像属性分析对签到成功的参会人员进行统计,以便于获得真实且具可读性的数据,不仅能提供一个有力的数据支撑,也能大大提高了会议的价值转化。
附图说明
图1为本申请会议签到方法一种实施例的流程图。
图2为图1中步骤S10的具体流程图。
图3为本申请会议签到方法另一种实施例的流程图。
图4为本申请会议签到系统的模块图。
图5为本申请会议签到方法的计算机设备的硬件结构示意图。
附图标记:1、会议签到系统 10、人脸录入模块  20、人脸识别模块30、用户属性分析模块  2、计算机设备    21、存储器     22、处理器。
本发明的实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的技术方案可应用于人工智能、智慧城市、区块链和/或大数据技术领域。可选的,本申请涉及的数据如申报信息、特征值、属性和/或会议参与度等可存储于数据库中,或者可以存储于区块链中,比如通过区块链分布式存储,本申请不做限定。
请参阅图1所示,本申请提供一种会议签到方法,其包括以下步骤。
步骤S10:获取会议信息及参会人员的申报信息。
步骤S20:摄取参会人员的现场照片。
步骤S30:提取现场照片中的人脸识别特征值,比较参会人员的人脸识别特征值和参会人员的申报信息是否匹配,并获取匹配的参会人员的现场照片与人脸识别特征值。
步骤S40:将匹配的参会人员的人脸识别特征值进行人脸识别特征值拆解并分析获得参会人员的性别属性、情绪属性、年龄属性、肤色属性;将匹配的参会人员的现场照片进行衣着特征提取并分析获得参会人员的衣着属性。
其中,所述性别属性的分析过程:将参会人员的人脸识别特征值从高维空间降低到低维空间,计算获取最相似的低维空间的性别样本,并根据性别样本获得性别。
所述情绪属性的分析过程:通过人脸识别特征值中人脸嘴部位置、嘴巴张开的程度、嘴巴的角度分析判断获得;所述情绪属性包括喜悦、愤怒、悲伤、惊恐、厌恶。
所述年龄属性的分析过程:从参会人员的人脸识别特征值提取额头、眼角、脸颊区域的特征值,运用柔性模型将人脸的形状和皱纹结合获得年龄。
所述肤色属性的分析过程:从参会人员的人脸识别特征值运用基于肤色信息的图像分割预处理和正面人脸识别验证,利用颜色空间特性,对肤色的聚类分析,并进行肤色映射生成二值图像,然后根据人脸的形状特征获得肤色。
步骤S50:根据参会人员的性别属性、情绪属性、年龄属性、肤色属性与衣着属性进行维度信息的统计,根据所述维度信息以及参会人数判断会议参与度。
请参阅图2所示,图1所示的步骤S10进一步包括:步骤S110:预先建立预设会议数据库,在预设会议数据库中载入会议信息,所述会议信息包括会议主题、会议时间、会议地点、会议人数;步骤S120:在预设会议数据库中载入参会人员的申报信息;步骤S130:对参会人员的申报信息进行检测,以筛选符合参会要求的参会人员。
所述步骤S120中载入的参会人员的申报信息包括若干必填项目,必填项目也能形成参会人员的画像属性中的部分维度信息,为了节省报名程序,必填项目仅设置无法从现场照片中获取的维度信息。
所述步骤S120可通过单个新增、批量新增或二维码新增的方式载入参会人员的申报信息,所述参会人员的申报信息包括参会人员的姓名、参会人员的证件号码、参会人员的手机号、参会人员的照片。其中,参会人员的姓名、参会人员的证件号码、参会人员的手机号为必填项目。
a.单个新增方式:录入参会人员的姓名、参会人员的证件号码、参会人员的手机号、参会人员的照片。
b.批量新增方式:下载批量新增excel模板,批量录入并上传参会人员的姓名、参会人员的证件号码、参会人员的手机号、参会人员的照片。
c.二维码新增方式:参会人员获得并识别参会二维码,并录入个人姓名、参会人员的证件号码、参会人员的手机号、参会人员的照片。
所述步骤S130的检测包括照片质量检测与证件号码检测。
照片质量检测:利用活体检测技术判断参会人员的照片质量是否符合人脸识别要求,例如,参会人员的照片中的背景是否符合人脸识别要求,参会人员的照片中的人脸方向是否符合人脸识别要求,参会人员的照片中的光线亮度是否符合人脸识别要求,参会人员的照片中的人脸所占比例是否符合人脸识别要求;通过照片质量检测以获取初级筛选照片。
证件号码检测:利用公安V3系统对初级筛选照片以及与初级筛选照片相关联的证件号码进一步进行人证比对,以筛选符合参会要求的参会人员的申报信息作为应参会人员的申报信息。
所述步骤S40的分析过程包括:性别属性的画像属性分析:将参会人员的人脸识别特征值从高维空间降低到低维空间,计算获取最相似的低维空间的性别样本,并根据性别样本获得性别;确定性别的目的是因为本申请会议签到方法能够支持多种类型的会议,如产品发布/营销等,能够定向分析参会人员的性别特征,从而在会中、会后可以进行精准广告的投放与推广,减少无效推广。本申请会议签到方法在步骤S10中,减少必填项目的数量,例如,性别不是必填项目,而通过步骤S20进一步获取参会人员的现场照片,并通过步骤S40识别参会人员的性别,补充完善参会人员的画像属性。
情绪属性的画像属性分析:分析人脸特征值算法,对提取的人脸嘴部位置、嘴巴张开的程度、嘴巴的角度等特征值得出用户心情是开心还是悲伤;传统的会议分析是无法精准了解参会人的参与度,参与度的高低主要通过情绪属性来分析判断,情绪属性能够计算出喜悦、愤怒、悲伤、惊恐、厌恶等表情特征。用户属性分析模块增加情绪属性这一维度信息的收集,让举办方从数据分析中了解用户参与度。
年龄属性的画像属性分析:在提取人脸特征值算法时,通过运用柔性模型将人脸的形状和皱纹有机结合起来,充分提取额头、眼角、脸颊等区域的特征值,推测出年龄。
步骤S10中获取参会人员的申报信息时,参会人员的年龄不是必填项,而通过人脸识别进一步补充完善信息以提供更为明确的参会人员的画像属性。同时年龄特征这一维度信息的统计,能够更清楚的提供参会人员的年龄分布。
肤色属性的画像属性分析:同样在提取人脸特征值时,运用基于肤色信息的图像分割预处理和正面人脸识别验证,利用颜色空间特性,对肤色的聚类分析,并在YCbCr空间进行肤色映射生成二值图像,然后根据人脸的形状特征确认肤色。对于不同光线下的图像,通过算法神经网络的分类优化,实现肤色的区分判别。
衣着属性的画像属性分析:针对照片使用已训练好的卷积神经网络进行特征提取用于图像检索,检索出相似度值高的衣服,以此区分出照片人物的衣着类型。本申请还可根据衣着类型提供与衣着类型相匹配的趣味体验,从而让参会人员在签到时感受更好。
在完成上述多种属性分析后,系统将会自动进行数据的统计,实现数据可读性。
本申请会议签到系统会基于性别、年龄、情绪、衣着、肤色属性结合场地、出现时间、关联参会人员进行数据关联,并最终进行统计与输出用户完整画像,让使用方能够基于此信息去精准判断参会人员的多维度信息。
通过对签到成功的参会人员的画像属性分类得到关联信息,其中,关联信息包括年龄段分布状况、中西方人士分布状况,并根据衣着属性分析人物个性等。
此外,将签到成功的参会人员的申报信息与应参会人员的申报信息进行比对,便可统计出现场参会人员与应参会人员的占比,可供举办方参考参会人员的真实参与程度。
请参阅图3所示,本申请提供一种会议签到方法,其包括以下步骤。
步骤S10:获取会议信息及参会人员的申报信息。
步骤S12:下载应参会人员的申报信息作为识别训练样本集,参会人员的申报信息包括参会人员的姓名、参会人员的证件号码、参会人员的手机号、参会人员的照片。
步骤S13:将识别训练样本集中参会人员的照片进行人脸识别特征值提取,以获得第一人脸特征值组合。
步骤S20:摄取参会人员的现场照片。
步骤S32:对现场照片进行人脸识别技术的特征值提取以获得第二人脸特征值。
步骤S34:判断第二人脸特征值与第一人脸特征值组合是否匹配,若第二人脸特征值与第一人脸特征值组合中的某一第一人脸特征匹配,则执行步骤S36; 若第二人脸特征值与第一人脸特征值组合中的任一第一人脸特征均不匹配,则执行步骤S38。
步骤S36:提示签到成功,并保存匹配的现场照片,然后执行步骤S40。
步骤S38:提示签到失败。
步骤S40:将匹配的参会人员的人脸识别特征值进行人脸识别特征值拆解并分析获得参会人员的性别属性、情绪属性、年龄属性、肤色属性画像属性;将匹配的参会人员的现场照片进行衣着特征提取并分析获得参会人员的衣着属性。
步骤S50:根据参会人员的性别属性、情绪属性、年龄属性、肤色属性与衣着属性进行维度信息的统计,根据所述维度信息以及参会人数判断会议参与度。
通过人脸特征值的提取及比对处理,以对参会人员通过人脸识别的方式签到,有效避免冒充签字的现象。并利用人脸识别的结果保存匹配的现场照片,以使会前申报、会中识别、会后统计具有延续性,有效排除不相关的信息干扰。
假设获取的匹配的参会人员共有100人,维度信息的会后统计如下所示。
步骤S50根据参会人员的性别属性统计获得:男性:80人,女性:20人。
步骤S50根据参会人员的年龄属性统计获得:20-30岁:20人,30-40岁:51人,40-50岁:8人,50岁以上:21人。
步骤S50根据参会人员的肤色属性统计获得:黑色人种:5人,白色人种:25人,黄色人种:70人。
步骤S50根据参会人员的衣着属性统计获得:正装西服:60人,休闲西服:25人,夹克:10人,风衣:5人。
步骤S50根据参会人员的情绪属性统计获得:喜悦:60人、愤怒:15人、悲伤:0人、惊恐:0人、厌恶:25人。
其中参与度的获得可通过情绪属性中喜悦人数所占的比重获得,情绪属性中喜悦人数所占的比重越高,则参与度越高;情绪属性中喜悦人数所占的比重越低,则参与度越低。参与度的等级可根据设定的参与度阈值比较所得,假设参与度阈值为:0.5,所述情绪属性统计获得60/100>0.5,表明参与度高。若情绪属性的维度信息统计为:喜悦:10人、愤怒:25人、悲伤:0人、惊恐:0人、厌恶:65人,情绪属性统计获得10/100<0.5,表明参与度低。
参与度的获得也可根据不同情绪属性加权计算后的分值范围反映参与度,例如喜悦的权重为5、愤怒的权重为4、悲伤的权重为3、惊恐的权重为2、厌恶的权重为1,分值=喜悦的权重×喜悦的人数+愤怒的权重×愤怒的人数+悲伤的权重×悲伤的人数+惊恐的权重×惊恐的人数+厌恶的权重×厌恶的人数。参与度的等级可根据设定的参与度阈值比较所得,根据参与人数,计算得到参与度阈值,例如以100人为例,参与度阈值为350,分值越高,则参与度越高,分值越低,则参与度越低。所述情绪属性加权计算后的分值为385>350,表明参与度高。
请参阅图4所示,本申请提供一种会议签到系统1,其包括:人脸录入模块10,其用于存储会议信息及参会人员的申报信息;人脸识别模块20,其用于摄取参会人员的现场照片,提取现场照片中的人脸识别特征值,并用于比较参会人员的人脸识别特征值和参会人员的申报信息是否匹配,并获取匹配的参会人员的现场照片与人脸识别特征值;用户属性分析模块30,其用于将匹配的参会人员的人脸识别特征值进行人脸识别特征值拆解并分析获得参会人员的性别属性、情绪属性、年龄属性、肤色属性;将匹配的参会人员的现场照片进行衣着特征提取并分析获得参会人员的衣着属性。
其中,所述性别属性的分析过程:将参会人员的人脸识别特征值从高维空间降低到低维空间,计算获取最相似的低维空间的性别样本,并根据性别样本获得性别。
所述情绪属性的分析过程:通过人脸识别特征值中人脸嘴部位置、嘴巴张开的程度、嘴巴的角度分析判断获得;所述情绪属性包括喜悦、愤怒、悲伤、惊恐、厌恶。
所述年龄属性的分析过程:从参会人员的人脸识别特征值提取额头、眼角、脸颊区域的特征值,运用柔性模型将人脸的形状和皱纹结合获得年龄。
所述肤色属性的分析过程:从参会人员的人脸识别特征值运用基于肤色信息的图像分割预处理和正面人脸识别验证,利用颜色空间特性,对肤色的聚类分析,并进行肤色映射生成二值图像,然后根据人脸的形状特征获得肤色。
并根据参会人员的性别属性、情绪属性、年龄属性、肤色属性与衣着属性进行维度信息的统计,根据所述维度信息以及参会人数判断会议参与度。
所述人脸录入模块10存储的会议信息包括会议主题、会议时间、会议地点、会议人数。
所述人脸录入模块10存储的参会人员的申报信息包括若干必填项目,必填项目也能形成参会人员的画像属性中的部分维度信息,为了节省报名程序,必填项目仅设置无法从现场照片中获取的维度信息,例如,所述参会人员的申报信息包括参会人员的姓名、参会人员的证件号码、参会人员的手机号、参会人员的照片,其中,参会人员的姓名、参会人员的证件号码、参会人员的手机号为必填项目。
所述人脸录入模块10可通过单个新增、批量新增或二维码新增的方式载入参会人员的申报信息。
a.单个新增方式:录入参会人员的姓名、参会人员的证件号码、参会人员的手机号、参会人员的照片。
b.批量新增方式:下载批量新增excel模板,批量录入并上传参会人员的姓名、参会人员的证件号码、参会人员的手机号、参会人员的照片。
c.二维码新增方式:参会人员获得并识别参会二维码,并录入个人姓名、参会人员的证件号码、参会人员的手机号、参会人员的照片。
所述人脸录入模块10还用于对参会人员的申报信息进行检测,以筛选符合参会要求的参会人员,所述检测包括照片质量检测与证件号码检测。
照片质量检测:利用活体检测技术判断参会人员的照片质量是否符合人脸识别要求,例如,参会人员的照片中的背景是否符合人脸识别要求,参会人员的照片中的人脸方向是否符合人脸识别要求,参会人员的照片中的光线亮度是否符合人脸识别要求,参会人员的照片中的人脸所占比例是否符合人脸识别要求;通过照片质量检测以获取初级筛选照片。
证件号码检测:利用公安V3系统对初级筛选照片以及与初级筛选照片相关联的证件号码进一步进行人证比对,以筛选符合参会要求的参会人员的申报信息作为应参会人员的申报信息。
所述人脸识别模块20具体用于下载应参会人员的申报信息作为识别训练样本集;将识别训练样本集中参会人员的照片进行人脸识别特征值提取,以获得第一人脸特征值组合;对现场照片进行人脸识别技术的特征值提取以获得第二人脸特征值;判断第二人脸特征值与第一人脸特征值组合是否匹配,若第二人脸特征值与第一人脸特征值组合中的某一第一人脸特征匹配,则提示签到成功,并保存匹配的现场照片;若第二人脸特征值与第一人脸特征值组合中的任一第一人脸特征均不匹配,则提示签到失败。
所述用户属性分析模块30获得参会人员的画像属性包括性别属性、情绪属性、年龄属性、肤色属性或衣着属性的至少一种。
性别属性的画像属性分析:运用人脸特征值算法,将从参会人员脸部上提取的特征值高维图像降低到低纬空间,同时人脸识别算法把自身已经训练过的性别样本也映射到低纬空间中,然后计算离提取的特征值图像最相似的样本,并把该样本的性别赋值给参会人员的图像,最终确定其性别;确定性别的目的是因为本申请会议签到系统能够支持多种类型的会议,如产品发布/营销等,能够定向分析参会人员的性别特征,从而在会中、会后可以进行精准广告的投放与推广,减少无效推广。本申请会议签到系统所设置的人脸录入模块中,减少必填项目的数量,例如,性别不是必填项目,而通过人脸识别模块进一步获取参会人员的现场照片,并通过用户属性分析模块30识别参会人员的性别,补充完善参会人员的画像属性。
情绪属性的画像属性分析:分析人脸特征值算法,对提取的人脸嘴部位置、嘴巴张开的程度、嘴巴的角度等特征值得出用户心情是开心还是悲伤;传统的会议分析是无法精准了解参会人的参与度,参与度的高低主要通过情绪属性来分析判断,情绪属性能够计算出喜悦、愤怒、悲伤、惊恐、厌恶等表情特征。用户属性分析模块增加情绪属性这一维度信息的收集,让举办方从数据分析中了解用户参与度。
年龄属性的画像属性分析:在提取人脸特征值算法时,通过运用柔性模型将人脸的形状和皱纹有机结合起来,充分提取额头、眼角、脸颊等区域的特征值,推测出年龄;参会人员的申报信息中参会人员的年龄不是必填项,而通过人脸识别进一步补充完善信息以提供更为明确的参会人员的画像属性。同时年龄特征这一维度信息的统计,能够更清楚的提供参会人员的年龄分布。
肤色属性的画像属性分析:同样在提取人脸特征值时,运用基于肤色信息的图像分割预处理和正面人脸识别验证,利用颜色空间特性,对肤色的聚类分析,并在YCbCr空间进行肤色映射生成二值图像,然后根据人脸的形状特征确认肤色。对于不同光线下的图像,通过算法神经网络的分类优化,实现肤色的区分判别。
衣着属性的画像属性分析:针对照片使用已训练好的卷积神经网络进行特征提取用于图像检索,检索出相似度值高的衣服,以此区分出照片人物的衣着类型。本申请还可根据衣着类型提供与衣着类型相匹配的趣味体验,从而让参会人员在签到时感受更好。
在完成上述多种属性分析后,会议签到系统1将会自动进行维度信息的统计,实现数据可读性。
用户属性分析模块30,其核心作用是提供更多能力给使用方去拓展运用,本申请会议签到系统会基于性别、年龄、情绪、衣着、肤色属性结合场地、出现时间、关联参会人员进行数据关联,并最终进行统计与输出用户完整画像,让使用方能够基于此信息去精准判断参会人员的多维度信息。
通过对签到成功的参会人员的画像属性分类得到关联信息,其中,关联信息包括年龄段分布状况、中西方人士分布状况,并根据衣着属性分析人物个性等。
此外,将签到成功的参会人员的申报信息与应参会人员的申报信息进行比对,便可统计出现场参会人员与应参会人员的占比,可供举办方参考参会人员的真实参与程度。
综上所述,本申请可通过画像属性分析对签到成功的参会人员进行统计,以便于获得真实且具可读性的数据,不仅能提供一个有力的数据支撑,也能大大提高了会议的价值转化。通过将参会人员的现场照片和参会人员的申报信息进行匹配比较以获得真实的签到数据,并通过用户属性分析模块对匹配的参会人员进行画像分析,获得参会人员的画像属性,以便于进行维度信息的统计以获得可读性的数据。画像属性分析所得到的关联信息不需报名参会人员提前载入,而由现场照片分析得到,节省了报名程序。另,本申请提供多种方式载入参会人员的申报信息,增加了载入信息的便利性。对参会人员的申报信息进行检测,以对参会人员进行自动筛选。
请参阅图5所示,本申请还提供一种计算机设备2,所述计算机设备2包括:存储器21,用于存储可执行程序代码(计算机程序);以及处理器22,用于调用所述存储器21中的所述可执行程序代码,执行步骤包括上述的会议签到方法。
图5中以一个处理器22为例。
存储器21作为一种计算机可读存储介质如非易失性计算机可读存储介质,可用于存储非易失性软件程序、非易失性计算机可执行程序以及模块,如本申请实施例中的会议签到方法对应的程序指令/模块。处理器22通过运行存储在存储器21中的非易失性软件程序、指令以及模块,从而执行计算机设备2的各种功能应用以及数据处理,即实现上述任意方法实施例中的会议签到方法。
存储器21可以包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需要的应用程序;存储数据区可存储用户在计算机设备2的历史医疗数据。此外,存储器21可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实施例中,存储器21可选包括相对于处理器22远程设置的存储器21,这些远程存储器21可以通过网络连接至会议签到系统1。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
所述一个或者多个模块存储在所述存储器21中,当被所述一个或者多个处理器22执行时,执行上述任意方法实施例中的会议签到方法,例如,执行以上描述的图1-图2的程序。
上述产品可执行本申请实施例所提供的方法,具备执行方法相应的功能模块和有益效果。未在本实施例中详尽描述的技术细节,可参见本申请实施例所提供的方法。
本申请实施例的计算机设备2以多种形式存在,包括但不限于:(1)移动通信设备:这类设备的特点是具备移动通信功能,并且以提供话音、数据通信为主要目标。这类终端包括:智能手机(例如iPhone)、多媒体手机、功能性手机,以及低端手机等。(2)超移动个人计算机设备:这类设备属于个人计算机的范畴,有计算和处理功能,一般也具备移动上网特性。这类终端包括:PDA、MID和UMPC设备等,例如iPad。(3)便携式娱乐设备:这类设备可以显示和播放多媒体内容。该类设备包括:音频、视频播放器(例如iPod),掌上游戏机,电子书,以及智能玩具和便携式车载导航设备。(4)服务器:提供计算服务的设备,服务器的构成包括处理器、硬盘、内存、系统总线等,服务器和通用的计算机架构类似,但是由于需要提供高可靠的服务,因此在处理能力、稳定性、可靠性、安全性、可扩展性、可管理性等方面要求较高。(5)其他具有数据交互功能的电子装置。
本申请又一实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机可执行指令,该计算机可执行指令被一个或多个处理器执行,例如图5中的一个处理器22,可使得上述一个或多个处理器22可执行上述任意方法实施例中的会议签到方法,例如,执行以上描述的图1-图3的程序。
可选的,本申请涉及的存储介质如计算机可读存储介质可以是非易失性的,也可以是易失性的。
以上所描述的装置实施例仅仅是示意性的,其中作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到至少两个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本申请实施例方案的目的。本领域普通技术人员在不付出创造性的劳动的情况下,即可以理解并实施。
通过以上的实施方式的描述,本领域普通技术人员可以清楚地了解到各实施方式可借助软件加通用硬件平台的方式来实现,当然也可以通过硬件。本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-OnlyMemory,ROM)或随机存储记忆体(RandomAccessMemory,RAM)等。
所述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到所述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1.    一种会议签到方法,包括以下步骤:
    步骤S10:获取会议信息及参会人员的申报信息;
    步骤S20:摄取参会人员的现场照片;
    步骤S30:提取现场照片中的人脸识别特征值,比较参会人员的人脸识别特征值和参会人员的申报信息是否匹配,并获取匹配的参会人员的现场照片与人脸识别特征值;
    步骤S40:将匹配的参会人员的人脸识别特征值进行人脸识别特征值拆解并分析获得参会人员的性别属性、情绪属性、年龄属性、肤色属性画像属性;将匹配的参会人员的现场照片进行衣着特征提取并分析获得参会人员的衣着属性;
    步骤S50:根据参会人员的性别属性、情绪属性、年龄属性、肤色属性与衣着属性进行维度信息的统计,根据所述维度信息以及参会人数判断会议参与度。
  2. 根据权利要求1所述的会议签到方法,其中,所述步骤S10进一步包括:
    步骤S110:预先建立预设会议数据库,在预设会议数据库中载入会议信息,所述会议信息包括会议主题、会议时间、会议地点、会议人数;
    步骤S120:在预设会议数据库中载入参会人员的申报信息;
    步骤S130:对参会人员的申报信息进行检测,以筛选符合参会要求的参会人员。
  3. 根据权利要求2所述的会议签到方法,其中,所述步骤S120可通过单个新增、批量新增或二维码新增的方式载入参会人员的申报信息,所述参会人员的申报信息包括参会人员的姓名、参会人员的证件号码、参会人员的手机号、参会人员的照片。
  4. 根据权利要求3所述的会议签到方法,其中,所述步骤S130的检测包括照片质量检测与证件号码检测。
  5. 根据权利要求4所述的会议签到方法,其中,所述照片质量检测所述包括检测参会人员的照片中的背景是否符合人脸识别要求,检测参会人员的照片中的人脸方向是否符合人脸识别要求,检测参会人员的照片中的光线亮度是否符合人脸识别要求,检测参会人员的照片中的人脸所占比例是否符合人脸识别要求,并通过照片质量检测获取初级筛选照片。
  6. 根据权利要求5所述的会议签到方法,其中,将所述初级筛选照片与所述初级筛选照片相关联的证件号码进一步进行人证比对,以筛选符合参会要求的参会人员的申报信息作为应参会人员的申报信息。
  7. 根据权利要求6所述的会议签到方法,其中,所述步骤S20可于会中的多个时刻分别摄取参会人员的现场照片,以用于分析获得参会人员的情绪属性,所述多个时刻为随机采样点或为定时定样点。
  8. 一种会议签到系统,包括:
    人脸录入模块,其用于存储会议信息及参会人员的申报信息;
    人脸识别模块,其用于摄取参会人员的现场照片,提取现场照片中的人脸识别特征值,并用于比较参会人员的人脸识别特征值和参会人员的申报信息是否匹配,并获取匹配的参会人员的现场照片与人脸识别特征值;
    用户属性分析模块,其用于将匹配的参会人员的人脸识别特征值进行人脸识别特征值拆解并分析获得参会人员的性别属性、情绪属性、年龄属性、肤色属性;将匹配的参会人员的现场照片进行衣着特征提取并分析获得参会人员的衣着属性;并根据参会人员的性别属性、情绪属性、年龄属性、肤色属性与衣着属性进行维度信息的统计,根据所述维度信息以及参会人数判断会议参与度。
  9. 一种计算机设备,所述计算机设备包括存储器、处理器以及存储在存储器上并可在处理器上运行的计算机程序,其中,所述处理器执行所述计算机程序时实现以下步骤:
    步骤S10:获取会议信息及参会人员的申报信息;
    步骤S20:摄取参会人员的现场照片;
    步骤S30:提取现场照片中的人脸识别特征值,比较参会人员的人脸识别特征值和参会人员的申报信息是否匹配,并获取匹配的参会人员的现场照片与人脸识别特征值;
    步骤S40:将匹配的参会人员的人脸识别特征值进行人脸识别特征值拆解并分析获得参会人员的性别属性、情绪属性、年龄属性、肤色属性画像属性;将匹配的参会人员的现场照片进行衣着特征提取并分析获得参会人员的衣着属性;
    步骤S50:根据参会人员的性别属性、情绪属性、年龄属性、肤色属性与衣着属性进行维度信息的统计,根据所述维度信息以及参会人数判断会议参与度。
10. The computer device according to claim 9, wherein the step S10 further comprises:
    step S110: pre-establishing a preset conference database and loading conference information into the preset conference database, the conference information comprising a conference topic, a conference time, a conference venue and the number of conference attendees;
    step S120: loading the declared information of the participants into the preset conference database;
    step S130: detecting the declared information of the participants to screen out participants who meet the participation requirements.
11. The computer device according to claim 10, wherein the step S120 loads the declared information of the participants by means of single addition, batch addition or QR code addition, and the declared information of the participants comprises the participant's name, the participant's ID number, the participant's mobile phone number and the participant's photo.
12. The computer device according to claim 11, wherein the detection in the step S130 comprises photo quality detection and ID number detection.
13. The computer device according to claim 12, wherein the photo quality detection comprises detecting whether the background in the participant's photo meets the face recognition requirements, detecting whether the face orientation in the participant's photo meets the face recognition requirements, detecting whether the light brightness in the participant's photo meets the face recognition requirements, and detecting whether the proportion of the face in the participant's photo meets the face recognition requirements, and preliminarily screened photos are obtained through the photo quality detection;
    the preliminarily screened photos are further subjected to person-ID comparison with the ID numbers associated with the preliminarily screened photos, so as to screen the declared information of the participants who meet the participation requirements as the declared information of the expected participants.
14. The computer device according to claim 13, wherein the step S20 can capture on-site photos of the participants at multiple moments during the conference respectively, for analyzing and obtaining the emotion attribute of the participants, and the multiple moments are random sampling points or fixed-time sampling points.
15. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the following steps:
    step S10: acquiring conference information and declared information of participants;
    step S20: capturing on-site photos of the participants;
    step S30: extracting face recognition feature values from the on-site photos, comparing whether the face recognition feature values of the participants match the declared information of the participants, and obtaining the on-site photos and face recognition feature values of the matched participants;
    step S40: decomposing and analyzing the face recognition feature values of the matched participants to obtain portrait attributes of the participants, including a gender attribute, an emotion attribute, an age attribute and a skin color attribute; and performing clothing feature extraction and analysis on the on-site photos of the matched participants to obtain a clothing attribute of the participants;
    step S50: performing statistics on dimension information according to the gender attribute, the emotion attribute, the age attribute, the skin color attribute and the clothing attribute of the participants, and judging the conference engagement according to the dimension information and the number of participants.
16. The computer-readable storage medium according to claim 15, wherein the step S10 further comprises:
    step S110: pre-establishing a preset conference database and loading conference information into the preset conference database, the conference information comprising a conference topic, a conference time, a conference venue and the number of conference attendees;
    step S120: loading the declared information of the participants into the preset conference database;
    step S130: detecting the declared information of the participants to screen out participants who meet the participation requirements.
17. The computer-readable storage medium according to claim 16, wherein the step S120 can load the declared information of the participants by means of single addition, batch addition or QR code addition, and the declared information of the participants comprises the participant's name, the participant's ID number, the participant's mobile phone number and the participant's photo.
18. The computer-readable storage medium according to claim 17, wherein the detection in the step S130 comprises photo quality detection and ID number detection.
19. The computer-readable storage medium according to claim 18, wherein the photo quality detection comprises detecting whether the background in the participant's photo meets the face recognition requirements, detecting whether the face orientation in the participant's photo meets the face recognition requirements, detecting whether the light brightness in the participant's photo meets the face recognition requirements, and detecting whether the proportion of the face in the participant's photo meets the face recognition requirements, and preliminarily screened photos are obtained through the photo quality detection;
    the preliminarily screened photos are further subjected to person-ID comparison with the ID numbers associated with the preliminarily screened photos, so as to screen the declared information of the participants who meet the participation requirements as the declared information of the expected participants.
20. The computer-readable storage medium according to claim 19, wherein the step S20 can capture on-site photos of the participants at multiple moments during the conference respectively, for analyzing and obtaining the emotion attribute of the participants, and the multiple moments are random sampling points or fixed-time sampling points.
PCT/CN2020/134832 2020-03-13 2020-12-09 Conference sign-in method and system, computer device and computer-readable storage medium WO2021179706A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010174883.3 2020-03-13
CN202010174883.3A CN111445591A (zh) 2020-03-13 2020-03-13 会议签到方法、系统、计算机设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021179706A1 (zh)

Family

ID=71627523

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134832 WO2021179706A1 (zh) 2020-03-13 2020-12-09 会议签到方法、系统、计算机设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN111445591A (zh)
WO (1) WO2021179706A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120398A (zh) * 2021-11-10 2022-03-01 国网山东省电力公司信息通信公司 Non-perception intelligent sign-in system and method for outdoor conferences
CN114140863A (zh) * 2022-01-29 2022-03-04 深圳市中讯网联科技有限公司 Face recognition-based sign-in method and apparatus, storage medium and electronic device
CN114240342A (zh) * 2021-11-30 2022-03-25 珠海大横琴科技发展有限公司 Conference control method and apparatus
CN114882608A (zh) * 2022-06-23 2022-08-09 淮阴工学院 Method for improving the real-time performance of electronic sign-in
CN114999017A (zh) * 2022-06-06 2022-09-02 重庆酉辰戌智能科技有限公司 Campus face recognition empowerment system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445591A (zh) * 2020-03-13 2020-07-24 平安科技(深圳)有限公司 会议签到方法、系统、计算机设备及计算机可读存储介质
CN112329638A (zh) * 2020-11-06 2021-02-05 上海优扬新媒信息技术有限公司 Image scoring method, apparatus and system
CN112396391A (zh) * 2020-11-09 2021-02-23 上海凌立健康管理股份有限公司 Anti-cheating sign-in system for offline training
CN112396392A (zh) * 2020-11-09 2021-02-23 上海凌立健康管理股份有限公司 Anti-cheating sign-in method for offline training

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020030584A1 (en) * 2000-09-13 2002-03-14 Dore Perler Biometric access control system with time and attendance data logging and reporting capabilities
US20120314048A1 (en) * 2007-06-11 2012-12-13 Matos Jeffrey A Apparatus and method for verifying the identity of an author and a person receiving information
CN104506562A (zh) * 2015-01-13 2015-04-08 东北大学 Conference identity authentication device and method integrating QR code and face recognition
CN109446880A (zh) * 2018-09-05 2019-03-08 广州维纳斯家居股份有限公司 Intelligent user engagement evaluation method and apparatus, intelligent lifting desk and storage medium
CN109522829A (zh) * 2018-11-02 2019-03-26 南京邮电大学 Deep learning-based smartphone face-scanning conference registration method
CN111445591A (zh) * 2020-03-13 2020-07-24 平安科技(深圳)有限公司 Conference sign-in method and system, computer device and computer-readable storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251898B (zh) * 2008-03-25 2010-09-15 腾讯科技(深圳)有限公司 Skin color detection method and device
CN101650782A (zh) * 2009-09-16 2010-02-17 上海电力学院 Frontal face contour extraction method based on a skin color model and shape constraints
CN101770613A (zh) * 2010-01-19 2010-07-07 北京智慧眼科技发展有限公司 Social security identity authentication method based on face recognition and liveness detection
CN102324025B (zh) * 2011-09-06 2013-03-20 北京航空航天大学 Face detection and tracking method based on a Gaussian skin color model and feature analysis
CN103902958A (zh) * 2012-12-28 2014-07-02 重庆凯泽科技有限公司 Face recognition method
CN104732602B (zh) * 2015-02-04 2017-02-22 四川长虹电器股份有限公司 Attendance method based on cloud face and expression recognition
CN108416286A (zh) * 2018-03-02 2018-08-17 浙江天悦信息技术有限公司 Robot interaction method based on real-time video chat scenarios
CN110648186B (zh) * 2018-06-26 2022-07-01 杭州海康威视数字技术股份有限公司 Data analysis method, apparatus, device and computer-readable storage medium
CN109658040A (zh) * 2018-09-27 2019-04-19 深圳壹账通智能科技有限公司 Conference management method, apparatus, device and computer storage medium
CN109614508B (zh) * 2018-12-12 2021-09-03 杭州知衣科技有限公司 Clothing image search method based on deep learning
CN109903411A (zh) * 2019-01-26 2019-06-18 北方民族大学 Conference sign-in device based on face recognition and method of use
CN110072075B (zh) * 2019-04-30 2022-05-13 平安科技(深圳)有限公司 Conference management method and system based on face recognition, and readable storage medium


Also Published As

Publication number Publication date
CN111445591A (zh) 2020-07-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20924122

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20924122

Country of ref document: EP

Kind code of ref document: A1