CN113269541B - Talent online interview data analysis system and method based on Internet - Google Patents

Talent online interview data analysis system and method based on the Internet

Info

Publication number
CN113269541B
CN113269541B (application CN202110821797.1A)
Authority
CN
China
Prior art keywords
interviewer
interview
degree
analysis module
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110821797.1A
Other languages
Chinese (zh)
Other versions
CN113269541A (en)
Inventor
陈二妹
周成滔
李雪勇
李群娣
李文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qicheng Education Technology Co ltd
Original Assignee
Shenzhen Qicheng Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qicheng Education Technology Co ltd filed Critical Shenzhen Qicheng Education Technology Co ltd
Priority to CN202110821797.1A priority Critical patent/CN113269541B/en
Publication of CN113269541A publication Critical patent/CN113269541A/en
Application granted granted Critical
Publication of CN113269541B publication Critical patent/CN113269541B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/105Human resources
    • G06Q10/1053Employment or hiring
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Social Psychology (AREA)
  • Data Mining & Analysis (AREA)
  • Game Theory and Decision Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an internet-based talent online interview data analysis system and method comprising an interview background acquisition module, an interview background chaos degree analysis module, a limb action acquisition module, a limb position positioning module, a limb action chaos degree analysis module, a sight line position positioning module, a sight line position deviation analysis module, a voice recognition module, a voice coherence degree analysis module and an interview data comprehensive analysis module. The beneficial effects are that a comprehensive evaluation of the interviewee is calculated from the chaos degree of the interview background, the limb action chaos degree of the interviewee, the sight line position deviation degree of the interviewee and the voice coherence degree, which overcomes the difficulty of obtaining body language information in a traditional online interview and makes the evaluation of the interviewee more comprehensive.

Description

Talent online interview data analysis system and method based on Internet
Technical Field
The invention relates to the technical field of Internet, in particular to a talent online interview data analysis system and method based on the Internet.
Background
At present, the online interview is the mainstream form of campus recruitment, and it appears in the enrollment announcements of many enterprises. An online interview is generally conducted by voice or video, with video accounting for most of the recruiting information currently issued by companies. Research suggests that the expression of information divides into three parts: language at 7%, voice and tone at 38%, and body language at 55%. Body language therefore carries a large share of the information conveyed, yet in an online interview it is precisely the body language information that is hardest to acquire.
The prior art still has many defects. In an online interview conducted by an enterprise, the interviewer asks questions and the interviewee answers, and the interviewer's questions are limited to a certain extent: if a fixed question bank exists, the question information is drawn from it. Meanwhile, the biggest difference between online and offline interviews is that the interviewer cannot directly perceive the overall image of the interviewee online, whereas face-to-face communication allows questions to be more spontaneous and not confined to the scope of the question bank, so the evaluation of the interviewee can be more comprehensive. The question-and-answer form adopted online therefore cannot achieve the effect of an offline interview and lacks a certain degree of scientificity and objectivity, which affects the interviewer's comprehensive evaluation of the interviewee.
Based on the above problems, there is a need for an internet-based talent online interview data analysis system and method that obtains the interview background of the interviewee and analyzes its chaos degree, and further analyzes the interviewee's limb action chaos degree, sight line position deviation degree and voice coherence degree, so that a comprehensive evaluation of the interviewee can be calculated from these four quantities, overcoming the difficulty of obtaining body language information in a traditional online interview and making the evaluation of the interviewee more comprehensive.
Disclosure of Invention
The invention aims to provide a talent online interview data analysis system and method based on the Internet, so as to solve the problems in the background technology.
In order to solve the technical problems, the invention provides the following technical scheme:
An internet-based talent online interview data analysis system comprises an interview background acquisition module, an interview background chaos degree analysis module, a limb action acquisition module, a limb position positioning module, a limb action chaos degree analysis module, a sight line position positioning module, a sight line position deviation analysis module, a voice recognition module, a voice coherence degree analysis module and an interview data comprehensive analysis module.
The interview background acquisition module is used for acquiring a background picture of the interviewee's interview environment. The interview background chaos degree analysis module obtains the acquired background picture from the interview background acquisition module and analyzes its chaos degree. The limb action acquisition module acquires the limb actions of the interviewee, the limb position positioning module positions the limbs, and the limb action chaos degree analysis module analyzes the chaos degree of the limb actions according to the acquired limb actions and the limb positions. The sight line position positioning module positions the point touched by the interviewee's sight line in real time, and the sight line position deviation analysis module analyzes the positional deviation of the sight line according to real-time changes of that point. The voice recognition module distinguishes the voices of the interviewer and the interviewee, and the voice coherence degree analysis module analyzes the voice coherence degree during the interview. The interview data comprehensive analysis module comprehensively analyzes the interview data according to the interview background chaos degree Z, the limb action chaos degree H, the sight line position deviation degree K and the voice coherence degree L.
Furthermore, the interview background acquisition module is connected with the interview background chaos degree analysis module. When the interviewee connects to the online video interview, the interview background acquisition module collects a picture of the interviewee's interview background, and the interview background chaos degree analysis module obtains the collected picture from it. The interview background chaos degree analysis module analyzes the color distribution in the interview background picture: it extracts the object colors, draws a color distribution diagram from them, and further analyzes the distribution area of each color and the shape of each distribution area.
The interview background chaos degree analysis module counts the number of color distribution areas and obtains the shape of each area so as to judge its symmetry. During an online interview the interviewer cannot get a direct impression of the interviewee's overall image, yet the overall image of a person is a critical factor in an interview: some kinds of work place high demands on personal image, the degree of attention the interviewee pays to the interview can be read from it, and dress likewise reflects personality, which for many kinds of work determines whether a person is competent. When the overall image cannot be acquired, the chaos degree of the interview background is analyzed instead. An online interview is normally scheduled in advance, so the interview background reflects the interviewee's own choice; a messy background suggests that the interviewee does not attach importance to the interview or is not a careful person, and for delicate work carefulness is an essential quality. Therefore the colors in the interview background are extracted and the symmetry of each color distribution area is judged, yielding a value that reflects the chaos degree of the interview background.
Furthermore, the interview background chaos degree analysis module takes a vertical line as the symmetry axis of any color distribution area and establishes a plurality of first symmetric points on the edge of the color distribution area on one side of the axis. From the symmetry axis and the first symmetric points it determines the positions of the second symmetric points on the other side of the axis, and counts the number of second symmetric points lying on the edge of the color distribution area on that side, recorded as m1. Second symmetric points not lying on the edge of the other side are marked; the distance between each marked point and the edge of the other side is obtained, and the number of marked points whose distance is less than or equal to a distance threshold is recorded as m2. The interview background chaos degree analysis module records the number of first symmetric points as m and calculates the symmetry of the color distribution area as D = (m1 + m2)/m. It then calculates the chaos degree Z of the interview background from the symmetry values D of all N color distribution areas, where N is the number of color distribution areas. One color distribution area is analyzed first, and then the whole interview background. Judging the symmetry of a color area amounts to judging the symmetry of an object, and objects in real life are generally symmetrical; the symmetry analysis of the color distribution areas therefore reflects the overall tidiness of the interview background, since an interviewee will either tidy the background before the appointed interview time or choose a tidy one.
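As a rough illustration, the symmetry check described above can be sketched as follows. This is a minimal sketch under assumptions of our own: the point representation, the distance computation and the function names are not from the patent, and the chaos degree is taken as one minus the average region symmetry, which is one plausible reading of a formula that survives only as an image in the source.

```python
def region_symmetry(left_edge, right_edge, axis_x, dist_threshold):
    """Symmetry of one color region: mirror each left-edge point across
    the vertical axis x = axis_x and count mirrors that hit (n1) or
    nearly hit (n2) the right edge; return (n1 + n2) / n for n first
    symmetric points."""
    right_set = set(right_edge)
    n = len(left_edge)
    n1 = n2 = 0
    for (x, y) in left_edge:
        mx, my = 2 * axis_x - x, y          # reflected "second symmetric point"
        if (mx, my) in right_set:
            n1 += 1                          # lies exactly on the far edge
        else:
            d = min(((mx - ex) ** 2 + (my - ey) ** 2) ** 0.5
                    for (ex, ey) in right_edge)
            if d <= dist_threshold:
                n2 += 1                      # within the distance threshold
    return (n1 + n2) / n if n else 0.0

def background_chaos(symmetries):
    """Assumed reading: chaos degree Z falls as average symmetry rises."""
    return 1.0 - sum(symmetries) / len(symmetries)

# A region whose edges mirror perfectly about x = 2 scores symmetry 1.0.
left = [(1, 0), (1, 1), (1, 2)]
right = [(3, 0), (3, 1), (3, 2)]
sym = region_symmetry(left, right, axis_x=2, dist_threshold=0.5)
```

A real implementation would obtain the edge points from a segmented color histogram of the background frame; here they are hand-written coordinates.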
Further, the limb action chaos degree analysis module is connected with the limb action acquisition module and the limb position positioning module. When the interviewee connects to the online video interview, the limb action acquisition module acquires the interviewee's limb actions, the limb position positioning module establishes a plurality of moving reference points on the interviewee's limbs according to the acquired actions, and the limb action chaos degree analysis module analyzes the chaos degree of the limb actions according to the position moving frequency and the position moving rule of the moving reference points.
Further, the limb action chaos degree analysis module takes the first acquired positions of the moving reference points on all limbs as first origin positions and analyzes the moving frequency of all first origin positions within a certain time period starting from the moment they are established. When a first origin point moves a distance greater than or equal to a first preset value, one movement is recorded; the movements of all first origin positions are counted, and from the number of movements and the length of the time period the moving frequency f1 of each first origin point within the period is calculated. The maximum moving frequency within the period is obtained. The limb action chaos degree analysis module stores in advance a set of moving frequency intervals and the chaos evaluation value corresponding to each interval; the interval containing the maximum moving frequency is determined and the corresponding chaos evaluation value is recorded as P1. When the time period ends, the module takes its end moment as the start of the next period and the moving reference point positions acquired at that moment as second origin positions, and analyzes the moving frequency f2 of all second origin positions within the next period, and so on until the video interview ends, when the moving frequency fn of the n-th origin positions and its chaos evaluation value Pn are obtained. The limb action chaos degree analysis module then calculates the chaos degree of the limb actions according to the moving frequencies as H = (P1 + P2 + … + Pn)/n.
Research suggests that the expression of information divides into language (7%), voice and tone (38%) and body language (55%), so body language carries a large share of the information conveyed, yet in an online interview it is precisely the hardest to acquire. A plurality of moving reference points is therefore established on the limbs from the acquired limb actions. A small jitter cannot generally be regarded as a limb action, so a distance threshold is set, and a limb action of the interviewee is recognized by comparing the moving distance of a reference point with this threshold. The moving frequency of the reference points within each time period is counted, the maximum moving frequency of each period is obtained, and the average of the resulting values is taken as the interviewee's limb action chaos degree for the whole interview; the larger the moving frequency, the more frequent the interviewee's limb movements within the period.
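The windowed frequency analysis above can be sketched as below. The table values, thresholds and function names are illustrative assumptions, not values from the patent; only the structure (per-window maximum frequency, interval lookup, average of the evaluation values) follows the description.

```python
def window_max_frequency(tracks, min_dist, window_seconds):
    """tracks: {point_id: [(x, y), ...]} positions sampled in one window.
    A movement is counted when consecutive samples are >= min_dist apart;
    returns the largest per-point movement frequency in the window."""
    best = 0.0
    for samples in tracks.values():
        moves = sum(1 for (x0, y0), (x1, y1) in zip(samples, samples[1:])
                    if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 >= min_dist)
        best = max(best, moves / window_seconds)
    return best

# Illustrative (low, high, score) lookup table, not the patent's values.
CHAOS_TABLE = [(0.0, 0.1, 1), (0.1, 0.5, 2), (0.5, float("inf"), 3)]

def chaos_score(freq):
    """Map a maximum moving frequency to its stored chaos evaluation value."""
    for low, high, score in CHAOS_TABLE:
        if low <= freq < high:
            return score
    return CHAOS_TABLE[-1][2]

def limb_chaos_degree(window_frequencies):
    """H: average of the per-window chaos evaluation values."""
    scores = [chaos_score(f) for f in window_frequencies]
    return sum(scores) / len(scores)

# One 10-second window where the hand reference point moves once by 5 units.
freq = window_max_frequency({"hand": [(0, 0), (0, 5), (0, 5)]},
                            min_dist=3, window_seconds=10)
```

In practice the tracks would come from pose landmarks detected in the video stream, one dictionary per time window.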
Further, the sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module.
When the interviewee connects to the online video interview, the sight line position positioning module locks the position touched by the interviewee's sight line, and the voice recognition module distinguishes the voices of the interviewer and the interviewee.
If the currently recognized voice is the interviewee's, the limb positions of the interviewee are obtained through the limb position positioning module; when the moving distance of a limb position is greater than or equal to the first preset value, one limb action is recorded. The sight line position deviation analysis module obtains the current sight line position through the sight line position positioning module and further obtains the moving limb part; if the moving part is a hand, it judges whether the sight line position corresponds to the position of the interviewee's hand, and records the number of limb movements of the interviewee as c1 and the number of times the sight line corresponds to the hand position as c2.
If the currently recognized voice is the interviewer's, the current sight line position of the interviewee is obtained through the sight line position positioning module, and the sight line position deviation analysis module records the number of sight line movements of the interviewee as d1 and the number of times the sight line corresponds to the screen as d2.
The sight line position deviation analysis module then calculates the sight line deviation degree K of the interviewee from c1, c2, d1 and d2, wherein λ1 and λ2 are weighting coefficients and λ1 + λ2 = 1.
During a conversation a person looks at the other party out of respect and to show attention to the conversation; some technical posts may require the interviewee to use body language during technical communication, so the position touched by the interviewee's sight line is further obtained. A person generally looks at a hand while it is moving and looks at the other party while conversing, so the current speaker is identified first: if the interviewee is speaking, the limb actions and sight line position are obtained; if the interviewer is speaking, the interviewee's sight line position is obtained and it is judged whether the interviewee is looking at the screen. The sight line deviation degree of the interviewee is then analyzed from the collected data.
Further, the voice recognition module is connected with the voice coherence degree analysis module. When the interviewee connects to the online video interview, the voice recognition module recognizes the voice throughout the interview and judges whether the interviewee or the interviewer is currently speaking.
If the current voice is the interviewee's, the interviewee speaks within a time period t1. The voice coherence degree analysis module counts the number of pauses of the interviewee and the duration of each pause, and further calculates the interviewee's total pause time T1. The voice coherence degree analysis module analyzes the ratio of the total pause time T1 to the time period t1; it further calculates this ratio of total pause time to speaking time for every utterance of the interviewee in the whole interview and obtains the average u of all the ratios. The voice coherence degree analysis module stores in advance a set of ratio-average intervals and the coherence evaluation value corresponding to each interval, from which the evaluation value W1 is obtained.
If the current voice is the interviewer's, the interviewer speaks within a time period t2. The voice coherence degree analysis module counts the number of times the interviewee's voice occurs during this period; it stores in advance a set of voice-occurrence-count intervals and the coherence evaluation value corresponding to each interval, from which the evaluation value W2 is obtained. The voice coherence degree analysis module then calculates the voice coherence degree of the interviewee as L = μ1·W1 + μ2·W2, wherein μ1 and μ2 are weighting coefficients and μ1 + μ2 = 1.
Generally, interrupting another person's speech during a conversation is very impolite, and the nature of many jobs requires the interviewee to have good communication logic, clear thinking and strong language organization ability: service industries require frequent contact with clients, and research and development work requires frequent discussion of technical schemes. The number of pauses within one utterance reflects the interviewee's language expression ability and whether the train of thought is clear, and when the interviewer is identified as currently speaking, whether the interviewee interrupts is analyzed, so that the interviewee is evaluated comprehensively according to this behavior.
Furthermore, the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action chaos degree analysis module, the sight line position deviation analysis module and the voice coherence degree analysis module.
The interview data comprehensive analysis module obtains the interview background chaos degree Z, the limb action chaos degree H, the sight line position deviation degree K and the voice coherence degree L, and through the comprehensive analysis of the interview data further calculates a comprehensive analysis evaluation value F, wherein k1, k2, k3 and k4 are weighting coefficients and k1 + k2 + k3 + k4 = 1.
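The combination itself survives only as an image in the source, so the following is an assumed reading rather than the patent's actual formula: the three disorder/deviation degrees, taken as normalized to [0, 1], lower the evaluation, while the coherence degree raises it, mixed by four coefficients that sum to 1.

```python
def overall_evaluation(bg_chaos, limb_chaos, gaze_dev, coherence,
                       weights=(0.25, 0.25, 0.25, 0.25)):
    """Assumed combination: disorder scores act as penalties, coherence
    as a bonus; all four inputs are taken as normalized to [0, 1]."""
    k1, k2, k3, k4 = weights
    assert abs(sum(weights) - 1.0) < 1e-9, "coefficients must sum to 1"
    return (k1 * (1.0 - bg_chaos) + k2 * (1.0 - limb_chaos)
            + k3 * (1.0 - gaze_dev) + k4 * coherence)

# Tidy background, moderate limb motion, attentive gaze, fluent speech.
score = overall_evaluation(0.2, 0.3, 0.1, 0.8)
```

The equal default weights are a neutral choice; in the patent's scheme they would be tuned to the post being recruited for.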
further, the online interview data analysis method based on the internet for the talents comprises the following steps:
s1: analyzing the degree of confusion of the interviewer background;
s2: analyzing the degree of disorder of the body movements of the interviewer;
s3: analyzing the visual line position deviation degree of the interviewer;
s4: analyzing the voice coherence degree of the interviewer;
s5: and calculating the comprehensive evaluation of the interviewer according to the background disorder degree, the limb action disorder degree, the sight line position deviation degree and the voice continuity degree of the interview.
Further, the interview data analysis method comprises the following steps:
S1-1: the interview background acquisition module is connected with the interview background chaos degree analysis module. When the interviewee connects to the online video interview, the interview background acquisition module collects a picture of the interviewee's interview background, and the interview background chaos degree analysis module obtains the collected picture from it. The interview background chaos degree analysis module analyzes the color distribution in the interview background picture: it extracts the object colors, draws a color distribution diagram from them, and further analyzes the distribution area of each color and the shape of each distribution area.
The interview background chaos degree analysis module counts the number of color distribution areas, obtains the shape of each area and judges its symmetry. It takes a vertical line as the symmetry axis of any color distribution area and establishes a plurality of first symmetric points on the edge of the area on one side of the axis; from the symmetry axis and the first symmetric points it determines the positions of the second symmetric points on the other side of the axis, and counts the number of second symmetric points lying on the edge of the area on that side, recorded as m1. Second symmetric points not lying on the edge of the other side are marked; the distance between each marked point and that edge is obtained, and the number of marked points whose distance is less than or equal to the distance threshold is recorded as m2. The interview background chaos degree analysis module records the number of first symmetric points as m, calculates the symmetry of the color distribution area as D = (m1 + m2)/m, and then calculates the chaos degree Z of the interview background from the symmetry values of all areas, wherein N is the number of color distribution areas;
s2-1: the limb action chaos degree analysis module is connected with the limb action acquisition module and the limb position positioning module; the limb action acquisition module acquires the interviewee's limb actions when the interviewee joins the online video interview, the limb position positioning module establishes a plurality of moving reference points on the interviewee's limbs according to the acquired actions, and the limb action chaos degree analysis module analyzes the chaos degree of the interviewee's limb actions from the moving frequency and the movement pattern of those reference points: the module takes the first-acquired positions of all the moving reference points as first origin positions and analyzes their moving frequency over a set time period starting from the moment the first origin positions are established; whenever a first origin point moves a distance greater than or equal to a first preset value, one movement is recorded; the movements of all first origin positions are counted, and the moving frequency f1 of each first origin point within the period is calculated from the number of movements and the length of the period; the maximum moving frequency within the period is obtained, and the module, which pre-stores moving-frequency intervals and the chaos evaluation value corresponding to each interval, determines which interval the maximum falls into and records the corresponding chaos evaluation value as q1; when the period ends, the module takes its end time as the start time of the next period and the reference-point positions acquired at that start time as second origin positions, and analyzes the moving frequency f2 of all second origin positions over the following period; this is repeated until the video interview ends, yielding the moving frequency fn of the nth origin positions; the limb action chaos degree analysis module then calculates the chaos degree W2 of the limb actions from these moving frequencies, wherein qn is the chaos evaluation value corresponding to the maximum moving frequency of the nth origin positions;
s3-1: the sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module; the sight line position positioning module locks the interviewee's sight line position when the interviewee joins the online video interview, and the voice recognition module distinguishes the voices of the interviewee and the interviewer; if the currently recognized voice is from the interviewee, the interviewee's limb positions are obtained through the limb position positioning module, and one limb action is recorded whenever a limb position moves a distance greater than or equal to the first preset value; the sight line position deviation analysis module obtains the interviewee's current sight line position through the sight line position positioning module together with the position of the moving limb and, if the moving limb is a hand, judges whether the sight line position corresponds to the interviewee's hand position, recording the number of the interviewee's limb movements as c1 and the number of times the sight line corresponds to the hand position as c2; if the currently recognized voice is from the interviewer, the interviewee's current sight line position is obtained through the sight line position positioning module, and the sight line position deviation analysis module records the number of the interviewee's sight line movements as c3 and the number of times the sight line corresponds to the screen position as c4; the sight line position deviation analysis module then calculates the interviewee's sight line deviation degree W3 from c1, c2, c3 and c4, wherein e1 and e2 are weighting coefficients;
s4-1: the voice recognition module is connected with the voice coherence degree analysis module; when the interviewee joins the online video interview, the voice recognition module recognizes the voice throughout the interview and judges whether the interviewee or the interviewer is currently speaking; if the current voice is from the interviewee, the interviewee speaks within a time period t1; the voice coherence degree analysis module counts the interviewee's pauses and the duration of each pause, and calculates the interviewee's total pause time T; from the total pause time T and the period t1, it calculates, for each of the interviewee's utterances in the whole interview, the ratio of total pause time to speaking time, computes the average P of all the ratios, and looks up the coherence evaluation value g1 from pre-stored ratio-average intervals and the coherence evaluation value corresponding to each interval; if the current voice is from the interviewer, the interviewer speaks within a time period t2; the voice coherence degree analysis module counts the number of times the interviewee's voice occurs during that period and looks up the coherence evaluation value g2 from pre-stored occurrence-count intervals and the coherence evaluation value corresponding to each interval; the voice coherence degree analysis module then calculates the interviewee's voice coherence degree W4 from g1 and g2, wherein h1 and h2 are weighting coefficients;
s5-1: the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action chaos degree analysis module, the sight line position deviation degree analysis module and the voice coherence degree analysis module; the interview data comprehensive analysis module acquires the interview background chaos degree W1, the limb action chaos degree W2, the sight line position deviation degree W3 and the voice coherence degree W4, comprehensively analyzes the interview data, and calculates a comprehensive analysis evaluation value Z, wherein k1, k2, k3 and k4 are coefficients.
Compared with the prior art, the invention has the following beneficial effects: the system acquires the interviewee's interview background and analyzes its chaos degree, and further analyzes the interviewee's limb action chaos degree, sight line position deviation degree and voice coherence degree, so that a comprehensive evaluation of the interviewee can be calculated from the interview background chaos degree, the limb action chaos degree, the sight line position deviation degree and the voice coherence degree; this overcomes the difficulty of obtaining body-language information in a traditional online interview and makes the evaluation of the interviewee more comprehensive.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a block schematic diagram of an Internet-based talent online interview data analysis system of the present invention;
FIG. 2 is a schematic diagram of the steps of the online talent interview data analysis method based on the Internet.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-2, the present invention provides a technical solution:
An internet-based talent online interview data analysis system comprises an interview background acquisition module, an interview background chaos degree analysis module, a limb action acquisition module, a limb position positioning module, a limb action chaos degree analysis module, a sight line position positioning module, a sight line position deviation analysis module, a voice recognition module, a voice coherence degree analysis module and an interview data comprehensive analysis module.
The interview background acquisition module is used for acquiring a background picture of the interviewee's interview environment; the interview background chaos degree analysis module obtains the acquired background picture through the interview background acquisition module and analyzes its chaos degree; the limb action acquisition module is used for acquiring the interviewee's limb actions; the limb position positioning module is used for positioning the limb positions; the limb action chaos degree analysis module is used for analyzing the chaos degree of the limb actions from the acquired actions and the positioned limb positions; the sight line position positioning module is used for positioning, in real time, the position the interviewee's sight line falls on; the sight line position deviation analysis module is used for analyzing the deviation of the sight line from the real-time changes of that position; the voice recognition module is used for recognizing and distinguishing the voices of the interviewee and the interviewer; the voice coherence degree analysis module is used for analyzing the voice coherence during the interview; and the interview data comprehensive analysis module is used for comprehensively analyzing the interview data from the interview background chaos degree W1, the limb action chaos degree W2, the sight line position deviation degree W3 and the voice coherence degree W4.
The interview background acquisition module is connected with the interview background chaos degree analysis module. The interview background acquisition module acquires a picture of the interviewee's interview background when the interviewee joins the online video interview, and the interview background chaos degree analysis module obtains that picture through the acquisition module and analyzes the color distribution in it: it extracts the object colors in the picture, draws a color distribution diagram from the extracted colors, and analyzes the distribution areas of the colors and their shapes. The module counts the number of color distribution areas, obtains the shape of each area, and judges its symmetry: it takes a vertical line as the symmetry axis of the area, establishes a plurality of first symmetric points on the edge of the area on one side of the axis, determines, from the axis and each first symmetric point, the position of the corresponding second symmetric point on the other side, and records the number of second symmetric points lying on the edge of the area on the other side as a. Second symmetric points not lying on that edge are marked, the distance from each marked point to the edge on the other side is obtained, and the number of marked points whose distance is less than or equal to a distance threshold is recorded as b. With the number of first symmetric points recorded as M, the module calculates the symmetry d of each area from a, b and M, and then calculates the chaos degree W1 of the interview background, where N is the number of color distribution areas.
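By way of illustration only: the patent gives the formulas for the symmetry d and the background chaos degree W1 as image references that did not survive extraction, so the sketch below is an assumption, not the patented formula. It assumes symmetry is the fraction of mirrored edge points that land on or near the opposite edge (combining the counts a and b over the M first points) and that the chaos degree averages the regions' asymmetry.

```python
# Illustrative sketch only: the exact forms of d and W1 are assumptions,
# since the original formulas are unrecoverable image references.
def region_symmetry(edge_points, axis_x, tol=0.5):
    """Fraction of first symmetric points (edge points left of the vertical
    axis x = axis_x) whose mirror image lies within `tol` of some edge point."""
    first = [(x, y) for x, y in edge_points if x < axis_x]
    if not first:
        return 1.0
    matched = 0
    for x, y in first:
        mx = 2 * axis_x - x  # second symmetric point: mirror across the axis
        if any(abs(mx - ex) <= tol and abs(y - ey) <= tol
               for ex, ey in edge_points):
            matched += 1
    return matched / len(first)

def background_chaos(symmetries):
    """Assumed aggregation over the N color regions: lower symmetry -> higher chaos."""
    return sum(1.0 - d for d in symmetries) / len(symmetries)
```

A perfectly mirror-symmetric region scores 1.0 and contributes nothing to the chaos degree; a region whose mirrored points all miss the opposite edge scores 0.0 and contributes fully.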
The limb action chaos degree analysis module is connected with the limb action acquisition module and the limb position positioning module. The limb action acquisition module acquires the interviewee's limb actions when the interviewee joins the online video interview, the limb position positioning module establishes a plurality of moving reference points on the interviewee's limbs according to the acquired actions, and the limb action chaos degree analysis module analyzes the chaos degree of the interviewee's limb actions from the moving frequency and the movement pattern of those reference points. The module takes the first-acquired positions of all the moving reference points as first origin positions and analyzes their moving frequency over a set time period starting from the moment the first origin positions are established; whenever a first origin point moves a distance greater than or equal to a first preset value, one movement is recorded. The movements of all first origin positions are counted, and the moving frequency f1 of each first origin point within the period is calculated from the number of movements and the length of the period. The maximum moving frequency within the period is obtained; the module pre-stores moving-frequency intervals and the chaos evaluation value corresponding to each interval, determines which interval the maximum falls into, and records the corresponding chaos evaluation value as q1. When the period ends, the module takes its end time as the start time of the next period and the reference-point positions acquired at that start time as second origin positions, and analyzes the moving frequency f2 of all second origin positions over the following period. This is repeated until the video interview ends, yielding the moving frequency fn of the nth origin positions. The limb action chaos degree analysis module then calculates the chaos degree W2 of the limb actions from these moving frequencies, where qn is the chaos evaluation value corresponding to the maximum moving frequency of the nth origin positions.
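The per-period bookkeeping above can be sketched as follows. The band boundaries, chaos values, and the averaging of q1..qn are illustrative assumptions; the patent's W2 formula is an unrecoverable image reference.

```python
import math

# Illustrative sketch only: per-period movement counting and chaos lookup.
def count_moves(positions, threshold):
    """Count movements of one reference point within a period: each time the
    point drifts >= threshold from its current origin, record one move and
    let the new position become the next origin."""
    moves = 0
    origin = positions[0]
    for p in positions[1:]:
        if math.dist(origin, p) >= threshold:
            moves += 1
            origin = p
    return moves

def chaos_evaluation(freq, bands):
    """Look up the pre-stored chaos value for the band `freq` falls into;
    `bands` is a list of (upper_bound, value) pairs in ascending order."""
    for upper, value in bands:
        if freq <= upper:
            return value
    return bands[-1][1]

def limb_chaos(max_freq_per_period, bands):
    """Assumed aggregation of the per-period values q1..qn: their average."""
    values = [chaos_evaluation(f, bands) for f in max_freq_per_period]
    return sum(values) / len(values)
```

Each interview period contributes one chaos evaluation value looked up from its maximum reference-point frequency; a fidgety interviewee produces high per-period frequencies and thus a high W2.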
The sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module. The sight line position positioning module locks the interviewee's sight line position when the interviewee joins the online video interview, and the voice recognition module distinguishes the voices of the interviewee and the interviewer. If the currently recognized voice is from the interviewee, the interviewee's limb positions are obtained through the limb position positioning module, and one limb action is recorded whenever a limb position moves a distance greater than or equal to the first preset value; the sight line position deviation analysis module obtains the interviewee's current sight line position through the sight line position positioning module together with the position of the moving limb and, if the moving limb is a hand, judges whether the sight line position corresponds to the interviewee's hand position, recording the number of the interviewee's limb movements as c1 and the number of times the sight line corresponds to the hand position as c2. If the currently recognized voice is from the interviewer, the interviewee's current sight line position is obtained through the sight line position positioning module, and the sight line position deviation analysis module records the number of the interviewee's sight line movements as c3 and the number of times the sight line corresponds to the screen position as c4. The sight line position deviation analysis module then calculates the interviewee's sight line deviation degree W3 from c1, c2, c3 and c4, where e1 and e2 are weighting coefficients.
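One plausible reading of the deviation degree W3 (the actual formula is an unrecoverable image reference, so the combination below is an assumption): weight the rate at which the gaze misses the gesturing hand while the interviewee speaks against the rate at which it misses the screen while the interviewer speaks.

```python
# Illustrative sketch only: the combination below is an assumed form of W3.
def gaze_deviation(limb_moves, gaze_on_hand, gaze_moves, gaze_on_screen,
                   e1=0.5, e2=0.5):
    """c1 = limb_moves, c2 = gaze_on_hand, c3 = gaze_moves,
    c4 = gaze_on_screen; e1 and e2 are the weighting coefficients."""
    miss_hand = 1 - gaze_on_hand / limb_moves if limb_moves else 0.0
    miss_screen = 1 - gaze_on_screen / gaze_moves if gaze_moves else 0.0
    return e1 * miss_hand + e2 * miss_screen
```

Under this reading, an interviewee whose gaze always follows the gesturing hand and stays on the screen while listening scores a deviation of zero.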
The voice recognition module is connected with the voice coherence degree analysis module. When the interviewee joins the online video interview, the voice recognition module recognizes the voice throughout the interview and judges whether the interviewee or the interviewer is currently speaking. If the current voice is from the interviewee, the interviewee speaks within a time period t1; the voice coherence degree analysis module counts the interviewee's pauses and the duration of each pause, and calculates the interviewee's total pause time T. From the total pause time T and the period t1, it calculates, for each of the interviewee's utterances in the whole interview, the ratio of total pause time to speaking time, computes the average P of all the ratios, and looks up the coherence evaluation value g1 from pre-stored ratio-average intervals and the coherence evaluation value corresponding to each interval. If the current voice is from the interviewer, the interviewer speaks within a time period t2; the voice coherence degree analysis module counts the number of times the interviewee's voice occurs during that period and looks up the coherence evaluation value g2 from pre-stored occurrence-count intervals and the coherence evaluation value corresponding to each interval. The voice coherence degree analysis module then calculates the interviewee's voice coherence degree W4 from g1 and g2, where h1 and h2 are weighting coefficients.
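The coherence scoring above can be sketched as follows; the band boundaries, the scores inside them, and the weighted combination of g1 and g2 are illustrative assumptions, since the patent's W4 formula is an unrecoverable image reference.

```python
# Illustrative sketch only: the pre-stored bands and the form of W4 are assumed.
def pause_ratio_mean(utterances):
    """utterances: (total_pause_time, speaking_time) per interviewee answer;
    returns the average P of the pause/speaking ratios."""
    return sum(p / t for p, t in utterances) / len(utterances)

def band_lookup(value, bands):
    """bands: ascending (upper_bound, score) pairs, as pre-stored by the module."""
    for upper, score in bands:
        if value <= upper:
            return score
    return bands[-1][1]

def voice_coherence(utterances, interruptions, ratio_bands, count_bands,
                    h1=0.6, h2=0.4):
    """Combine g1 (pause-ratio score) and g2 (score for how often the
    interviewee's voice occurs while the interviewer speaks) with assumed
    weights h1 and h2."""
    g1 = band_lookup(pause_ratio_mean(utterances), ratio_bands)
    g2 = band_lookup(interruptions, count_bands)
    return h1 * g1 + h2 * g2
```

Fewer and shorter pauses while answering, and fewer interruptions while the interviewer speaks, both push the coherence degree up.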
The interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action chaos degree analysis module, the sight line position deviation degree analysis module and the voice coherence degree analysis module. The interview data comprehensive analysis module acquires the interview background chaos degree W1, the limb action chaos degree W2, the sight line position deviation degree W3 and the voice coherence degree W4, comprehensively analyzes the interview data, and calculates a comprehensive analysis evaluation value Z, where k1, k2, k3 and k4 are coefficients.
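The comprehensive evaluation Z is a weighted combination of the four degrees; its exact coefficients and signs exist in the source only as image references, so the sketch below assumes the three chaos/deviation degrees lower the score while the coherence degree raises it.

```python
# Illustrative sketch only: the sign convention and weights k1..k4 are assumptions.
def comprehensive_score(w1, w2, w3, w4, k=(0.2, 0.3, 0.2, 0.3)):
    """w1: background chaos, w2: limb-action chaos, w3: gaze deviation,
    w4: voice coherence; returns the assumed evaluation value Z."""
    k1, k2, k3, k4 = k
    return k4 * w4 - (k1 * w1 + k2 * w2 + k3 * w3)
```

With this convention a tidy background, calm body language, attentive gaze and fluent speech maximize Z, matching the stated aim of a more comprehensive evaluation of the interviewee.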
An internet-based talent online interview data analysis method comprises the following steps:
s1: analyzing the chaos degree of the interviewee's interview background;
s2: analyzing the chaos degree of the interviewee's limb actions;
s3: analyzing the interviewee's sight line position deviation degree;
s4: analyzing the interviewee's voice coherence degree;
s5: calculating a comprehensive evaluation of the interviewee from the interview background chaos degree, the limb action chaos degree, the sight line position deviation degree and the voice coherence degree.
The interview data analysis method further comprises the following steps:
s1-1: the interview background acquisition module is connected with the interview background chaos degree analysis module; the interview background acquisition module acquires a picture of the interviewee's interview background when the interviewee joins the online video interview, and the interview background chaos degree analysis module obtains that picture through the acquisition module and analyzes the color distribution in it: it extracts the object colors in the picture, draws a color distribution diagram from the extracted colors, and analyzes the distribution areas of the colors and their shapes; the module counts the number of color distribution areas, obtains the shape of each area, and judges its symmetry by taking a vertical line as the symmetry axis of the area, establishing a plurality of first symmetric points on the edge of the area on one side of the axis, determining, from the axis and each first symmetric point, the position of the corresponding second symmetric point on the other side, and recording the number of second symmetric points lying on the edge of the area on the other side as a; second symmetric points not lying on that edge are marked, the distance from each marked point to the edge on the other side is obtained, and the number of marked points whose distance is less than or equal to a distance threshold is recorded as b; with the number of first symmetric points recorded as M, the module calculates the symmetry d of each area from a, b and M, and then calculates the chaos degree W1 of the interview background, wherein N is the number of color distribution areas;
s2-1: the limb action chaos degree analysis module is connected with the limb action acquisition module and the limb position positioning module; the limb action acquisition module acquires the interviewee's limb actions when the interviewee joins the online video interview, the limb position positioning module establishes a plurality of moving reference points on the interviewee's limbs according to the acquired actions, and the limb action chaos degree analysis module analyzes the chaos degree of the interviewee's limb actions from the moving frequency and the movement pattern of those reference points: the module takes the first-acquired positions of all the moving reference points as first origin positions and analyzes their moving frequency over a set time period starting from the moment the first origin positions are established; whenever a first origin point moves a distance greater than or equal to a first preset value, one movement is recorded; the movements of all first origin positions are counted, and the moving frequency f1 of each first origin point within the period is calculated from the number of movements and the length of the period; the maximum moving frequency within the period is obtained, and the module, which pre-stores moving-frequency intervals and the chaos evaluation value corresponding to each interval, determines which interval the maximum falls into and records the corresponding chaos evaluation value as q1; when the period ends, the module takes its end time as the start time of the next period and the reference-point positions acquired at that start time as second origin positions, and analyzes the moving frequency f2 of all second origin positions over the following period; this is repeated until the video interview ends, yielding the moving frequency fn of the nth origin positions; the limb action chaos degree analysis module then calculates the chaos degree W2 of the limb actions from these moving frequencies, wherein qn is the chaos evaluation value corresponding to the maximum moving frequency of the nth origin positions;
s3-1: the sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module; the sight line position positioning module locks the interviewee's sight line position when the interviewee joins the online video interview, and the voice recognition module distinguishes the voices of the interviewee and the interviewer; if the currently recognized voice is from the interviewee, the interviewee's limb positions are obtained through the limb position positioning module, and one limb action is recorded whenever a limb position moves a distance greater than or equal to the first preset value; the sight line position deviation analysis module obtains the interviewee's current sight line position through the sight line position positioning module together with the position of the moving limb and, if the moving limb is a hand, judges whether the sight line position corresponds to the interviewee's hand position, recording the number of the interviewee's limb movements as c1 and the number of times the sight line corresponds to the hand position as c2; if the currently recognized voice is from the interviewer, the interviewee's current sight line position is obtained through the sight line position positioning module, and the sight line position deviation analysis module records the number of the interviewee's sight line movements as c3 and the number of times the sight line corresponds to the screen position as c4; the sight line position deviation analysis module then calculates the interviewee's sight line deviation degree W3 from c1, c2, c3 and c4, wherein e1 and e2 are weighting coefficients;
s4-1: the voice recognition module is connected with the voice coherence degree analysis module; when the interviewee joins the online video interview, the voice recognition module recognizes the voice throughout the interview and judges whether the interviewee or the interviewer is currently speaking; if the current voice is from the interviewee, the interviewee speaks within a time period t1; the voice coherence degree analysis module counts the interviewee's pauses and the duration of each pause, and calculates the interviewee's total pause time T; from the total pause time T and the period t1, it calculates, for each of the interviewee's utterances in the whole interview, the ratio of total pause time to speaking time, computes the average P of all the ratios, and looks up the coherence evaluation value g1 from pre-stored ratio-average intervals and the coherence evaluation value corresponding to each interval; if the current voice is from the interviewer, the interviewer speaks within a time period t2; the voice coherence degree analysis module counts the number of times the interviewee's voice occurs during that period and looks up the coherence evaluation value g2 from pre-stored occurrence-count intervals and the coherence evaluation value corresponding to each interval; the voice coherence degree analysis module then calculates the interviewee's voice coherence degree W4 from g1 and g2, wherein h1 and h2 are weighting coefficients;
S5-1: the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action disorder degree analysis module, the sight line position deviation analysis module and the voice coherence degree analysis module.
The interview data comprehensive analysis module further acquires the interview background chaos degree [formula image], the limb action disorder degree [formula image], the sight line position deviation degree [formula image] and the voice coherence degree [formula image], comprehensively analyzes the interview data, and further calculates a comprehensive analysis evaluation value [formula image], wherein [formula image], [formula image], [formula image] and [formula image] are coefficients [formula image].
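The comprehensive-evaluation formula itself survives only as an image in this translation, so only its inputs and the coefficient constraint (each coefficient greater than 0 and less than 1) are recoverable. A minimal sketch of one plausible weighted-sum reading, with hypothetical weights `w1`–`w4`:

```python
def comprehensive_evaluation(background_chaos, limb_disorder,
                             gaze_deviation, speech_coherence,
                             w1=0.2, w2=0.2, w3=0.3, w4=0.3):
    """Hypothetical comprehensive analysis evaluation value.

    The patent shows the formula only as an image and states merely that
    the coefficients are all greater than 0 and less than 1; the weighted
    sum below and the default weights are assumptions for illustration.
    """
    for w in (w1, w2, w3, w4):
        assert 0 < w < 1, "coefficients must lie in (0, 1)"
    return (w1 * background_chaos + w2 * limb_disorder
            + w3 * gaze_deviation + w4 * speech_coherence)
```

Whether each sub-score contributes positively or negatively to the final evaluation is not recoverable from the translated text, so the sketch simply combines them linearly.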
it is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. An Internet-based talent online interview data analysis system, characterized in that it comprises an interview background acquisition module, an interview background chaos degree analysis module, a limb action acquisition module, a limb position positioning module, a limb action disorder degree analysis module, a sight line position positioning module, a sight line position deviation analysis module, a voice recognition module, a voice coherence degree analysis module and an interview data comprehensive analysis module,
the interview background acquisition module is used for acquiring background pictures of the interview environment of the interviewer; the interview background chaos degree analysis module acquires the collected background pictures through the interview background acquisition module and analyzes the chaos degree of the background pictures; the limb action acquisition module is used for acquiring the limb actions of the interviewer; the limb position positioning module is used for positioning the limb positions; the limb action disorder degree analysis module is used for analyzing the disorder degree of the limb actions according to the acquired limb actions and the positioning of the limb positions; the sight line position positioning module is used for positioning the sight line position of the interviewer in real time; the sight line position deviation analysis module is used for analyzing the position deviation of the sight line according to the real-time change of the positioned sight line position; the voice recognition module is used for distinguishing the voices of the interviewer and the interviewee; the voice coherence degree analysis module is used for analyzing the voice coherence degree in the interview process; and the interview data comprehensive analysis module comprehensively analyzes the interview data according to the interview background chaos degree [formula image], the limb action disorder degree [formula image], the sight line position deviation degree [formula image] and the voice coherence degree [formula image];
the interview background acquisition module is connected with the interview background chaos degree analysis module,
the interview background acquisition module acquires an interview background picture of the interviewer when the interviewer connects the online video interview, and the interview background chaos degree analysis module acquires the collected interview background picture through the interview background acquisition module; the interview background chaos degree analysis module analyzes the color distribution in the interview background picture: it extracts the object colors in the interview background picture, draws a color distribution diagram according to the extracted object colors, and further analyzes the distribution areas of the various colors in the color distribution diagram and the shapes of those distribution areas;
the interview background chaos degree analysis module counts the number of areas according to the color distribution areas, further obtains the shape of any color distribution area, and judges the symmetry of that color distribution area;
the interview background chaos degree analysis module takes a vertical line as the symmetry axis of any color distribution area, establishes a plurality of first symmetric points on the edge of the color distribution area on either side of the symmetry axis, determines the positions of the second symmetric points on the other side of the symmetry axis according to the symmetry axis and the first symmetric points, and counts the number of second symmetric points that lie on the edge of the color distribution area on the other side, recorded as [formula image];
it marks the second symmetric points that do not lie on the edge of the color distribution area on the other side, further obtains the distance between each marked symmetric point and that edge, and counts the number of marked symmetric points whose distance is less than or equal to a distance threshold value, recorded as [formula image];
the interview background chaos degree analysis module records the number of first symmetric points as [formula image] and calculates the symmetry of the color distribution area [formula image]; the interview background chaos degree analysis module further calculates the chaos degree of the interview background [formula image], wherein N is the number of color distribution regions.
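The symmetry and chaos-degree formulas above appear only as images in the translation; what the text does state are the three counts per region (first symmetric points, mirrored points landing on the far edge, and near-edge marked points) and that the chaos degree aggregates over the N regions. A hypothetical sketch of one consistent reading:

```python
def region_symmetry(on_edge, near_edge, first_points):
    """Hypothetical symmetry score for one color region.

    on_edge:      second symmetric points lying exactly on the far edge
    near_edge:    marked points within the distance threshold of the edge
    first_points: number of first symmetric points placed on one side
    """
    return (on_edge + near_edge) / first_points

def background_chaos(regions):
    """Hypothetical chaos degree: 1 minus the mean symmetry over N regions.

    regions: list of (on_edge, near_edge, first_points) tuples.
    """
    n = len(regions)
    return 1 - sum(region_symmetry(*r) for r in regions) / n
```

A perfectly mirror-symmetric background (every mirrored point on the far edge) gives chaos 0 under this reading; asymmetric regions push the score toward 1.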
2. The internet-based talent online interview data analysis system of claim 1, wherein: the limb action disorder degree analysis module is connected with the limb action acquisition module and the limb position positioning module,
the body action obtaining module obtains the body action of the interviewer when the interviewer connects an online video interview, the body position positioning module establishes a plurality of mobile reference points on the body of the interviewer according to the obtained body action, and the body action disorder degree analyzing module analyzes the body action disorder degree of the interviewer according to the position moving frequency and the position moving rule of the mobile reference points.
3. The Internet-based talent online interview data analysis system of claim 2, wherein: the limb action disorder degree analysis module takes the positions of the mobile reference points on all the limbs acquired first as first original point positions and analyzes the moving frequency of all the first original point positions within a certain time period starting from the moment the first original point positions are established; when the moving distance of a first original point is greater than or equal to a first preset value, one movement of the first original point is recorded; the module counts the number of movements of all the first original point positions and further calculates, from the number of movements and the certain time period, the moving frequency of each first original point within the time period [formula image];
the module obtains the maximum value of the moving frequency in the time period; the limb action disorder degree analysis module prestores moving frequency intervals and the disorder evaluation value corresponding to each moving frequency interval, determines the moving frequency interval according to the maximum value of the moving frequency, and looks up the corresponding disorder evaluation value, recorded as [formula image];
when the certain time period ends, the limb action disorder degree analysis module takes the end time of that time period as the start time of the next time period, takes the positions of the mobile reference points acquired at the start time as second original point positions, and analyzes the moving frequency of all the second original point positions within the certain time period starting from the start time [formula image]; and so on until the video interview ends, whereby the moving frequency of the nth original point positions [formula image] is obtained by analysis;
the limb action disorder degree analysis module calculates the disorder degree of the limb actions from the moving frequencies [formula image], wherein [formula image] is the disorder evaluation value corresponding to the maximum moving frequency of the nth original points.
4. The internet-based talent online interview data analysis system of claim 1, wherein: the sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module,
the sight line position positioning module locks the sight line position of the interviewer when the interviewer connects the online video interview, and the voice recognition module recognizes the voices of the interviewer and the interviewee;
if the currently recognized voice is sent by the interviewer, the limb positions of the interviewer are obtained through the limb position positioning module, and when the moving distance of a limb position is greater than or equal to a first preset value, one limb action is recorded; the sight line position deviation analysis module obtains the sight line position of the current interviewer through the sight line position positioning module and further obtains the moving limb part; if the moving part of the interviewer is a hand, the module judges whether the sight line position corresponds to the hand position of the interviewer, and records the number of limb movements of the interviewer [formula image] and the number of times that the sight line position corresponds to the hand position of the interviewer [formula image];
if the currently recognized voice is sent out by the interviewee, the sight line position of the current interviewer is obtained through the sight line position positioning module, and the sight line position deviation analysis module obtains the number of sight line movements of the interviewer [formula image] and the number of times that the sight line position corresponds to the screen [formula image];
the sight line position deviation analysis module further calculates the sight line deviation degree of the interviewer [formula image], wherein [formula image] and [formula image] are both coefficients [formula image].
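The deviation-degree formula combines the four counters above with coefficients that are shown only as images. One hypothetical combination, penalizing gazes that fail to follow the speaker's own hand gestures and gazes that leave the screen while listening (`c1`, `c2` are assumed):

```python
def gaze_deviation(limb_moves, gaze_on_hand, gaze_moves, gaze_on_screen,
                   c1=0.5, c2=0.5):
    """Hypothetical sight line deviation degree.

    While the candidate speaks: limb_moves gestures recorded, of which
    gaze_on_hand were accompanied by a gaze at the moving hand.
    While the other party speaks: gaze_moves sight line movements, of
    which gaze_on_screen landed on the screen. Off-target fractions are
    weighted by the assumed coefficients c1 and c2.
    """
    off_hand = 1 - gaze_on_hand / limb_moves if limb_moves else 0.0
    off_screen = 1 - gaze_on_screen / gaze_moves if gaze_moves else 0.0
    return c1 * off_hand + c2 * off_screen
```

Under this reading a candidate whose gaze always tracks their gestures and stays on the screen scores 0, i.e. no deviation.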
5. the internet-based talent online interview data analysis system of claim 1, wherein: the voice recognition module is connected with the voice coherence degree analysis module,
the voice recognition module recognizes the voice in the whole interview process when the interviewer connects the online video interview and judges whether the interviewer or the interviewee is currently speaking;
if the current voice is sent by the interviewer, who speaks in the time period [formula image], the voice coherence degree analysis module counts the number of pauses of the current interviewer and the duration of each pause, and further calculates the total pause time of the interviewer [formula image]; the voice coherence degree analysis module calculates the ratio between the total pause time [formula image] and the time period [formula image], further calculates this ratio for each period of speech of the interviewer in the whole interview process, and calculates the average value of all the ratios [formula image]; the voice coherence degree analysis module prestores ratio-average intervals and the coherence degree evaluation value corresponding to each interval [formula image];
if the current voice is sent by the interviewee, who speaks in the time period [formula image], the voice coherence degree analysis module counts the number of times the voice of the current interviewer occurs; the module prestores voice-occurrence-number intervals and the coherence degree evaluation value corresponding to each interval [formula image];
the voice coherence degree analysis module further calculates the voice coherence degree of the interviewer [formula image], wherein [formula image] and [formula image] are both coefficients [formula image].
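The coherence-degree computation described above (pause-ratio averaging, interval lookups, coefficient combination) can be sketched as follows; the interval tables and the coefficients `c1`, `c2` are placeholders for the prestored values shown only as images:

```python
def lookup(value, intervals):
    """Map a value to a prestored evaluation value via (upper, value) pairs."""
    for upper, ev in intervals:
        if value <= upper:
            return ev
    return intervals[-1][1]

def speech_coherence(segments, interruptions,
                     ratio_intervals, count_intervals, c1=0.5, c2=0.5):
    """Hypothetical voice coherence degree.

    segments: (total_pause_time, speaking_time) per speech segment of the
    candidate; interruptions: how often the candidate's voice occurs while
    the other party is speaking. The pause/speaking ratio is averaged over
    all segments, both quantities are mapped to prestored evaluation
    values, and the results are combined with assumed coefficients.
    """
    ratios = [pause / speak for pause, speak in segments]
    avg_ratio = sum(ratios) / len(ratios)
    return (c1 * lookup(avg_ratio, ratio_intervals)
            + c2 * lookup(interruptions, count_intervals))
```

Fewer and shorter pauses (a small average ratio) and few interruptions map to the higher coherence evaluation values in the tables.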
6. The Internet-based talent online interview data analysis system of claim 1, wherein: the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action disorder degree analysis module, the sight line position deviation analysis module and the voice coherence degree analysis module;
the interview data comprehensive analysis module further acquires the interview background chaos degree [formula image], the limb action disorder degree [formula image], the sight line position deviation degree [formula image] and the voice coherence degree [formula image], comprehensively analyzes the interview data, and further calculates a comprehensive analysis evaluation value [formula image], wherein [formula image], [formula image], [formula image] and [formula image] are coefficients that are all less than 1 and greater than 0.
7. An online talent interview data analysis method based on the Internet is characterized in that: the interview data analysis method comprises the following steps:
s1: analyzing the degree of confusion of the interviewer background;
s2: analyzing the degree of disorder of the body movements of the interviewer;
s3: analyzing the sight line position deviation degree of the interviewer;
s4: analyzing the voice coherence degree of the interviewer;
s5: calculating a comprehensive evaluation of the interviewer according to the interview background chaos degree, the limb action disorder degree, the sight line position deviation degree and the voice coherence degree;
the interview data analysis method further comprises the following steps:
s1-1: the interview background acquisition module is connected with the interview background chaos degree analysis module; the interview background acquisition module acquires an interview background picture of the interviewer when the interviewer connects the online video interview, and the interview background chaos degree analysis module acquires the collected interview background picture through the interview background acquisition module; the interview background chaos degree analysis module analyzes the color distribution in the interview background picture: it extracts the object colors in the interview background picture, draws a color distribution diagram according to the extracted object colors, and further analyzes the distribution areas of the various colors in the color distribution diagram and the shapes of those distribution areas;
the interview background chaos degree analysis module counts the number of areas according to the color distribution areas, further obtains the shape of any color distribution area, and judges its symmetry; taking a vertical line as the symmetry axis of any color distribution area, the module establishes a plurality of first symmetric points on the edge of the color distribution area on either side of the symmetry axis, determines the positions of the second symmetric points on the other side of the symmetry axis according to the symmetry axis and the first symmetric points, and counts the number of second symmetric points that lie on the edge of the color distribution area on the other side, recorded as [formula image];
it marks the second symmetric points that do not lie on the edge of the color distribution area on the other side, further obtains the distance between each marked symmetric point and that edge, and counts the number of marked symmetric points whose distance is less than or equal to a distance threshold value, recorded as [formula image];
the interview background chaos degree analysis module records the number of first symmetric points as [formula image] and calculates the symmetry of the color distribution area [formula image]; the interview background chaos degree analysis module further calculates the chaos degree of the interview background [formula image], wherein N is the number of color distribution regions;
s2-1: the limb action disorder degree analysis module is connected with the limb action acquisition module and the limb position positioning module,
the limb action acquisition module acquires the limb actions of the interviewer when the interviewer connects the online video interview; the limb position positioning module establishes a plurality of mobile reference points on the limbs of the interviewer according to the acquired limb actions, and the limb action disorder degree analysis module analyzes the limb action disorder degree of the interviewer according to the position moving frequency and the position moving rule of the mobile reference points;
the limb action disorder degree analysis module takes the positions of the mobile reference points on all the limbs acquired first as first original point positions and analyzes the moving frequency of all the first original point positions within a certain time period starting from the moment the first original point positions are established; when the moving distance of a first original point is greater than or equal to a first preset value, one movement of the first original point is recorded; the module counts the number of movements of all the first original point positions and further calculates, from the number of movements and the certain time period, the moving frequency of each first original point within the time period [formula image];
the module obtains the maximum value of the moving frequency in the time period; the limb action disorder degree analysis module prestores moving frequency intervals and the disorder evaluation value corresponding to each moving frequency interval, determines the moving frequency interval according to the maximum value of the moving frequency, and looks up the corresponding disorder evaluation value, recorded as [formula image];
when the certain time period ends, the limb action disorder degree analysis module takes the end time of that time period as the start time of the next time period, takes the positions of the mobile reference points acquired at the start time as second original point positions, and analyzes the moving frequency of all the second original point positions within the certain time period starting from the start time [formula image]; and so on until the video interview ends, whereby the moving frequency of the nth original point positions [formula image] is obtained by analysis;
the limb action disorder degree analysis module calculates the disorder degree of the limb actions from the moving frequencies [formula image], wherein [formula image] is the disorder evaluation value corresponding to the maximum moving frequency of the nth original points;
s3-1: the sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module,
the sight line position positioning module locks the sight line position of the interviewer when the interviewer connects the online video interview, and the voice recognition module recognizes the voices of the interviewer and the interviewee;
if the currently recognized voice is sent by the interviewer, the limb positions of the interviewer are obtained through the limb position positioning module, and when the moving distance of a limb position is greater than or equal to a first preset value, one limb action is recorded; the sight line position deviation analysis module obtains the sight line position of the current interviewer through the sight line position positioning module and further obtains the moving limb part; if the moving part of the interviewer is a hand, the module judges whether the sight line position corresponds to the hand position of the interviewer, and records the number of limb movements of the interviewer [formula image] and the number of times that the sight line position corresponds to the hand position of the interviewer [formula image];
if the currently recognized voice is sent out by the interviewee, the sight line position of the current interviewer is obtained through the sight line position positioning module, and the sight line position deviation analysis module obtains the number of sight line movements of the interviewer [formula image] and the number of times that the sight line position corresponds to the screen [formula image];
the sight line position deviation analysis module further calculates the sight line deviation degree of the interviewer [formula image], wherein [formula image] and [formula image] are both coefficients [formula image];
s4-1: the voice recognition module is connected with the voice coherence degree analysis module; when the interviewer connects the online video interview, the voice recognition module recognizes the voice in the whole interview process and judges whether the interviewer or the interviewee is currently speaking;
if the current voice is sent by the interviewer, who speaks in the time period [formula image], the voice coherence degree analysis module counts the number of pauses of the current interviewer and the duration of each pause, and further calculates the total pause time of the interviewer [formula image]; the voice coherence degree analysis module calculates the ratio between the total pause time [formula image] and the time period [formula image], further calculates this ratio for each period of speech of the interviewer in the whole interview process, and calculates the average value of all the ratios [formula image]; the voice coherence degree analysis module prestores ratio-average intervals and the coherence degree evaluation value corresponding to each interval [formula image];
if the current voice is sent by the interviewee, who speaks in the time period [formula image], the voice coherence degree analysis module counts the number of times the voice of the current interviewer occurs; the module prestores voice-occurrence-number intervals and the coherence degree evaluation value corresponding to each interval [formula image];
the voice coherence degree analysis module further calculates the voice coherence degree of the interviewer [formula image], wherein [formula image] and [formula image] are both coefficients [formula image];
s5-1: the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action disorder degree analysis module, the sight line position deviation analysis module and the voice coherence degree analysis module;
the interview data comprehensive analysis module further acquires the interview background chaos degree [formula image], the limb action disorder degree [formula image], the sight line position deviation degree [formula image] and the voice coherence degree [formula image], comprehensively analyzes the interview data, and further calculates a comprehensive analysis evaluation value [formula image], wherein [formula image], [formula image], [formula image] and [formula image] are coefficients [formula image].
CN202110821797.1A 2021-07-21 2021-07-21 Talent online interview data analysis system and method based on Internet Active CN113269541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110821797.1A CN113269541B (en) 2021-07-21 2021-07-21 Talent online interview data analysis system and method based on Internet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110821797.1A CN113269541B (en) 2021-07-21 2021-07-21 Talent online interview data analysis system and method based on Internet

Publications (2)

Publication Number Publication Date
CN113269541A CN113269541A (en) 2021-08-17
CN113269541B true CN113269541B (en) 2021-11-02

Family

ID=77236937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110821797.1A Active CN113269541B (en) 2021-07-21 2021-07-21 Talent online interview data analysis system and method based on Internet

Country Status (1)

Country Link
CN (1) CN113269541B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101848378A (en) * 2010-06-07 2010-09-29 中兴通讯股份有限公司 Domestic video monitoring device, system and method
CN111553364A (en) * 2020-04-28 2020-08-18 支付宝(杭州)信息技术有限公司 Picture processing method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180158027A1 (en) * 2015-11-27 2018-06-07 Prasad Venigalla System and method for evaluating a candidate curriculum vitae intelligent quotient (cviq) score
CN110135800A (en) * 2019-04-23 2019-08-16 南京葡萄诚信息科技有限公司 A kind of artificial intelligence video interview method and system
CN112651714A (en) * 2020-12-25 2021-04-13 北京理工大学深圳研究院 Interview evaluation method and system based on multi-mode information
CN112818741A (en) * 2020-12-29 2021-05-18 南京智能情资创新科技研究院有限公司 Behavior etiquette dimension evaluation method and device for intelligent interview
CN112884326A (en) * 2021-02-23 2021-06-01 无锡爱视智能科技有限责任公司 Video interview evaluation method and device based on multi-modal analysis and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101848378A (en) * 2010-06-07 2010-09-29 中兴通讯股份有限公司 Domestic video monitoring device, system and method
CN111553364A (en) * 2020-04-28 2020-08-18 支付宝(杭州)信息技术有限公司 Picture processing method and device

Also Published As

Publication number Publication date
CN113269541A (en) 2021-08-17

Similar Documents

Publication Publication Date Title
US11783645B2 (en) Multi-camera, multi-sensor panel data extraction system and method
US20180308114A1 (en) Method, device and system for evaluating product recommendation degree
US9131053B1 (en) Method and system for improving call-participant behavior through game mechanics
Hammal et al. Interpersonal coordination of headmotion in distressed couples
Shen et al. Understanding nonverbal communication cues of human personality traits in human-robot interaction
US20230177834A1 (en) Relationship modeling and evaluation based on video data
Sun et al. Towards visual and vocal mimicry recognition in human-human interactions
US20180168498A1 (en) Computer Automated Method and System for Measurement of User Energy, Attitude, and Interpersonal Skills
JP2020113197A (en) Information processing apparatus, information processing method, and information processing program
CN113076770A (en) Intelligent figure portrait terminal based on dialect recognition
CN109697556A (en) Evaluate method, system and the intelligent terminal of effect of meeting
CN113269541B (en) Talent online interview data analysis system and method based on Internet
Dunbar et al. Automated methods to examine nonverbal synchrony in dyads
Mizuno et al. Next-speaker prediction based on non-verbal information in multi-party video conversation
KR101996630B1 (en) Method, system and non-transitory computer-readable recording medium for estimating emotion for advertising contents based on video chat
Ishii et al. Analyzing gaze behavior during turn-taking for estimating empathy skill level
Geenen et al. Visual transcription-A method to analyze the visual and visualize the audible in interaction
Nishimura et al. Speech-driven facial animation by lstm-rnn for communication use
Fang et al. Estimation of cohesion with feature categorization on small scale groups
Sánchez-Ancajima et al. Gesture Phase Segmentation Dataset: An Extension for Development of Gesture Analysis Models.
Panagakis et al. Audiovisual conflict detection in political debates
Gault et al. Continuities and transformations: challenges to capturing information about the'Information Society'
Ghazal et al. Intellimeet: Collaborative mobile framework for automated participation assessment
US12107699B2 (en) Systems and methods for creation and application of interaction analytics
Shiota et al. Leader identification using multimodal information in multi-party conversations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant