CN113269541B - Talent online interview data analysis system and method based on Internet - Google Patents
- Publication number: CN113269541B (application CN202110821797.1A)
- Authority
- CN
- China
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/105—Human resources
- G06Q10/1053—Employment or hiring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
Abstract
The invention discloses an Internet-based talent online interview data analysis system and method, comprising an interview background acquisition module, an interview background chaos degree analysis module, a limb action acquisition module, a limb position positioning module, a limb action chaos degree analysis module, a sight line position positioning module, a sight line position deviation analysis module, a voice recognition module, a voice coherence degree analysis module and an interview data comprehensive analysis module. The beneficial effects are as follows: by analyzing the chaos degree of the interview background, the limb action chaos degree of the interviewee, the sight line position deviation degree of the interviewee and the voice coherence degree, the interviewer's comprehensive evaluation of the interviewee is calculated from these four degrees. This overcomes the difficulty of obtaining body language information in the traditional online interview process and makes the evaluation of the interviewee more comprehensive.
Description
Technical Field
The invention relates to the technical field of the Internet, and in particular to an Internet-based talent online interview data analysis system and method.
Background
At present, online interviews are the mainstream form of campus recruitment, as can be seen from the enrollment announcements of many enterprises. Online interviews are generally conducted by voice or video, with video accounting for the majority of the recruiting information currently issued by companies. Research shows that the expression of information divides into three parts: language accounts for 7%, voice and tone for 38%, and body language for 55%. Body language therefore carries a large share of the information, yet in an online interview it is precisely the body language information that is the most difficult to acquire.
The prior art still has many defects. During an online interview conducted by an enterprise, the interviewer asks questions and the interviewee answers, and the questions are limited to some extent: if a fixed question bank exists, the question information is drawn from that bank. The biggest difference between online and offline interviews is that online, the interviewer cannot intuitively perceive the overall image of the interviewee, whereas face-to-face communication offline allows questions to be more spontaneous and not confined to the scope of a question bank, so the evaluation of the interviewee can be more comprehensive. The question-and-answer form adopted in online interviews therefore cannot achieve the effect of an offline interview and lacks a certain degree of scientific rigor and objectivity, which affects the interviewer's comprehensive evaluation of the interviewee.
Based on the above problems, there is a need for an Internet-based talent online interview data analysis system and method that obtains the interview background of the interviewee and analyzes its chaos degree, and further analyzes the interviewee's limb action chaos degree, sight line position deviation degree and voice coherence degree, so that the interviewer's comprehensive evaluation of the interviewee can be calculated from these four degrees. This overcomes the difficulty of obtaining body language information in the traditional online interview process and makes the evaluation of the interviewee more comprehensive.
Disclosure of Invention
The invention aims to provide a talent online interview data analysis system and method based on the Internet, so as to solve the problems in the background technology.
In order to solve the technical problems, the invention provides the following technical scheme:
An Internet-based talent online interview data analysis system comprises an interview background acquisition module, an interview background chaos degree analysis module, a limb action acquisition module, a limb position positioning module, a limb action chaos degree analysis module, a sight line position positioning module, a sight line position deviation analysis module, a voice recognition module, a voice coherence degree analysis module and an interview data comprehensive analysis module.
The interview background acquisition module is used for acquiring a background picture of the interviewee's interview environment. The interview background chaos degree analysis module obtains the acquired background picture through the interview background acquisition module and analyzes its chaos degree. The limb action acquisition module is used for acquiring the limb actions of the interviewee, and the limb position positioning module is used for positioning the limb positions. The limb action chaos degree analysis module analyzes the chaos degree of the limb actions according to the acquired limb actions and the limb position fixes. The sight line position positioning module locates in real time the position touched by the interviewee's sight line, and the sight line position deviation analysis module analyzes the deviation of the sight line according to real-time changes of that position. The voice recognition module distinguishes the voices of the interviewer and the interviewee, and the voice coherence degree analysis module analyzes the voice coherence degree during the interview. The interview data comprehensive analysis module comprehensively analyzes the interview data according to the interview background chaos degree, the limb action chaos degree, the sight line position deviation degree and the voice coherence degree.
Furthermore, the interview background acquisition module is connected with the interview background chaos degree analysis module. When the interviewee connects to the online video interview, the interview background acquisition module collects a picture of the interviewee's interview background. The interview background chaos degree analysis module obtains the collected picture through the acquisition module, analyzes the color distribution in the picture, extracts the object colors, draws a color distribution diagram from the extracted colors, and further analyzes the distribution areas of the various colors in the diagram and the shapes of those areas.
The interview background chaos degree analysis module counts the number of color distribution areas and further obtains the shape of each area to judge its symmetry. In an online interview the interviewer cannot gain an intuitive impression of the interviewee's overall image, yet in an interview a person's overall image is a fairly critical factor: some jobs place high demands on personal image, the degree of attention the interviewee pays to the interview can be read from it, and dress likewise reflects personality, which, given differing job requirements, also helps determine whether a person is suited to the work. When the overall image cannot be obtained, the chaos degree of the interview background is analyzed instead. An online interview is generally scheduled in advance, so the interview background reflects the interviewee's subjective choice: a messy background suggests that the interviewee does not attach importance to the interview or is not a careful person, and for some meticulous work, carefulness is an essential quality. Therefore the colors in the interview background are extracted, the symmetry of each color distribution area is judged, and a value reflecting the chaos degree of the interview background is obtained.
Furthermore, the interview background chaos degree analysis module takes a vertical line as the symmetry axis of any color distribution area and establishes a plurality of first symmetry points on the edge of the area on one side of the axis. From the symmetry axis and the first symmetry points it determines the positions of the corresponding second symmetry points on the other side of the axis, and counts how many of these second symmetry points fall on the edge of the color distribution area on that side.
Second symmetry points that do not fall on the edge of the area on the other side are marked; the distance between each marked point and the edge on that side is obtained, and the number of marked points whose distance is less than or equal to a distance threshold is counted.
From the number of first symmetry points and these two counts, the module calculates the symmetry of the color distribution area, and then calculates the chaos degree of the interview background, where N is the number of color distribution areas. One color distribution area is analyzed first, and then the whole interview background. Judging the symmetry of a color area amounts to judging the symmetry of an object, and objects in real life are generally symmetrical; the symmetry analysis of the color distribution areas therefore reflects the overall tidiness of the interview background, since an interviewee either tidies the background before the scheduled interview time or selects a tidy background.
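The symmetry-point procedure above can be sketched as follows. This is a hypothetical illustration only: the patent's actual symmetry and chaos formulas appear solely as images in the source, so the score here (fraction of mirrored edge points that land on, or within a tolerance of, the opposite edge) and the averaging over regions are assumptions, and all function and parameter names are invented.

```python
# Hypothetical sketch of the interview-background chaos measure.
# A "region" is a set of integer (x, y) edge points of one color area.

def region_symmetry(region, axis_x, tol=1.0):
    """Fraction of edge points whose mirror across the vertical line
    x = axis_x lies on (or within `tol` of) another edge point."""
    pts = set(region)
    matched = 0
    for (x, y) in region:
        mx = 2 * axis_x - x  # mirror across the symmetry axis
        # direct hit on the far edge, or within the distance threshold
        if (mx, y) in pts or any(abs(px - mx) <= tol and py == y
                                 for (px, py) in pts):
            matched += 1
    return matched / len(region)

def background_chaos(regions, axis_xs, tol=1.0):
    """Average asymmetry over the N color-distribution regions:
    the more asymmetric the regions, the more chaotic the background."""
    syms = [region_symmetry(r, ax, tol) for r, ax in zip(regions, axis_xs)]
    return 1.0 - sum(syms) / len(syms)
```

A perfectly mirror-symmetric region scores 1.0 symmetry and contributes 0 chaos; a region with no mirrored counterparts contributes maximal chaos.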
Further, the limb action chaos degree analysis module is connected with the limb action acquisition module and the limb position positioning module. When the interviewee connects to the online video interview, the limb action acquisition module collects the interviewee's limb actions, the limb position positioning module establishes a plurality of moving reference points on the interviewee's limbs according to the collected actions, and the limb action chaos degree analysis module analyzes the chaos degree of the limb actions according to the position moving frequency and the position moving pattern of the reference points.
Further, the limb action chaos degree analysis module takes the first acquired positions of the moving reference points on all limbs as the first origin positions and analyzes the moving frequency of all first origin positions within a certain time period starting from the moment they are established. When the moving distance of a first origin point is greater than or equal to a first preset value, one movement of that point is recorded; the movements of all first origin positions are counted, and the moving frequency of each first origin point within the period is calculated from the number of movements and the length of the period. The maximum moving frequency within the period is then obtained. The module stores in advance a set of moving frequency intervals and the chaos evaluation value corresponding to each interval; the interval containing the maximum moving frequency is determined and the corresponding chaos evaluation value is recorded.
When the time period ends, the module takes its end moment as the start of the next period and the reference point positions acquired at that moment as the second origin positions, and analyzes the moving frequency of all second origin positions within the following period, and so on, until the video interview ends and the moving frequency of the nth origin positions has been analyzed. The module then calculates the chaos degree of the limb actions from the chaos evaluation values of the periods. Research shows that the expression of information divides into language (7%), voice and tone (38%) and body language (55%); body language therefore carries a large share of the information, yet in an online interview it is precisely the body language information that is the most difficult to acquire. By collecting the limb actions, a plurality of moving reference points are established on the limbs. Since a small jitter generally cannot be regarded as a limb action, a distance threshold is set and the moving distance of each reference point is compared with it to judge whether a limb action has occurred. The moving frequency of the reference points within each time period is counted and its maximum obtained; from the maxima of all periods an average value is calculated and taken as the limb action chaos degree of the interviewee over the whole interview. The larger the moving frequency, the more frequent the interviewee's limb movements within that period.
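The per-period procedure above can be sketched as follows. The interval table mapping a peak movement frequency to a chaos score is purely illustrative (the patent only says such a table is stored in advance), and the threshold value, function names and data layout are assumptions.

```python
# Hypothetical sketch of the limb-action chaos measure.
CHAOS_TABLE = [            # (frequency upper bound in moves/s, chaos score)
    (0.1, 1.0),
    (0.5, 2.0),
    (1.0, 3.0),
    (float("inf"), 4.0),
]

def moves_per_point(positions, threshold):
    """Count one 'move' each time a reference point travels at least
    `threshold` from its previously recorded position."""
    moves, (px, py) = 0, positions[0]
    for (x, y) in positions[1:]:
        if ((x - px) ** 2 + (y - py) ** 2) ** 0.5 >= threshold:
            moves += 1
            px, py = x, y
    return moves

def limb_chaos(periods, period_len, threshold=5.0):
    """For each time period (a list of position tracks, one per reference
    point), the peak movement frequency over all points is mapped to a
    chaos score; the final chaos degree is the mean over all periods."""
    scores = []
    for point_tracks in periods:
        peak = max(moves_per_point(t, threshold) / period_len
                   for t in point_tracks)
        scores.append(next(s for ub, s in CHAOS_TABLE if peak <= ub))
    return sum(scores) / len(scores)
```

For example, a single 10-second period whose one reference point jumps twice by 10 units gives a frequency of 0.2 moves/s and, under this table, a chaos score of 2.0.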
Further, the sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module,
when the interviewee connects to the online video interview, the sight line position positioning module locks the position of the interviewee's sight line, and the voice recognition module distinguishes the voices of the interviewer and the interviewee.
If the currently recognized voice is uttered by the interviewee, the interviewee's limb positions are obtained through the limb position positioning module; when the moving distance of a limb position is greater than or equal to the first preset value, one limb action is recorded. The sight line position deviation analysis module obtains the interviewee's current sight line position through the sight line position positioning module and further obtains the moving limb part: if the currently moving part is a hand, it judges whether the sight line position corresponds to the position of the interviewee's hand, and records the number of the interviewee's limb movements and the number of times the sight line position corresponds to the hand position.
If the currently recognized voice is uttered by the interviewer, the interviewee's current sight line position is obtained through the sight line position positioning module, and the sight line position deviation analysis module records the number of the interviewee's sight line movements and the number of times the sight line position corresponds to the position of the screen.
The sight line position deviation analysis module then calculates the interviewee's sight line deviation degree from these counts using weighting coefficients. During a conversation, watching the other party shows respect and attention. Because some technical posts may require the interviewee to use body language while explaining technical matters, the position touched by the interviewee's sight line is further obtained: a person generally watches the hand when the hand is moving and watches the other party while talking. The current speaker is therefore identified first; if the interviewee is speaking, the limb movements and sight line position are obtained, and if the interviewer is speaking, the interviewee's sight line position is obtained and it is judged whether the interviewee is watching the screen. The interviewee's sight line deviation degree is then analyzed from the collected data.
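The two branches above can be sketched as a pair of ratios. The patent's coefficients and combination formula survive only as an image, so the linear combination, the default weights `a1`/`a2`, and all names here are assumptions.

```python
# Hypothetical sketch of the sight-line deviation degree.

def gaze_deviation(limb_moves, gaze_on_hand,
                   gaze_moves, gaze_on_screen,
                   a1=0.5, a2=0.5):
    """Higher value = the interviewee's gaze deviates more often from the
    expected target: the own moving hand while speaking, and the screen
    while the interviewer is speaking."""
    miss_while_speaking = 1 - gaze_on_hand / limb_moves if limb_moves else 0.0
    miss_while_listening = 1 - gaze_on_screen / gaze_moves if gaze_moves else 0.0
    # weighted combination of the two miss rates
    return a1 * miss_while_speaking + a2 * miss_while_listening
```

With equal weights, an interviewee whose gaze matched the expected target every time scores 0, and one who matched half the time in both branches scores 0.5.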
Further, the voice recognition module is connected with the voice coherence degree analysis module. When the interviewee connects to the online video interview, the voice recognition module recognizes the voice throughout the interview and judges whether the interviewee or the interviewer is currently speaking.
If the current voice is uttered by the interviewee, then within the period in which the interviewee is speaking, the voice coherence degree analysis module counts the number of pauses and the duration of each pause, calculates the interviewee's total pause time, and takes the ratio between the total pause time and the speaking period. The module calculates this ratio for every utterance of the interviewee during the whole interview and obtains the average of all the ratios; a set of ratio-average intervals and the coherence evaluation value corresponding to each interval is stored in advance.
If the current voice is uttered by the interviewer, then within the period in which the interviewer is speaking, the voice coherence degree analysis module counts the number of times the interviewee's voice occurs; a set of occurrence-count intervals and the coherence evaluation value corresponding to each interval is likewise stored in advance.
The voice coherence degree analysis module then calculates the interviewee's voice coherence degree from these evaluation values using weighting coefficients. Interrupting another person's speech during a conversation is generally very impolite, and many jobs require the interviewee to have good communication logic, clear thinking and strong language organization ability: for example, service industries that deal with clients frequently, or research and development work where technical schemes are discussed. The number of pauses within one utterance reflects the interviewee's language expression ability and whether the train of thought is clear; and when the interviewer is identified as currently speaking, it is analyzed whether the interviewee interrupts the interviewer, so that the interviewee can be comprehensively evaluated according to this behavior.
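The pause-ratio and interruption branches can be sketched together as follows. Both interval tables and the weights `b1`/`b2` are illustrative stand-ins for values the patent shows only as images; all names are invented.

```python
# Hypothetical sketch of the voice-coherence degree.
PAUSE_RATIO_TABLE = [   # (mean pause-ratio upper bound, coherence score)
    (0.1, 4.0), (0.3, 3.0), (0.5, 2.0), (float("inf"), 1.0),
]
INTERRUPT_TABLE = [     # (interruptions while interviewer speaks, score)
    (0, 4.0), (2, 3.0), (5, 2.0), (float("inf"), 1.0),
]

def lookup(table, value):
    """Return the score of the first interval containing `value`."""
    return next(score for ub, score in table if value <= ub)

def voice_coherence(pause_ratios, interruptions, b1=0.5, b2=0.5):
    """pause_ratios: total pause time / speaking time for each utterance
    of the interviewee; interruptions: times the interviewee spoke while
    the interviewer was speaking."""
    mean_ratio = sum(pause_ratios) / len(pause_ratios)
    return (b1 * lookup(PAUSE_RATIO_TABLE, mean_ratio)
            + b2 * lookup(INTERRUPT_TABLE, interruptions))
```

A fluent interviewee (tiny pause ratios, no interruptions) lands in the top interval of both tables and gets the maximum coherence score.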
Furthermore, the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action chaos degree analysis module, the sight line position deviation analysis module and the voice coherence degree analysis module.
The interview data comprehensive analysis module further obtains the interview background chaos degree, the limb action chaos degree, the sight line position deviation degree and the voice coherence degree, and through comprehensive analysis of the interview data calculates a comprehensive analysis evaluation value from the four degrees using four weighting coefficients.
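The final combination can be sketched as a weighted sum. The patent states only that four weighting coefficients combine the four degrees; the sign convention (chaos and deviation lower the score, coherence raises it), the example weights, and the names below are assumptions.

```python
# Hypothetical sketch of the comprehensive analysis evaluation value.

def comprehensive_evaluation(background_chaos, limb_chaos,
                             gaze_deviation, voice_coherence,
                             w=(0.25, 0.25, 0.25, 0.25)):
    """Lower chaos/deviation and higher coherence should yield a better
    evaluation, so the first three degrees enter negatively."""
    w1, w2, w3, w4 = w
    return (w4 * voice_coherence
            - w1 * background_chaos
            - w2 * limb_chaos
            - w3 * gaze_deviation)
```

A candidate with a perfectly tidy background, still posture, steady gaze, and maximal coherence 4.0 would score 1.0 under these example weights.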
Further, an Internet-based talent online interview data analysis method comprises the following steps:
S1: analyzing the chaos degree of the interviewee's interview background;
S2: analyzing the chaos degree of the interviewee's limb actions;
S3: analyzing the interviewee's sight line position deviation degree;
S4: analyzing the interviewee's voice coherence degree;
S5: calculating the comprehensive evaluation of the interviewee according to the interview background chaos degree, the limb action chaos degree, the sight line position deviation degree and the voice coherence degree.
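The S1–S5 flow can be orchestrated as below. The class, its method names, and the stubbed return values are all invented for illustration; real implementations would come from the analysis modules described above, and the weighted combination in S5 follows the same assumed sign convention as the evaluation sketch.

```python
# Minimal orchestration sketch of steps S1-S5 (all analyses stubbed).

class InterviewAnalyzer:
    def background_chaos(self, frame):       # S1 (stub)
        return 0.2
    def limb_chaos(self, video):             # S2 (stub)
        return 0.3
    def gaze_deviation(self, video, audio):  # S3 (stub)
        return 0.1
    def voice_coherence(self, audio):        # S4 (stub)
        return 3.5

    def evaluate(self, frame, video, audio, w=(0.25, 0.25, 0.25, 0.25)):
        # S5: combine the four degrees into one evaluation value
        b = self.background_chaos(frame)
        l = self.limb_chaos(video)
        g = self.gaze_deviation(video, audio)
        v = self.voice_coherence(audio)
        w1, w2, w3, w4 = w
        return w4 * v - w1 * b - w2 * l - w3 * g
```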
Further, the interview data analysis method further comprises the following steps:
S1-1: the interview background acquisition module is connected with the interview background chaos degree analysis module. When the interviewee connects to the online video interview, the interview background acquisition module collects a picture of the interviewee's interview background; the interview background chaos degree analysis module obtains the collected picture through the acquisition module, analyzes the color distribution in the picture, extracts the object colors, draws a color distribution diagram from the extracted colors, and further analyzes the distribution areas of the various colors in the diagram and the shapes of those areas.
The interview background chaos degree analysis module counts the number of color distribution areas, further obtains the shape of each area and judges its symmetry. It takes a vertical line as the symmetry axis of any color distribution area and establishes a plurality of first symmetry points on the edge of the area on one side of the axis; from the symmetry axis and the first symmetry points it determines the positions of the corresponding second symmetry points on the other side of the axis, and counts how many of these second symmetry points fall on the edge of the color distribution area on that side.
Second symmetry points that do not fall on the edge of the area on the other side are marked; the distance between each marked point and the edge on that side is obtained, and the number of marked points whose distance is less than or equal to the distance threshold is counted.
From the number of first symmetry points and these two counts, the module calculates the symmetry of the color distribution area, and then calculates the chaos degree of the interview background, where N is the number of color distribution areas;
s2-1: the limb action disorder degree analysis module is connected with the limb action acquisition module and the limb position positioning module,
the limb action acquisition module acquires the limb actions of the interviewer when the interviewer connects the online video interview, the limb position positioning module establishes a plurality of mobile reference points on the limbs of the interviewer according to the acquired limb actions, the limb action disorder degree analysis module analyzes the limb action disorder degree of the interviewer according to the position moving frequency and the position moving rule of the mobile reference points,
the limb action disorder degree analysis module is used for acquiring the moving reference points on all the limbs firstlyThe position is taken as a first original point position, the moving frequency of all the first original point positions is analyzed within a certain time period from the moment of establishing the first original point position, when the moving distance of the first original point is larger than or equal to a first preset value, the first original point is recorded to move once, the moving times of all the first original point positions are counted, and the moving frequency of each first original point within the time period is further calculated according to the moving times and the certain time periodAcquiring the maximum value of the moving frequency in the time period, wherein the chaos degree analysis module of the limb action prestores a chaos evaluation value corresponding to the moving frequency interval and the moving frequency interval, and determines a moving frequency slave interval according to the maximum value of the moving frequency and finds out the corresponding chaos evaluation value to be recorded as the chaos evaluation value,
and when the time period ends, the limb action disorder degree analysis module takes the end time of that period as the start time of the next period, takes the moving reference point positions acquired at that start time as second original point positions, and analyzes the moving frequency of all second original point positions within a certain time period from that start time; the above steps are repeated until the video interview ends, yielding the moving frequencies up to the nth original point positions, from which the limb action disorder degree analysis module calculates the disorder degree of the limb actions, using for each period the disorder evaluation value corresponding to the maximum moving frequency of that period's original points;
s3-1: the sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module,
the sight line position positioning module locks the sight line position of the interviewer when the interviewer joins the online video interview, and the voice recognition module recognizes the voices of the interviewer and the interviewee,
if the currently recognized voice is uttered by the interviewer, the limb position of the interviewer is obtained through the limb position positioning module; when the moving distance of a limb position is greater than or equal to a first preset value, one limb action is recorded; the sight line position deviation analysis module obtains the current sight line position of the interviewer through the sight line position positioning module and further obtains the limb moving position; if the currently moving position is a hand, it judges whether the sight line position corresponds to the hand position of the interviewer, and records the number of limb movements of the interviewer and the number of times the sight line position corresponds to the hand position of the interviewer;
if the currently recognized voice is uttered by the interviewee, the current sight line position of the interviewer is obtained through the sight line position positioning module, and the sight line position deviation analysis module records the number of sight line movements of the interviewer and the number of times the sight line position corresponds to the screen position;
the sight line position deviation analysis module further calculates the sight line deviation degree of the interviewer, wherein the remaining quantities in the formula are weighting coefficients;
s4-1: the voice recognition module is connected with the voice continuity analysis module, when the interviewer is connected with the online video interview, the voice recognition module recognizes the voice in the whole interview process, judges whether the interviewer speaks or the interviewer speaks at present,
if the current voice is uttered by the interviewer, the interviewer is speaking during the time period; the voice coherence degree analysis module counts the number of pauses of the current interviewer and the duration of each pause, and further calculates the interviewer's total pause time; the voice coherence degree analysis module analyzes the voice coherence degree from the total pause time and the time period, further calculates, for each speech in the whole interview process, the ratio of the total pause time to the interviewer's speaking time, and calculates the average of all these ratios; the voice coherence degree analysis module prestores ratio-average intervals and the coherence evaluation value corresponding to each ratio-average interval,
If the current voice is sent by the interviewer, the interviewer speaks in the time periodThe voice continuity analysis module is used for counting the voice occurrence times of the current interviewer, and the voice continuity analysis module is prestored with a voice occurrence time interval and a continuity evaluation value corresponding to the voice occurrence time interval,
The voice coherence degree analysis module further calculates the voice coherence degree of the interviewer, wherein the remaining quantities in the formula are weighting coefficients;
s5-1: the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action chaos degree analysis module, the sight line position deviation degree analysis module and the voice continuity degree analysis module,
the interview data comprehensive analysis module further acquires the interview background disorder degree, the limb action disorder degree, the sight line position deviation degree and the voice coherence degree, comprehensively analyzes the interview data, and further calculates a comprehensive analysis evaluation value, wherein the remaining quantities in the formula are weighting coefficients.
Compared with the prior art, the invention has the following beneficial effects: the invention acquires the interview background of the interviewer and analyzes its disorder degree, and further analyzes the interviewer's limb action disorder degree, sight line position deviation degree and voice coherence degree, so that a comprehensive evaluation of the interviewer is calculated from these four measures; this addresses the difficulty of obtaining body language information in a traditional online interview and makes the evaluation of the interviewer more comprehensive.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a block schematic diagram of an Internet-based talent online interview data analysis system of the present invention;
FIG. 2 is a schematic diagram of the steps of the online talent interview data analysis method based on the Internet.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-2, the present invention provides a technical solution:
an internet-based talent online interview data analysis system comprises an interview background acquisition module, an interview background chaos degree analysis module, a limb action acquisition module, a limb position positioning module, a limb action chaos degree analysis module, a sight line position positioning module, a sight line position deviation analysis module, a voice recognition module, a voice continuity degree analysis module and an interview data comprehensive analysis module,
the interview background acquisition module is used for acquiring a background picture of the interviewer's interview environment; the interview background disorder degree analysis module acquires the collected background picture through the interview background acquisition module and analyzes the disorder degree of the background picture; the limb action acquisition module is used for acquiring the limb actions of the interviewer; the limb position positioning module is used for positioning the limb positions; the limb action disorder degree analysis module is used for analyzing the disorder degree of the limb actions according to the acquired limb actions and the positioned limb positions; the sight line position positioning module is used for positioning in real time the position touched by the interviewer's sight line; the sight line position deviation analysis module is used for analyzing the positional deviation of the sight line according to real-time changes of the positioned sight line position; the voice recognition module is used for distinguishing the voices of the interviewee and the interviewer; the voice coherence degree analysis module is used for analyzing the voice coherence degree during the interview; and the interview data comprehensive analysis module comprehensively analyzes the interview data according to the interview background disorder degree, the limb action disorder degree, the sight line position deviation degree and the voice coherence degree.
The interview background acquisition module is connected with the interview background chaos degree analysis module, the interview background acquisition module acquires an interview background picture of an interviewer when the interviewer connects an on-line video interview, the interview background chaos degree analysis module acquires the acquired interview background picture through the interview background acquisition module, the interview background chaos degree analysis module analyzes color distribution in the interview background picture, the interview background chaos degree analysis module extracts object colors in the interview background picture, draws a color distribution diagram according to the extracted object colors, and further analyzes distribution areas of various colors in the color distribution diagram and shapes of the distribution areas,
the interview background chaos degree analysis module counts the number of the areas according to the color distribution areas, further obtains the shape of any color distribution area, and judges the symmetry of any color distribution area.
The interview background disorder degree analysis module takes a vertical line as the symmetry axis of any color distribution region, establishes a plurality of first symmetric points on the edge of the color distribution region on either side of the symmetry axis, determines the positions of the corresponding second symmetric points on the other side of the symmetry axis from the symmetry axis and the first symmetric points, and counts the number of second symmetric points that fall on the edge of the color distribution region on the other side,
marks those second symmetric points that do not fall on the edge of the color distribution region on the other side, further obtains the distance between each marked symmetric point and that edge, and counts the number of marked symmetric points whose distance is less than or equal to a distance threshold,
The interview background disorder degree analysis module records the number of the first symmetric points, calculates the symmetry of any color distribution region from these counts, and further calculates the disorder degree of the interview background, where N is the number of color distribution regions.
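The symmetry check above lends itself to a compact sketch. Since the formula images of the original patent do not survive in this text, the expressions below (mirror-match ratio per region, mean asymmetry across the N regions) are one plausible reading rather than the claimed equations:

```python
def region_symmetry(n_first, n_on_edge, n_near_edge):
    """Symmetry of one color region.

    n_first    -- first symmetric points placed on one edge
    n_on_edge  -- mirrored points landing exactly on the opposite edge
    n_near_edge-- mirrored points within the distance threshold of it
    Assumed reading: fraction of mirrored points that (nearly) match.
    """
    if n_first == 0:
        return 0.0
    return (n_on_edge + n_near_edge) / n_first

def background_disorder(symmetries):
    """Backdrop disorder: mean asymmetry over the N color regions."""
    if not symmetries:
        return 0.0
    return sum(1.0 - s for s in symmetries) / len(symmetries)
```

With 10 first symmetric points, 6 exact mirror hits and 2 near hits, a region scores symmetry 0.8; two regions of symmetry 0.8 and 0.6 give a backdrop disorder of 0.3.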
The limb action disorder degree analysis module is connected with the limb action acquisition module and the limb position positioning module, the limb action acquisition module acquires the limb action of an interviewer when the interviewer is put on line for video interviewing, the limb position positioning module establishes a plurality of mobile reference points on the limb of the interviewer according to the acquired limb action, and the limb action disorder degree analysis module analyzes the limb action disorder degree of the interviewer according to the position moving frequency and the position moving rule of the mobile reference points.
The limb action disorder degree analysis module takes the moving reference point positions on all limbs acquired first as first original point positions, and analyzes the moving frequency of all first original point positions within a certain time period starting from the moment the first original point positions are established; when the moving distance of a first original point is greater than or equal to a first preset value, one movement of that first original point is recorded; the moving times of all first original point positions are counted, and the moving frequency of each first original point within the time period is further calculated from the moving times and the length of the time period; the maximum moving frequency in the time period is then obtained; the limb action disorder degree analysis module prestores moving frequency intervals and the disorder evaluation value corresponding to each interval, determines the interval to which the maximum moving frequency belongs, and records the corresponding disorder evaluation value,
and when the time period ends, the limb action disorder degree analysis module takes the end time of that period as the start time of the next period, takes the moving reference point positions acquired at that start time as second original point positions, and analyzes the moving frequency of all second original point positions within a certain time period from that start time; the above steps are repeated until the video interview ends, yielding the moving frequencies up to the nth original point positions, from which the limb action disorder degree analysis module calculates the disorder degree of the limb actions, using for each period the disorder evaluation value corresponding to the maximum moving frequency of that period's original points.
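The windowed movement-frequency procedure can be sketched as follows; the pre-stored interval table and the way the per-window evaluation values are combined are not recoverable from this text, so the lookup table and the final averaging are illustrative assumptions:

```python
def move_frequency(move_counts, window_seconds):
    """Moving frequency of each reference point within one time window,
    given the counted number of movements per point."""
    return [c / window_seconds for c in move_counts]

def limb_disorder(windows, window_seconds, interval_table):
    """For each window, take the maximum moving frequency, map it to a
    pre-stored disorder evaluation value via the interval table, then
    average over all n windows (the averaging is an assumption)."""
    values = []
    for counts in windows:
        fmax = max(move_frequency(counts, window_seconds))
        for (lo, hi), value in interval_table:
            if lo <= fmax < hi:
                values.append(value)
                break
    return sum(values) / len(values) if values else 0.0
```

For example, with a 10-second window and an illustrative table mapping frequency intervals [0, 0.5), [0.5, 1.0), [1.0, ∞) to the values 1, 2, 3, two windows whose per-point movement counts are [3, 6] and [12, 2] yield window maxima 0.6 and 1.2 and an averaged disorder of 2.5.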
The sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module,
the sight line position positioning module locks the sight line position of the interviewer when the interviewer joins the online video interview, and the voice recognition module recognizes the voices of the interviewer and the interviewee,
if the currently recognized voice is uttered by the interviewer, the limb position of the interviewer is obtained through the limb position positioning module; when the moving distance of a limb position is greater than or equal to a first preset value, one limb action is recorded; the sight line position deviation analysis module obtains the current sight line position of the interviewer through the sight line position positioning module and further obtains the limb moving position; if the currently moving position is a hand, it judges whether the sight line position corresponds to the hand position of the interviewer, and records the number of limb movements of the interviewer and the number of times the sight line position corresponds to the hand position of the interviewer;
if the currently recognized voice is uttered by the interviewee, the current sight line position of the interviewer is obtained through the sight line position positioning module, and the sight line position deviation analysis module records the number of sight line movements of the interviewer and the number of times the sight line position corresponds to the screen position;
the sight line position deviation analysis module further calculates the sight line deviation degree of the interviewer, wherein the remaining quantities in the formula are weighting coefficients.
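One way to read the gaze-deviation computation, with the formula omitted from this text, is a weighted share of moments where the interviewer's gaze failed to track the hand or the screen; the coefficients `a` and `b` and the complement form are assumptions:

```python
def gaze_deviation(limb_moves, gaze_on_hand, gaze_moves, gaze_on_screen,
                   a=0.5, b=0.5):
    """Assumed reading of the sight-line deviation degree.

    limb_moves     -- recorded limb movements while a party spoke
    gaze_on_hand   -- of those, times the gaze tracked the hand
    gaze_moves     -- recorded gaze movements in the other branch
    gaze_on_screen -- of those, times the gaze stayed on the screen
    a, b           -- weighting coefficients (assumed to sum to 1)
    """
    miss_hand = 1.0 - gaze_on_hand / limb_moves if limb_moves else 0.0
    miss_screen = 1.0 - gaze_on_screen / gaze_moves if gaze_moves else 0.0
    return a * miss_hand + b * miss_screen
```

With 10 limb movements of which 8 were tracked, and 20 gaze movements of which 15 stayed on screen, the deviation comes out to 0.5·0.2 + 0.5·0.25 = 0.225.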
the voice recognition module is connected with the voice continuity analysis module, when the interviewer is connected with the online video interview, the voice recognition module recognizes the voice in the whole interview process, judges whether the interviewer speaks or the interviewer speaks at present,
if the current voice is uttered by the interviewer, the interviewer is speaking during the time period; the voice coherence degree analysis module counts the number of pauses of the current interviewer and the duration of each pause, and further calculates the interviewer's total pause time; the voice coherence degree analysis module analyzes the voice coherence degree from the total pause time and the time period, further calculates, for each speech in the whole interview process, the ratio of the total pause time to the interviewer's speaking time, and calculates the average of all these ratios; the voice coherence degree analysis module prestores ratio-average intervals and the coherence evaluation value corresponding to each ratio-average interval,
If the current voice is sent by the interviewer, the interviewer speaks in the time periodThe voice continuity analysis module is used for counting the voice occurrence times of the current interviewer, and the voice continuity analysis module is prestored with a voice occurrence time interval and a continuity evaluation value corresponding to the voice occurrence time interval,
The voice coherence degree analysis module further calculates the voice coherence degree of the interviewer, wherein the remaining quantities in the formula are weighting coefficients.
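A sketch of the coherence scoring follows; both pre-stored lookup tables and the weighting of the two branches are omitted in this text, so the concrete intervals, evaluation values and coefficients below are placeholders:

```python
def lookup(x, interval_table):
    """Return the pre-stored evaluation value whose interval contains x."""
    for (lo, hi), value in interval_table:
        if lo <= x < hi:
            return value
    return 0.0

def speech_coherence(pause_ratios, interruptions,
                     ratio_table, interruption_table, c=0.5, d=0.5):
    """Assumed reading: average pause/speaking ratio while the candidate
    speaks, plus the candidate's interjection count while the other party
    speaks, each mapped through a pre-stored table and blended with
    coefficients c and d (assumed to sum to 1)."""
    if pause_ratios:
        mean_ratio = sum(pause_ratios) / len(pause_ratios)
    else:
        mean_ratio = 0.0
    return (c * lookup(mean_ratio, ratio_table)
            + d * lookup(interruptions, interruption_table))
```

With illustrative tables where a mean pause ratio in [0.1, 0.3) scores 3 and 2–4 interjections score 3, pause ratios of 0.12 and 0.18 with 3 interjections blend to a coherence of 3.0.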
the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action chaos degree analysis module, the sight line position deviation degree analysis module and the voice continuity degree analysis module,
the interview data comprehensive analysis module further acquires the interview background disorder degree, the limb action disorder degree, the sight line position deviation degree and the voice coherence degree, comprehensively analyzes the interview data, and further calculates a comprehensive analysis evaluation value, wherein the remaining quantities in the formula are weighting coefficients.
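The final combination can be sketched as a weighted blend; the patent's coefficients and sign convention are not recoverable from this text, so treating the three disorder/deviation measures as penalties and coherence as a bonus is an assumption:

```python
def composite_evaluation(background_disorder, limb_disorder,
                         gaze_deviation, speech_coherence,
                         weights=(0.25, 0.25, 0.25, 0.25)):
    """Assumed combination of the four measures into one evaluation
    value: coherence raises the score, the three disorder/deviation
    measures lower it; the four weights are assumed to sum to 1."""
    w1, w2, w3, w4 = weights
    penalty = (w1 * background_disorder
               + w2 * limb_disorder
               + w3 * gaze_deviation)
    return w4 * speech_coherence - penalty
```

With equal weights, inputs (0.2, 0.4, 0.3) and a coherence of 0.8 yield 0.25·0.8 − 0.25·0.9 = −0.025, i.e. the disorder terms slightly outweigh the coherence bonus.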
an online interview data analysis method based on Internet for talents, comprising the following steps:
s1: analyzing the degree of confusion of the interviewer background;
s2: analyzing the degree of disorder of the body movements of the interviewer;
s3: analyzing the visual line position deviation degree of the interviewer;
s4: analyzing the voice coherence degree of the interviewer;
s5: and calculating the comprehensive evaluation of the interviewer according to the background disorder degree, the limb action disorder degree, the sight line position deviation degree and the voice continuity degree of the interview.
The interview data analysis method further comprises the following steps:
s1-1: the interview background acquisition module is connected with the interview background chaos degree analysis module, the interview background acquisition module acquires an interview background picture of an interviewer when the interviewer connects an on-line video interview, the interview background chaos degree analysis module acquires the acquired interview background picture through the interview background acquisition module, the interview background chaos degree analysis module analyzes color distribution in the interview background picture, the interview background chaos degree analysis module extracts object colors in the interview background picture, draws a color distribution diagram according to the extracted object colors, and further analyzes distribution areas of various colors in the color distribution diagram and shapes of the distribution areas,
the interview background disorder degree analysis module counts the number of regions from the color distribution regions, further obtains the shape of any color distribution region, and judges its symmetry; it takes a vertical line as the symmetry axis of any color distribution region, establishes a plurality of first symmetric points on the edge of the color distribution region on either side of the symmetry axis, determines the positions of the corresponding second symmetric points on the other side of the symmetry axis from the symmetry axis and the first symmetric points, and counts the number of second symmetric points that fall on the edge of the color distribution region on the other side,
marks those second symmetric points that do not fall on the edge of the color distribution region on the other side, further obtains the distance between each marked symmetric point and that edge, and counts the number of marked symmetric points whose distance is less than or equal to a distance threshold,
The interview background disorder degree analysis module records the number of the first symmetric points, calculates the symmetry of any color distribution region from these counts, and further calculates the disorder degree of the interview background, wherein N is the number of color distribution regions;
s2-1: the limb action disorder degree analysis module is connected with the limb action acquisition module and the limb position positioning module,
the limb action acquisition module acquires the limb actions of the interviewer when the interviewer connects the online video interview, the limb position positioning module establishes a plurality of mobile reference points on the limbs of the interviewer according to the acquired limb actions, the limb action disorder degree analysis module analyzes the limb action disorder degree of the interviewer according to the position moving frequency and the position moving rule of the mobile reference points,
the limb action disorder degree analysis module takes the moving reference point positions on all limbs acquired first as first original point positions, and analyzes the moving frequency of all first original point positions within a certain time period starting from the moment the first original point positions are established; when the moving distance of a first original point is greater than or equal to a first preset value, one movement of that first original point is recorded; the moving times of all first original point positions are counted, and the moving frequency of each first original point within the time period is further calculated from the moving times and the length of the time period; the maximum moving frequency in the time period is then obtained; the limb action disorder degree analysis module prestores moving frequency intervals and the disorder evaluation value corresponding to each interval, determines the interval to which the maximum moving frequency belongs, and records the corresponding disorder evaluation value,
when the time period ends, the limb action disorder degree analysis module takes the end time of that period as the start time of the next period, takes the moving reference point positions acquired at that start time as second original point positions, and analyzes the moving frequency of all second original point positions within a certain time period from that start time; the above steps are repeated until the video interview ends, yielding the moving frequencies up to the nth original point positions, from which the limb action disorder degree analysis module calculates the disorder degree of the limb actions, using for each period the disorder evaluation value corresponding to the maximum moving frequency of that period's original points;
s3-1: the sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module,
the sight line position positioning module locks the sight line position of the interviewer when the interviewer joins the online video interview, and the voice recognition module recognizes the voices of the interviewer and the interviewee,
if the currently recognized voice is uttered by the interviewer, the limb position of the interviewer is obtained through the limb position positioning module; when the moving distance of a limb position is greater than or equal to a first preset value, one limb action is recorded; the sight line position deviation analysis module obtains the current sight line position of the interviewer through the sight line position positioning module and further obtains the limb moving position; if the currently moving position is a hand, it judges whether the sight line position corresponds to the hand position of the interviewer, and records the number of limb movements of the interviewer and the number of times the sight line position corresponds to the hand position of the interviewer;
if the currently recognized voice is uttered by the interviewee, the current sight line position of the interviewer is obtained through the sight line position positioning module, and the sight line position deviation analysis module records the number of sight line movements of the interviewer and the number of times the sight line position corresponds to the screen position;
the sight line position deviation analysis module further calculates the sight line deviation degree of the interviewer, wherein the remaining quantities in the formula are weighting coefficients;
s4-1: the voice recognition module is connected with the voice continuity analysis module, when the interviewer is connected with the online video interview, the voice recognition module recognizes the voice in the whole interview process, judges whether the interviewer speaks or the interviewer speaks at present,
if the current voice is uttered by the interviewer, the interviewer is speaking during the time period; the voice coherence degree analysis module counts the number of pauses of the current interviewer and the duration of each pause, and further calculates the interviewer's total pause time; the voice coherence degree analysis module analyzes the voice coherence degree from the total pause time and the time period, further calculates, for each speech in the whole interview process, the ratio of the total pause time to the interviewer's speaking time, and calculates the average of all these ratios; the voice coherence degree analysis module prestores ratio-average intervals and the coherence evaluation value corresponding to each ratio-average interval,
If the current voice is sent by the interviewer, the interviewer speaks in the time periodThe voice continuity analysis module is used for counting the voice occurrence times of the current interviewer, and the voice continuity analysis module is prestored with a voice occurrence time interval and a continuity evaluation value corresponding to the voice occurrence time interval,
The voice coherence degree analysis module further calculates the voice coherence degree of the interviewer, wherein the remaining quantities in the formula are weighting coefficients;
s5-1: the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action chaos degree analysis module, the sight line position deviation degree analysis module and the voice continuity degree analysis module,
the interview data comprehensive analysis module further acquires the interview background disorder degree, the limb action disorder degree, the sight line position deviation degree and the voice coherence degree, comprehensively analyzes the interview data, and further calculates a comprehensive analysis evaluation value, wherein the remaining quantities in the formula are weighting coefficients.
it is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (7)
1. Talent online interview data analysis system based on internet, its characterized in that: comprises an interview background acquisition module, an interview background chaos degree analysis module, a limb action acquisition module, a limb position positioning module, a limb action chaos degree analysis module, a sight line position positioning module, a sight line position deviation analysis module, a voice recognition module, a voice coherence degree analysis module and an interview data comprehensive analysis module,
the interview background acquisition module is used for acquiring background pictures of the interviewer's interview environment; the interview background disorder degree analysis module acquires the collected background pictures through the interview background acquisition module and analyzes the disorder degree of the background pictures; the limb action acquisition module is used for acquiring the limb actions of the interviewer; the limb position positioning module is used for positioning the limb positions; the limb action disorder degree analysis module is used for analyzing the disorder degree of the limb actions according to the acquired limb actions and the positioned limb positions; the sight line position positioning module is used for positioning in real time the position touched by the interviewer's sight line; the sight line position deviation analysis module is used for analyzing the positional deviation of the sight line according to real-time changes of the positioned sight line position; the voice recognition module is used for distinguishing the voices of the interviewer and the interviewee; the voice coherence degree analysis module is used for analyzing the voice coherence degree during the interview; and the interview data comprehensive analysis module comprehensively analyzes the interview data according to the interview background disorder degree, the limb action disorder degree, the sight line position deviation degree and the voice coherence degree;
the interview background acquisition module is connected with the interview background chaos degree analysis module,
the interview background acquisition module acquires an interview background picture of the interviewer when the interviewer joins the online video interview; the interview background chaos degree analysis module obtains the acquired interview background picture from the interview background acquisition module and analyzes the color distribution in it: the module extracts the object colors in the interview background picture, draws a color distribution diagram from the extracted colors, and further analyzes the distribution areas of the various colors in the diagram and the shapes of those areas,
the interview background chaos degree analysis module counts the number of color distribution areas, obtains the shape of each color distribution area, and judges its symmetry;
the interview background chaos degree analysis module takes a vertical line as the symmetry axis of a color distribution area, establishes a plurality of first symmetric points on the edge of the area on one side of the axis, determines the positions of the corresponding second symmetric points on the other side of the axis from the axis and the first symmetric points, and counts how many second symmetric points lie on the edge of the area on the other side;
the second symmetric points that do not lie on that edge are marked, the distance from each marked point to the edge on the other side is obtained, and the number of marked points whose distance is less than or equal to a distance threshold is counted;
the interview background chaos degree analysis module records the number of first symmetric points, calculates the symmetry of each color distribution area from these counts, and further calculates the chaos degree of the interview background, wherein N is the number of color distribution areas.
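The symmetry and chaos formulas in the source are images that do not survive in this text. A minimal sketch of one plausible reading — symmetry as the fraction of mirrored points landing on (or within the distance threshold of) the opposite edge, and chaos as the mean asymmetry over the N areas — where the function names, the ratio form, and the aggregation are all assumptions:

```python
def region_symmetry(n_on_edge, n_near_edge, n_first):
    """Symmetry of one color distribution area.

    n_on_edge:   second symmetric points that land on the opposite edge
    n_near_edge: marked points within the distance threshold of that edge
    n_first:     first symmetric points established on the near side
    The ratio form is an assumption; the patent's formula image is omitted.
    """
    return (n_on_edge + n_near_edge) / n_first


def background_chaos(symmetries):
    """Chaos degree over the N areas, assumed to grow as symmetry falls."""
    return sum(1.0 - s for s in symmetries) / len(symmetries)
```

Under this reading, a background whose every mirrored point lands on the opposite edge is perfectly symmetric and contributes zero chaos.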
2. The internet-based talent online interview data analysis system of claim 1, wherein: the limb action disorder degree analysis module is connected with the limb action acquisition module and the limb position positioning module,
the limb action acquisition module acquires the limb actions of the interviewer when the interviewer joins the online video interview; the limb position positioning module establishes a plurality of moving reference points on the interviewer's limbs according to the acquired limb actions; and the limb action disorder degree analysis module analyzes the disorder degree of the interviewer's limb actions according to the position moving frequency and position moving pattern of the moving reference points.
3. The internet-based talent online interview data analysis system of claim 2, wherein: the limb action disorder degree analysis module takes the first-acquired moving reference point positions on all limbs as first origin point positions and, starting from the moment the first origin points are established, analyzes the moving frequency of all first origin point positions within a certain time period; whenever the moving distance of a first origin point is greater than or equal to a first preset value, one movement of that point is recorded; the module counts the number of movements of all first origin point positions and, from the number of movements and the length of the period, calculates the moving frequency of each first origin point within the period, then obtains the maximum moving frequency within the period; the limb action disorder degree analysis module pre-stores moving frequency intervals and a disorder evaluation value corresponding to each interval, determines the interval to which the maximum moving frequency belongs, and looks up and records the corresponding disorder evaluation value,
when the time period ends, the limb action disorder degree analysis module takes its end time as the start time of the next period and the moving reference point positions acquired at that moment as second origin point positions, and analyzes the moving frequency of all second origin point positions within the next period, and so on until the video interview ends, obtaining the moving frequencies up to the nth origin point positions; the limb action disorder degree analysis module then calculates the disorder degree of the limb actions from these moving frequencies, using the disorder evaluation value corresponding to the maximum moving frequency of each of the n origin point sets.
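The frequency and disorder formulas are likewise omitted images. A sketch under stated assumptions — frequency as movements per window length, a hypothetical pre-stored interval table `eval_lookup`, and the per-window evaluation values averaged (the patent's actual aggregation is unknown):

```python
def move_frequency(move_counts, period):
    """Moving frequency of each reference point: movements / window length."""
    return [c / period for c in move_counts]


def eval_lookup(freq):
    """Hypothetical pre-stored mapping from a frequency interval to its
    disorder evaluation value (the real intervals are not in the text)."""
    if freq < 0.5:
        return 1.0
    if freq < 1.0:
        return 2.0
    return 3.0


def limb_disorder(window_max_freqs, lookup=eval_lookup):
    """Disorder degree from the per-window maximum moving frequencies.

    Averaging the looked-up evaluation values is an assumption."""
    vals = [lookup(f) for f in window_max_freqs]
    return sum(vals) / len(vals)
```

Here each entry of `window_max_freqs` would be the maximum frequency observed in one of the n successive analysis windows described above.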
4. The internet-based talent online interview data analysis system of claim 1, wherein: the sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module,
the sight line position positioning module locks the interviewer's sight line position when the interviewer joins the online video interview, and the voice recognition module distinguishes the voices of the two parties to the interview,
if the currently recognized voice is uttered by the interviewer, the interviewer's limb positions are obtained through the limb position positioning module; when the moving distance of a limb position is greater than or equal to a first preset value, one limb action is recorded; the sight line position deviation analysis module obtains the interviewer's current sight line position through the sight line position positioning module and further obtains the moving limb part; if the moving part is a hand, it judges whether the sight line position corresponds to the interviewer's hand position, and records the interviewer's number of limb movements and the number of times the sight line position corresponds to the hand position;
if the currently recognized voice is uttered by the examiner, the interviewer's current sight line position is obtained through the sight line position positioning module, and the sight line position deviation analysis module obtains the interviewer's number of sight line movements and the number of times the sight line position corresponds to the on-screen position;
the sight line position deviation analysis module further calculates the interviewer's sight line deviation degree from these counts.
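The deviation formula is an omitted image that combines the four counts above with unnamed coefficients. A sketch assuming deviation is the weighted share of events in which the gaze did not land where expected, with `k1` and `k2` as placeholders for the patent's coefficients:

```python
def gaze_deviation(limb_moves, gaze_on_hand, gaze_moves, gaze_on_screen,
                   k1=0.5, k2=0.5):
    """Sight line deviation degree (the combination and the coefficient
    values k1, k2 are assumptions; the patent's formula is omitted)."""
    # share of limb movements the gaze failed to follow while speaking
    miss_hand = 1.0 - gaze_on_hand / limb_moves if limb_moves else 0.0
    # share of gaze movements that left the screen while listening
    miss_screen = 1.0 - gaze_on_screen / gaze_moves if gaze_moves else 0.0
    return k1 * miss_hand + k2 * miss_screen
```

With this convention a candidate whose gaze always lands on the expected target scores zero deviation.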
5. The internet-based talent online interview data analysis system of claim 1, wherein: the voice recognition module is connected with the voice coherence degree analysis module,
the voice recognition module recognizes the voice throughout the interview when the interviewer joins the online video interview, and judges whether the interviewer or the examiner is currently speaking,
if the current voice is uttered by the interviewer, then, over the time period in which the interviewer speaks, the voice coherence degree analysis module counts the interviewer's pauses and the duration of each pause and further calculates the interviewer's total pause time; the voice coherence degree analysis module calculates the ratio of the total pause time to the speaking period, further computes this ratio for every one of the interviewer's speaking turns in the whole interview, and calculates the average of all the ratios; the voice coherence degree analysis module pre-stores ratio-average intervals and a coherence evaluation value corresponding to each interval;
if the current voice is uttered by the examiner, then, over the time period in which the examiner speaks, the voice coherence degree analysis module counts the number of times the interviewer's voice occurs; the module pre-stores voice-occurrence-count intervals and a coherence evaluation value corresponding to each interval.
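The pause-ratio side of the claim can be sketched as follows; the interval table `coherence_lookup` is a hypothetical stand-in for the pre-stored intervals, whose actual bounds and evaluation values are not in the text:

```python
def mean_pause_ratio(turns):
    """turns: list of (pause_durations, speaking_time) per speaking turn.
    Returns the average of total-pause / speaking-time over all turns."""
    ratios = [sum(pauses) / t for pauses, t in turns]
    return sum(ratios) / len(ratios)


def coherence_lookup(ratio_avg):
    """Hypothetical pre-stored mapping from a ratio-average interval to
    its coherence evaluation value (higher = more coherent speech)."""
    if ratio_avg < 0.1:
        return 3.0
    if ratio_avg < 0.3:
        return 2.0
    return 1.0
```

For example, two turns with 2 s of pauses in 10 s and 0.5 s of pauses in 5 s average to a ratio of 0.15, which this hypothetical table maps to a mid-range coherence value.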
6. the internet-based talent online interview data analysis system of claim 1, wherein: the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action chaos degree analysis module, the sight line position deviation degree analysis module and the voice coherence degree analysis module,
the interview data comprehensive analysis module further obtains the interview background chaos degree, the limb action disorder degree, the sight line position deviation degree and the voice coherence degree, comprehensively analyzes the interview data, and further calculates a comprehensive analysis evaluation value, wherein the weighting coefficients in the calculation are all greater than 0 and less than 1.
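The combining formula itself is an omitted image; claim 6 only fixes that every coefficient lies in (0, 1). A sketch under the assumption that the three penalty indicators subtract and coherence adds, with hypothetical coefficient values:

```python
def composite_score(chaos, disorder, gaze_dev, coherence, weights):
    """Comprehensive analysis evaluation value.

    weights = (a, b, c, d); the sign convention (penalties subtract,
    coherence adds) is an assumption — only 0 < w < 1 comes from claim 6.
    """
    a, b, c, d = weights
    if not all(0.0 < w < 1.0 for w in weights):
        raise ValueError("claim 6 requires every coefficient in (0, 1)")
    return d * coherence - (a * chaos + b * disorder + c * gaze_dev)
```

A tidy background, still posture, steady gaze and coherent speech then all push the evaluation value upward under this sign convention.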
7. An internet-based talent online interview data analysis method, characterized in that the interview data analysis method comprises the following steps:
s1: analyzing the degree of confusion of the interviewer background;
s2: analyzing the degree of disorder of the body movements of the interviewer;
s3: analyzing the visual line position deviation degree of the interviewer;
s4: analyzing the voice coherence degree of the interviewer;
s5: calculating a comprehensive evaluation of the interviewer according to the interview background chaos degree, the limb action disorder degree, the sight line position deviation degree and the voice coherence degree;
the interview data analysis method further comprises the following steps:
s1-1: the interview background acquisition module is connected with the interview background chaos degree analysis module; the interview background acquisition module acquires an interview background picture of the interviewer when the interviewer joins the online video interview; the interview background chaos degree analysis module obtains the acquired interview background picture from the interview background acquisition module and analyzes the color distribution in it: the module extracts the object colors in the interview background picture, draws a color distribution diagram from the extracted colors, and further analyzes the distribution areas of the various colors in the diagram and the shapes of those areas,
the interview background chaos degree analysis module counts the number of areas according to the color distribution, obtains the shape of each color distribution area, and judges its symmetry; taking a vertical line as the symmetry axis of a color distribution area, the module establishes a plurality of first symmetric points on the edge of the area on one side of the axis, determines the positions of the corresponding second symmetric points on the other side of the axis from the axis and the first symmetric points, and counts how many second symmetric points lie on the edge of the area on the other side;
the second symmetric points that do not lie on that edge are marked, the distance from each marked point to the edge on the other side is obtained, and the number of marked points whose distance is less than or equal to a distance threshold is counted;
the interview background chaos degree analysis module records the number of first symmetric points, calculates the symmetry of each color distribution area from these counts, and further calculates the chaos degree of the interview background, wherein N is the number of color distribution areas;
s2-1: the limb action disorder degree analysis module is connected with the limb action acquisition module and the limb position positioning module,
the limb action acquisition module acquires the limb actions of the interviewer when the interviewer joins the online video interview; the limb position positioning module establishes a plurality of moving reference points on the interviewer's limbs according to the acquired limb actions; the limb action disorder degree analysis module analyzes the disorder degree of the interviewer's limb actions according to the position moving frequency and position moving pattern of the moving reference points,
the limb action disorder degree analysis module takes the first-acquired moving reference point positions on all limbs as first origin point positions and, starting from the moment the first origin points are established, analyzes the moving frequency of all first origin point positions within a certain time period; whenever the moving distance of a first origin point is greater than or equal to a first preset value, one movement of that point is recorded; the module counts the number of movements of all first origin point positions and, from the number of movements and the length of the period, calculates the moving frequency of each first origin point within the period, then obtains the maximum moving frequency within the period; the limb action disorder degree analysis module pre-stores moving frequency intervals and a disorder evaluation value corresponding to each interval, determines the interval to which the maximum moving frequency belongs, and looks up and records the corresponding disorder evaluation value,
when the time period ends, the limb action disorder degree analysis module takes its end time as the start time of the next period and the moving reference point positions acquired at that moment as second origin point positions, and analyzes the moving frequency of all second origin point positions within the next period, and so on until the video interview ends, obtaining the moving frequencies up to the nth origin point positions; the limb action disorder degree analysis module calculates the disorder degree of the limb actions from these moving frequencies, using the disorder evaluation value corresponding to the maximum moving frequency of each of the n origin point sets;
s3-1: the sight line position positioning module is connected with the sight line position deviation analysis module, the limb position positioning module and the voice recognition module,
the sight line position positioning module locks the interviewer's sight line position when the interviewer joins the online video interview, and the voice recognition module distinguishes the voices of the two parties to the interview,
if the currently recognized voice is uttered by the interviewer, the interviewer's limb positions are obtained through the limb position positioning module; when the moving distance of a limb position is greater than or equal to a first preset value, one limb action is recorded; the sight line position deviation analysis module obtains the interviewer's current sight line position through the sight line position positioning module and further obtains the moving limb part; if the moving part is a hand, it judges whether the sight line position corresponds to the interviewer's hand position, and records the interviewer's number of limb movements and the number of times the sight line position corresponds to the hand position;
if the currently recognized voice is uttered by the examiner, the interviewer's current sight line position is obtained through the sight line position positioning module, and the sight line position deviation analysis module obtains the interviewer's number of sight line movements and the number of times the sight line position corresponds to the on-screen position;
the sight line position deviation analysis module further calculates the interviewer's sight line deviation degree from these counts, wherein the weighting terms are coefficients;
s4-1: the voice recognition module is connected with the voice coherence degree analysis module; when the interviewer joins the online video interview, the voice recognition module recognizes the voice throughout the interview and judges whether the interviewer or the examiner is currently speaking,
if the current voice is uttered by the interviewer, then, over the time period in which the interviewer speaks, the voice coherence degree analysis module counts the interviewer's pauses and the duration of each pause and further calculates the interviewer's total pause time; the voice coherence degree analysis module calculates the ratio of the total pause time to the speaking period, further computes this ratio for every one of the interviewer's speaking turns in the whole interview, and calculates the average of all the ratios; the voice coherence degree analysis module pre-stores ratio-average intervals and a coherence evaluation value corresponding to each interval;
if the current voice is uttered by the examiner, then, over the time period in which the examiner speaks, the voice coherence degree analysis module counts the number of times the interviewer's voice occurs; the module pre-stores voice-occurrence-count intervals and a coherence evaluation value corresponding to each interval;
the voice coherence degree analysis module further calculates the interviewer's voice coherence degree, wherein the weighting terms are coefficients;
s5-1: the interview data comprehensive analysis module is connected with the interview background chaos degree analysis module, the limb action disorder degree analysis module, the sight line position deviation analysis module and the voice coherence degree analysis module,
the interview data comprehensive analysis module further obtains the interview background chaos degree, the limb action disorder degree, the sight line position deviation degree and the voice coherence degree, comprehensively analyzes the interview data, and further calculates a comprehensive analysis evaluation value, wherein the weighting terms are coefficients.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110821797.1A CN113269541B (en) | 2021-07-21 | 2021-07-21 | Talent online interview data analysis system and method based on Internet |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113269541A CN113269541A (en) | 2021-08-17 |
CN113269541B true CN113269541B (en) | 2021-11-02 |
Family
ID=77236937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110821797.1A Active CN113269541B (en) | 2021-07-21 | 2021-07-21 | Talent online interview data analysis system and method based on Internet |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113269541B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101848378A (en) * | 2010-06-07 | 2010-09-29 | 中兴通讯股份有限公司 | Domestic video monitoring device, system and method |
CN111553364A (en) * | 2020-04-28 | 2020-08-18 | 支付宝(杭州)信息技术有限公司 | Picture processing method and device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180158027A1 (en) * | 2015-11-27 | 2018-06-07 | Prasad Venigalla | System and method for evaluating a candidate curriculum vitae intelligent quotient (cviq) score |
CN110135800A (en) * | 2019-04-23 | 2019-08-16 | 南京葡萄诚信息科技有限公司 | A kind of artificial intelligence video interview method and system |
CN112651714A (en) * | 2020-12-25 | 2021-04-13 | 北京理工大学深圳研究院 | Interview evaluation method and system based on multi-mode information |
CN112818741A (en) * | 2020-12-29 | 2021-05-18 | 南京智能情资创新科技研究院有限公司 | Behavior etiquette dimension evaluation method and device for intelligent interview |
CN112884326A (en) * | 2021-02-23 | 2021-06-01 | 无锡爱视智能科技有限责任公司 | Video interview evaluation method and device based on multi-modal analysis and storage medium |
- 2021-07-21: application CN202110821797.1A granted as patent CN113269541B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN113269541A (en) | 2021-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11783645B2 (en) | Multi-camera, multi-sensor panel data extraction system and method | |
US20180308114A1 (en) | Method, device and system for evaluating product recommendation degree | |
US9131053B1 (en) | Method and system for improving call-participant behavior through game mechanics | |
Hammal et al. | Interpersonal coordination of headmotion in distressed couples | |
Shen et al. | Understanding nonverbal communication cues of human personality traits in human-robot interaction | |
US20230177834A1 (en) | Relationship modeling and evaluation based on video data | |
Sun et al. | Towards visual and vocal mimicry recognition in human-human interactions | |
US20180168498A1 (en) | Computer Automated Method and System for Measurement of User Energy, Attitude, and Interpersonal Skills | |
JP2020113197A (en) | Information processing apparatus, information processing method, and information processing program | |
CN113076770A (en) | Intelligent figure portrait terminal based on dialect recognition | |
CN109697556A (en) | Evaluate method, system and the intelligent terminal of effect of meeting | |
CN113269541B (en) | Talent online interview data analysis system and method based on Internet | |
Dunbar et al. | Automated methods to examine nonverbal synchrony in dyads | |
Mizuno et al. | Next-speaker prediction based on non-verbal information in multi-party video conversation | |
KR101996630B1 (en) | Method, system and non-transitory computer-readable recording medium for estimating emotion for advertising contents based on video chat | |
Ishii et al. | Analyzing gaze behavior during turn-taking for estimating empathy skill level | |
Geenen et al. | Visual transcription-A method to analyze the visual and visualize the audible in interaction | |
Nishimura et al. | Speech-driven facial animation by lstm-rnn for communication use | |
Fang et al. | Estimation of cohesion with feature categorization on small scale groups | |
Sánchez-Ancajima et al. | Gesture Phase Segmentation Dataset: An Extension for Development of Gesture Analysis Models. | |
Panagakis et al. | Audiovisual conflict detection in political debates | |
Gault et al. | Continuities and transformations: challenges to capturing information about the'Information Society' | |
Ghazal et al. | Intellimeet: Collaborative mobile framework for automated participation assessment | |
US12107699B2 (en) | Systems and methods for creation and application of interaction analytics | |
Shiota et al. | Leader identification using multimodal information in multi-party conversations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||