CN110880081A - Employee management method and device based on voice recognition, computer equipment and medium - Google Patents

Employee management method and device based on voice recognition, computer equipment and medium

Info

Publication number
CN110880081A
Authority
CN
China
Prior art keywords
target
employee
determining
voice
voice information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911198422.3A
Other languages
Chinese (zh)
Inventor
艾潇
张明洋
徐浩
梁志婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Miaozhen Information Technology Co Ltd
Miaozhen Systems Information Technology Co Ltd
Original Assignee
Miaozhen Systems Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Miaozhen Systems Information Technology Co Ltd filed Critical Miaozhen Systems Information Technology Co Ltd
Priority to CN201911198422.3A priority Critical patent/CN110880081A/en
Publication of CN110880081A publication Critical patent/CN110880081A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311 Scheduling, planning or task assignment for a person or group
    • G06Q10/063118 Staff planning in a project environment
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/26 Speech to text systems
    • G10L2015/088 Word spotting

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Tourism & Hospitality (AREA)
  • Theoretical Computer Science (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Game Theory and Decision Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an employee management method, device, computer equipment and medium based on voice recognition. The method comprises the following steps: acquiring a voice set, where the voice set comprises dialogue voice information corresponding to a target employee within a specified time period; determining the occurrence of a target keyword in the dialogue voice information; and determining a work strategy of the target employee according to the occurrence of the target keyword. In the embodiments of the application, the dialogue voice information produced by the target employee while working is obtained, and the work strategy of the target employee is determined according to the occurrence of the target keyword in that information. This is equivalent to determining the work strategy according to the employee's actual working condition, so arranging the target employee's work with this strategy can mobilize the target employee's maximum work efficiency.

Description

Employee management method and device based on voice recognition, computer equipment and medium
Technical Field
The present application relates to the field of data analysis, and in particular, to a method, an apparatus, a computer device, and a medium for employee management based on voice recognition.
Background
The pace of social development keeps accelerating, and physical stores on the market keep multiplying. A store employs many people across many work posts, and the working hours of the posts differ, so when employees work in a store the merchant (employer) needs to arrange a suitable work strategy for each of them.
At present, the merchant arranges an employee's work strategy according to the information the employee provides, such as work experience, length of service and educational background. However, that information may be false; when the information an employee provides is far from the real situation, an unreasonable work strategy is easily produced and the employee's work efficiency is reduced.
Disclosure of Invention
In view of the above, an object of the present application is to provide a method, an apparatus, a computer device and a medium for managing employees based on voice recognition, so as to solve the problem of how to improve the work efficiency of the employees in the prior art.
In a first aspect, an embodiment of the present application provides an employee management method based on voice recognition, including:
acquiring a voice set; the voice set comprises dialogue voice information corresponding to the target employee within a specified time period;
determining the occurrence condition of a target keyword in the dialogue voice information;
and determining the working strategy of the target employee according to the occurrence condition of the target keyword.
Optionally, the work policy includes work time, and determining the work policy of the target employee according to the occurrence of the target keyword includes:
determining the preferred working time of the target employee according to the distribution of the dialogue voice information containing the target keyword over different time periods;
and determining the work shop of the target employee according to the preferred working time and the passenger-flow peak time of each shop.
Optionally, the target keywords include positive keywords and negative keywords, and determining the work policy of the target employee according to the occurrence of the target keywords includes:
and determining the working strategy of the target employee according to the appearance condition of the positive keywords and the appearance condition of the negative keywords in the dialogue voice information.
Optionally, the determining the work policy of the target employee according to the occurrence of the target keyword includes:
acquiring a monitoring video of the target employee;
determining the number of violations made by the target employee in the surveillance video;
and determining the work strategy of the target employee according to the occurrence of the target keyword and the number of violations.
Optionally, the obtaining of the dialog voice information includes the following steps:
acquiring voice data corresponding to the target staff in a specified time period;
judging whether the duration of each dialogue blank in the voice data exceeds a dialogue blank threshold value or not;
and if the duration of the dialogue blank exceeds the dialogue blank threshold, dividing the voice data by using the dialogue blank to obtain dialogue voice information.
Optionally, the determining the occurrence of the target keyword in the dialog voice message includes:
separating first voice information sent by the target employee from the conversation voice information;
and determining the occurrence condition of the target keyword in the first voice message.
Optionally, the determining the work policy of the target employee according to the occurrence of the target keyword includes:
separating second voice information sent by a client from the conversation voice information;
determining the occurrence condition of an evaluation keyword in the second voice message;
and determining the working strategy of the target employee according to the occurrence condition of the target keyword and the occurrence condition of the evaluation keyword.
In a second aspect, an embodiment of the present application provides an employee management device based on voice recognition, including:
the acquisition module is used for acquiring a voice set; the voice set comprises dialogue voice information corresponding to the target employee within a specified time period;
the processing module is used for determining the occurrence condition of the target keywords in the dialogue voice information;
and the determining module is used for determining the working strategy of the target employee according to the occurrence condition of the target keyword.
In a third aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the above employee management method when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the above employee management method.
The embodiment of the application provides an employee management method based on voice recognition. The method first acquires a voice set, where the voice set comprises dialogue voice information corresponding to the target employee within a specified time period; then determines the occurrence of the target keyword in the dialogue voice information; and finally determines the work strategy of the target employee according to the occurrence of the target keyword. In the prior art, an employee's work strategy is determined from information provided by the employee; however, that information may be false, and a work strategy determined from false information does not match the employee's actual ability, which reduces the employee's work efficiency. In this application, the dialogue voice information produced by the target employee while working is obtained, and the work strategy of the target employee is determined according to the occurrence of the target keyword in that information. This is equivalent to determining the work strategy according to the employee's actual working condition, so arranging the target employee's work with this strategy can mobilize the target employee's maximum work efficiency.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic basic flow chart of an employee management method based on speech recognition according to an embodiment of the present application;
fig. 2 is a schematic basic flow chart of another employee management method based on speech recognition according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an employee management device based on voice recognition according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device 400 according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, the employee information provided by the employee is used to determine a corresponding work strategy for the employee, which is a necessary step whenever an employee starts work. However, the information provided by the employee is likely to contain false information, and once a work strategy is determined for the employee according to false information, the strategy does not match the employee's actual work efficiency, so the employee's work efficiency is reduced.
In order to solve the above problem, as shown in fig. 1, the present application provides a method for managing employees based on voice recognition, including:
s101, acquiring a voice set; the voice set comprises dialogue voice information corresponding to the target employee within a specified time period;
s102, determining the occurrence condition of the target keywords in the dialogue voice information;
and S103, determining the working strategy of the target employee according to the occurrence condition of the target keyword.
In step S101, a plurality of dialogue voices are stored in the voice set, and the dialogue voice information is the voice recorded when the target employee converses with a client. To prevent the acquired dialogue voice information from being confused between employees, each employee has an ID number that is bound to the voice acquiring device corresponding to that employee, so the dialogue voice information corresponding to each employee is only the information collected by the voice acquiring device bound to that employee's ID number. The voice acquiring device may be a mobile terminal with a pickup microphone; the application is not limited herein. The specified time period defines when the employee's dialogue voice information is acquired: only dialogue voice information acquired within the specified time period is valid and can serve as a basis for determining the work strategy of the target employee. The specified time period can be specified manually and may be 12 hours, one day, one week, one month, etc.; the application is not limited herein.
In step S102, the target keyword may be specified manually, and the occurrence of the target keyword in the dialogue voice information reflects the working state of the target employee. The occurrence of the target keyword may be represented by the number of times the target keyword appears in the dialogue voice information, or by the frequency with which it appears in the voice text corresponding to the dialogue voice information. The target keywords can comprise positive keywords and negative keywords: the more positive keywords appear, the better the working condition of the target employee; conversely, the more negative keywords appear, the worse the working condition. The positive keywords may include welcome, this request, order, etc., and the application is not limited herein. Negative keywords may include: I do not go, do not stand out, do not mean, etc., and the application is not limited herein.
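The counting of keyword occurrences described in step S102 can be illustrated with a short sketch. The following Python example is not part of the disclosure: it assumes the dialogue voice information has already been converted to text by a speech-to-text engine, and the keyword lists and transcripts are hypothetical placeholders.

```python
from collections import Counter

# Hypothetical keyword lists; in practice they are specified manually, as described above.
POSITIVE_KEYWORDS = ["welcome", "this way please", "may I take your order"]
NEGATIVE_KEYWORDS = ["I can't", "not my job"]

def keyword_occurrences(transcripts, keywords):
    """Count how often each keyword appears across the transcribed dialogue
    voice information of one target employee."""
    counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for kw in keywords:
            counts[kw] += lowered.count(kw.lower())
    return counts

# Made-up transcripts of two dialogue segments.
transcripts = [
    "Welcome, this way please. May I take your order?",
    "I can't help with that, not my job.",
]
positive = keyword_occurrences(transcripts, POSITIVE_KEYWORDS)
negative = keyword_occurrences(transcripts, NEGATIVE_KEYWORDS)
print(sum(positive.values()), sum(negative.values()))  # 3 positive, 2 negative
```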
In step S103, the work policy may reflect the work efficiency of the target employee, the reasonable work policy may improve the work efficiency of the target employee, and the unreasonable work policy may reduce the work efficiency of the target employee. The work strategy may include work time, work position, etc., and the application is not limited herein.
Further, the work strategy of the target employee is determined according to the target keywords, where the target keywords include positive keywords and negative keywords, and step S103 includes:
determining the work strategy of the target employee according to the occurrence of the positive keywords and the occurrence of the negative keywords in the dialogue voice information.
The operating strategy may include the following two aspects.
When the work strategy is working time, determining the working time of the target employee according to the occurrence of the positive keywords and the occurrence of the negative keywords in the dialogue voice information comprises:
step 1031, for each time period, determining the working time of the target employee according to the ratio of the number of occurrences of the positive keywords in the dialogue voice information to the number of occurrences of the negative keywords in the dialogue voice information.
When the work strategy is the work post, determining the work post of the target employee according to the occurrence of the positive keywords and the occurrence of the negative keywords in the dialogue voice information comprises:
step 1032, for each work post, judging whether the target employee is suitable for the work post according to the ratio of the number of occurrences of the positive keywords corresponding to that post in the dialogue voice information to the number of occurrences of the negative keywords corresponding to that post. If the ratio is greater than or equal to a preset threshold, the target employee is determined to be suitable for the work post; if the ratio is smaller than the preset threshold, the target employee is determined to be unsuitable for the work post.
In step S1031, the working condition of the target employee in the time period can be reflected according to the ratio of the number of occurrences of the positive keyword in the dialogue voice message to the number of occurrences of the negative keyword in the dialogue voice message, where a larger value of the ratio indicates a better working condition of the target employee, and a smaller value of the ratio indicates a poorer working condition of the target employee. And sequencing each time interval by utilizing the ratio of the occurrence times of the positive keywords in the conversation voice information to the occurrence times of the negative keywords in the conversation voice information, and taking the time interval in the front of the sequencing as the working time of the target staff.
For example, the ratio of the number of occurrences of the positive keywords to the number of occurrences of the negative keywords in the dialogue voice information is 5.8 for time period 1, 2.2 for time period 2, 0.6 for time period 3, 3.4 for time period 4 and 1 for time period 5. Sorting the time periods by this ratio in descending order gives time period 1, time period 4, time period 2, time period 5, time period 3, so the target employee's work may preferably be scheduled for time periods 1, 4 and 2.
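A minimal sketch of step S1031, not taken from the patent, is shown below. It reuses the ratios from the example above; the underlying per-period keyword counts are hypothetical values chosen only so that the ratios come out to 5.8, 2.2, 0.6, 3.4 and 1.

```python
def rank_periods(period_counts):
    """period_counts maps a time-period label to (positive_count, negative_count).
    Returns the period labels sorted by descending positive/negative ratio."""
    def ratio(counts):
        pos, neg = counts
        return pos / neg if neg else float("inf")  # assumption: no negatives means best case
    return sorted(period_counts, key=lambda p: ratio(period_counts[p]), reverse=True)

# Hypothetical counts reproducing the example ratios (5.8, 2.2, 0.6, 3.4, 1).
period_counts = {
    "time period 1": (29, 5),
    "time period 2": (11, 5),
    "time period 3": (3, 5),
    "time period 4": (17, 5),
    "time period 5": (5, 5),
}
ranking = rank_periods(period_counts)
print(ranking)      # ['time period 1', 'time period 4', 'time period 2', 'time period 5', 'time period 3']
print(ranking[:3])  # preferred working time: periods 1, 4 and 2
```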
In step S1032, different work posts correspond to different positive keywords. For example, when the work post is the welcoming post, the positive keywords are: welcome, please at home, be happy to serve you, etc.; when the work post is the ordering post, the positive keywords are: this is a menu, you please order a meal, etc. The working condition of the target employee at a work post can be reflected by the ratio of the number of occurrences of the positive keywords to the number of occurrences of the negative keywords in the dialogue voice information: the larger the ratio, the better the working condition of the target employee, and the smaller the ratio, the worse the working condition. The work posts are then sorted by this ratio, and the post ranked first is taken as the work post of the target employee.
For example, the ratio of the number of occurrences of the positive keywords to the number of occurrences of the negative keywords in the dialogue voice information is 6.8 for work post A, 8.2 for work post B, 1.3 for work post C, 4.6 for work post D and 2.5 for work post E. Sorting the work posts by this ratio in descending order gives work post B, work post A, work post D, work post E, work post C, so the work post of the target employee may preferably be work post B.
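Step S1032 can be sketched in the same way. The code below is an illustrative assumption, not the patent's implementation: it takes the per-post ratios from the example above, applies the preset threshold test, and also picks the highest-ranked post. The threshold value 3.0 is hypothetical.

```python
def suitable_posts(post_ratios, threshold):
    """Step 1032 sketch: a post is suitable when the ratio of occurrences of its
    positive keywords to occurrences of its negative keywords reaches the threshold."""
    return {post: ratio >= threshold for post, ratio in post_ratios.items()}

def best_post(post_ratios):
    """Pick the post with the highest ratio, as in the example above."""
    return max(post_ratios, key=post_ratios.get)

# Ratios from the worked example; the threshold is a hypothetical preset value.
post_ratios = {"A": 6.8, "B": 8.2, "C": 1.3, "D": 4.6, "E": 2.5}
print(suitable_posts(post_ratios, threshold=3.0))  # posts C and E fall below the threshold
print(best_post(post_ratios))                      # 'B'
```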
Through the above steps, the dialogue voice information of the target employee during work is obtained, and the work strategy of the target employee is determined according to the occurrence of the target keyword in that information. This is equivalent to determining the work strategy according to the employee's actual working condition, so arranging the target employee's work with this strategy can mobilize the target employee's maximum work efficiency.
After the time period in which the target employee's work efficiency is higher has been determined, a work shop can be determined for the employee using that time period. The work strategy can include working time, and step S103 includes:
step 1021, determining the preferred working time of the target employee according to the distribution condition of the dialogue voice information containing the target keyword in different time periods;
and 1022, determining the work shop of the target staff according to the preferred work time and the passenger flow peak time of each shop.
In step 1021, the distribution of the dialog voice message containing the target keyword in different time periods may be the number of occurrences or the frequency of occurrences of the target keyword in each time period, and the preferred working time may be the time period in which the target employee is located when the working efficiency is high. The process of determining the preferred working time of the target employee through the above distribution may refer to step 1031 above.
In step 1022 described above, the passenger-flow peak time reflects a time period in which the shop's passenger flow is high.
When an employee's preferred working time matches a shop's passenger-flow peak time, the target employee is arranged to work in that shop, so that the employee works during the shop's peak hours and serves more customers, which improves the employee's work efficiency.
For example, the working time periods of the target employees are sorted in a descending order according to the distribution conditions of the dialogue voice information containing the target keywords in different time periods, the sorting results are time period 2, time period 3, time period 1, time period 5 and time period 4, and the preferred working time of the target employees is time period 2. There are 3 stores, with the peak flow time of store a being time period 4, the peak flow time of store b being time period 2, and the peak flow time of store c being time period 1. It is determined that the target employee should be assigned to work at store b after matching the preferred work hours of the target employee with the peak hours of each store.
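The matching of steps 1021 and 1022 amounts to a simple lookup; the sketch below, which is an assumption rather than the disclosed implementation, reproduces the example with shops a, b and c.

```python
def assign_store(preferred_period, store_peak_periods):
    """Steps 1021-1022 sketch: pick the store whose passenger-flow peak time
    matches the employee's preferred working time, if any."""
    for store, peak in store_peak_periods.items():
        if peak == preferred_period:
            return store
    return None  # assumption: if no store peaks in the preferred period, leave unassigned

store_peaks = {"store a": "time period 4", "store b": "time period 2", "store c": "time period 1"}
print(assign_store("time period 2", store_peaks))  # 'store b', as in the example
```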
When determining the work strategy for the target employee, besides having the employee wear the voice acquiring device to collect the dialogue voice information, the surveillance video in the shop may be used as another basis for judgment. In this case, step S103 includes:
step 1033, acquiring a monitoring video of the target employee;
step 1034, determining the number of the illegal behaviors made by the target employees in the monitoring video;
and 1035, determining the work strategy of the target employee according to the occurrence condition of the target keyword and the number of the illegal operations.
In step 1033, the surveillance video may monitor the behavior of the target employee. The surveillance video may be acquired by a camera device, which may be a surveillance camera.
In step 1034, the violation may be specified manually and may be, for example, secretly eating or sitting idle for long periods; the application is not limited herein. Violations can be recognized in the surveillance video by using an image recognition technology; face recognition is then performed on the images corresponding to the violations to determine which violations were made by the target employee, and the number of violations made by the target employee is counted. The larger the number of violations, the worse the working state of the target employee; the smaller the number, the better the working state.
In step 1035, the target employee can be constrained in terms of both language and behavior according to the occurrence of the target keyword and the number of violations, so that the work efficiency of the target employee is improved.
Specifically, the work strategy of the target employee may be determined according to the ratio of the number of occurrences of the target keyword to the number of violations: the larger the ratio, the higher the work efficiency of the target employee, and the smaller the ratio, the lower the work efficiency. The preferred working time of the target employee can then be determined from this ratio, or the preferred work post can be determined from the ratio of the number of occurrences of the target keywords corresponding to each work post to the number of violations, and a suitable work shop is then matched to the target employee according to the preferred working time and the preferred work post.
Alternatively, the number of occurrences of the target keyword and the number of violations can be combined by weighted summation to determine the work strategy of the target employee: the larger the weighted sum, the higher the work efficiency of the target employee, and the smaller the weighted sum, the lower the work efficiency. The preferred working time of the target employee can then be determined from the weighted sum, or the preferred work post can be determined from the weighted sum computed for each work post, and a suitable work shop is then matched to the target employee according to the preferred working time and the preferred work post.
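Both combinations can be sketched as follows. This is a hedged illustration only: the patent does not specify the weights, so the example gives the violation count a negative weight so that more violations lower the score, matching the stated intent that a larger value corresponds to higher work efficiency.

```python
def ratio_score(keyword_count, violation_count):
    """Higher ratio of target-keyword occurrences to violations means higher efficiency."""
    return keyword_count / violation_count if violation_count else float("inf")

def weighted_score(keyword_count, violation_count, w_keyword=1.0, w_violation=-2.0):
    """Weighted-summation variant; the weights are hypothetical and would be tuned
    by the merchant. Violations are weighted negatively so that more violations
    lower the score."""
    return w_keyword * keyword_count + w_violation * violation_count

print(ratio_score(40, 4))     # 10.0
print(weighted_score(40, 4))  # 32.0
```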
In this application, two types of voice can be acquired by the voice acquiring device: the first is a voice whose duration equals the specified time period; the second is a voice whose duration is shorter than the specified time period and which occurs within the specified time period.
For the first voice, in order to determine the occurrence of the target keyword in the voice, the following steps may be adopted to achieve the acquisition of the dialogue voice information:
step 1011, acquiring voice data corresponding to the target staff in a specified time period;
step 1012, determining whether the duration of each dialog blank exceeds a dialog blank threshold for each dialog blank in the voice data; and if the duration of the dialogue blank exceeds the dialogue blank threshold, dividing the voice data by using the dialogue blank to obtain dialogue voice information.
In the above step 1011, the voice data may be voice acquired by the voice acquiring apparatus within a specified time period and having a duration of the specified time period.
In step 1012, a dialogue blank reflects that the employee is not speaking; if a dialogue blank exists in the voice corresponding to the employee, it indicates that the employee spoke two separate sections of dialogue. The longer the dialogue blank, the more likely the employee was serving two different customers, so the present application sets a dialogue blank threshold (the threshold may be specified manually and may be 15 seconds, 30 seconds, 1 minute, etc.; the application is not limited herein). When the dialogue blank between two adjacent sections of dialogue in the voice data exceeds the dialogue blank threshold, the employee is considered to have served two customers; when it does not exceed the threshold, the employee is considered to have served one customer. The voice data is therefore segmented into at least one piece of dialogue voice information at the dialogue blanks that exceed the threshold, so the voice data spanning the specified time period is divided into a plurality of pieces of dialogue voice information in which the occurrence of the target keyword can be determined.
For example, the voice data uttered by the employee is a 10-hour voice, the threshold of the dialog blanks is 30 seconds, and there are 100 dialog blanks in the 10-hour voice, where the duration of 24 dialog blanks is less than or equal to 30 seconds, and the duration of 76 dialog blanks is greater than 30 seconds, and then the 10-hour voice is divided by using the dialog blanks whose duration is greater than 30 seconds, so as to generate 77 pieces of voice information.
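A sketch of the segmentation in steps 1011 and 1012 is given below. It is an assumption, not the disclosed implementation: the dialogue blanks are taken as (start, end) times in seconds produced by an upstream silence detector that is not shown, and only blanks longer than the threshold are used as cut points.

```python
def split_by_blanks(total_duration, blanks, blank_threshold):
    """Cut the voice data at every dialogue blank whose duration exceeds the
    threshold. `blanks` is a list of (start, end) times in seconds."""
    segments, cursor = [], 0.0
    for start, end in sorted(blanks):
        if end - start > blank_threshold:
            segments.append((cursor, start))  # one piece of dialogue voice information
            cursor = end
    segments.append((cursor, total_duration))
    return segments

# Toy example: 600 s of audio with blanks of 10 s, 40 s and 35 s; threshold 30 s.
blanks = [(100, 110), (200, 240), (400, 435)]
print(split_by_blanks(600, blanks, blank_threshold=30))
# [(0.0, 200), (240, 400), (435, 600)]  ->  2 long blanks yield 3 dialogue segments
```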
Because a store has heavy foot traffic and a noisy environment, the dialogue voice information acquired by the voice acquiring device may contain more than the conversation between the target employee and the client. Therefore, in order to obtain a more reasonable work strategy for the target employee, the application provides a method for more accurately determining the occurrence of the target keyword in the dialogue voice information, and step S102 includes:
separating first voice information sent by the target employee from the conversation voice information;
and determining the occurrence condition of the target keyword in the first voice message.
In the above steps, the first voice information is the voice information corresponding to the speech uttered by the target employee, and it reflects the working condition of the target employee. The occurrence of the target keyword in the first voice information is not disturbed by the speech uttered by the client, so the determined occurrence of the target keyword is more accurate, and the work strategy of the target employee determined from it is more reasonable. When separating the first voice information from the dialogue voice information, the first voice information is retained by denoising the dialogue voice information; the denoising may be done in any of the following three ways: voice noise reduction using artificial intelligence, voice noise reduction using a filter, or voice noise reduction using spectral subtraction.
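Of the three noise-reduction options listed above, spectral subtraction is the simplest to sketch. The following example is a rough illustration under stated assumptions (a noise-only stretch of at least one frame is available, for instance from a dialogue blank, and the audio is a float NumPy array); production systems would use overlapping windowed frames and smoothing.

```python
import numpy as np

def spectral_subtraction(signal, noise_sample, frame_len=512):
    """Simplified magnitude spectral subtraction.

    signal       : 1-D float array holding the dialogue voice information
    noise_sample : noise-only stretch of the recording (assumed >= frame_len samples)
    """
    noise_mag = np.abs(np.fft.rfft(noise_sample[:frame_len]))
    cleaned = np.zeros(len(signal))
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[i:i + frame_len]
        spec = np.fft.rfft(frame)
        # Subtract the estimated noise magnitude, keep the original phase.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        cleaned[i:i + frame_len] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame_len)
    return cleaned  # trailing samples shorter than one frame are left as zeros

# Usage sketch: denoised = spectral_subtraction(audio, audio[blank_start:blank_end])
```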
The above methods determine the work strategy of the target employee from the employee's own voice and behavior, obtained via the employee's speech and the surveillance video. Since this basis comes only from the target employee's side, in order to understand the working condition more comprehensively and make a more reasonable work strategy for the target employee, the voice uttered by the client can also be used. As shown in fig. 2, the application provides another employee management method based on voice recognition, in which step S103 includes:
s201, separating second voice information sent by a client from the conversation voice information;
s202, determining the occurrence condition of the evaluation keyword in the second voice message;
and S203, determining the work strategy of the target employee according to the appearance condition of the target keyword and the appearance condition of the evaluation keyword.
In step S201, the second voice information is the voice information corresponding to the voice uttered by the client, and the second voice information may reflect the evaluation of the employee by the client. Separating the second voice information from the conversational voice information may filter out the voice uttered by the target employee from the conversational voice information.
In step S202, the evaluation keyword reflects the client's evaluation of the working condition of the target employee. The occurrence of the evaluation keyword may be the number of times the evaluation keyword appears in the second voice information or the frequency with which it appears. The evaluation keywords may include positive evaluation keywords and negative evaluation keywords. Positive evaluation keywords may be good, great, etc., and the application is not limited herein. Negative evaluation keywords may be you are too bad, you are no good, etc., and the application is not limited herein. The more often (or the more frequently) the positive evaluation keywords appear, the better the working state and the higher the work efficiency of the target employee; the more often (or the more frequently) the negative evaluation keywords appear, the worse the working state and the lower the work efficiency of the target employee.
In step S203, determining the working time or the working post of the target employee according to the occurrence of the target keyword and the occurrence of the evaluation keyword;
When the work strategy is working time, the working time is determined according to the occurrence of the target keyword and the occurrence of the evaluation keyword: the time period in which more positive keywords among the target keywords appear (or appear more frequently) and more positive evaluation keywords among the evaluation keywords appear (or appear more frequently) is taken as the working time of the target employee.
When the work strategy is the work post, the work post is determined according to the occurrence of the target keyword and the occurrence of the evaluation keyword: the work post for which more positive keywords among the target keywords appear (or appear more frequently) and more positive evaluation keywords among the evaluation keywords appear (or appear more frequently) is taken as the work post of the target employee.
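A combined ranking over both signals might look like the sketch below. It is an assumption rather than the patent's method: the two counts are simply added with equal weight, whereas the disclosure only requires that both the target-keyword occurrences and the evaluation-keyword occurrences be considered.

```python
def rank_by_two_signals(candidates):
    """Rank time periods (or work posts) by the occurrences of positive target
    keywords in the employee's speech and of positive evaluation keywords in the
    customers' speech. Equal weighting of the two counts is an assumption."""
    return sorted(candidates, key=lambda c: sum(candidates[c]), reverse=True)

# Hypothetical counts: {candidate: (positive target keywords, positive evaluation keywords)}
periods = {"time period 1": (29, 12), "time period 2": (11, 20), "time period 3": (3, 2)}
print(rank_by_two_signals(periods)[0])  # the period with the best combined showing
```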
As shown in fig. 3, the present application provides an employee management device based on voice recognition, including:
an obtaining module 301, configured to obtain a first voice set; the first voice set comprises dialogue voice information corresponding to the target employee within a specified time period;
a processing module 302, configured to determine occurrence of a target keyword in the dialog voice message;
and the determining module 303 is configured to determine the work policy of the target employee according to the occurrence condition of the target keyword.
Optionally, the work policy includes a work time, and the determining module 303 includes:
a shop determination module: the system is used for determining the preferred working time of the target employee according to the distribution condition of the dialogue voice information containing the target keyword in different time periods; and determining the work shop of the target staff according to the preferred work time and the passenger flow peak time of each shop.
Optionally, the target keywords include positive keywords and negative keywords, and the determining module 303, when determining the work policy of the target employee according to the occurrence of the target keywords, includes:
and determining the working strategy of the target employee according to the appearance condition of the positive keywords and the appearance condition of the negative keywords in the dialogue voice information.
Optionally, the determining module 303 includes: the system comprises a first obtaining subunit, an illegal behavior statistical unit and a first working strategy determining unit;
the first acquisition subunit is used for acquiring the monitoring video of the target staff;
the violation behavior counting unit is used for determining the number of violation behaviors made by the target employee in the monitoring video;
and the first work strategy determining unit is used for determining the work strategy of the target employee according to the occurrence condition of the target keyword and the number of the illegal operations.
Optionally, the apparatus further comprises:
the conversation voice information acquisition module: acquiring voice data corresponding to the target staff in a specified time period; judging whether the duration of each dialogue blank exceeds a dialogue blank threshold value or not aiming at each dialogue blank in the voice data; and if the duration of the dialogue blank exceeds the dialogue blank threshold, dividing the voice data by using the dialogue blank to obtain dialogue voice information.
Optionally, when determining the occurrence of the target keyword in the dialog voice message, the processing module 302 includes:
separating first voice information sent by the target employee from the conversation voice information;
and determining the occurrence condition of the target keyword in the first voice message.
Optionally, the determining module 303 includes: the system comprises a second acquisition subunit, an evaluation keyword determining unit and a second working strategy determining unit;
the second acquisition subunit is used for separating second voice information sent by the client from the conversation voice information;
an evaluation keyword determination unit configured to determine an occurrence of an evaluation keyword in the second voice information;
and the second work strategy determining unit is used for determining the work strategy of the target employee according to the occurrence condition of the target keyword and the occurrence condition of the evaluation keyword.
Corresponding to the employee management method based on voice recognition in fig. 1, an embodiment of the present application further provides a computer device 400, as shown in fig. 4, the device includes a memory 401, a processor 402, and a computer program stored on the memory 401 and operable on the processor 402, where the processor 402 implements the steps of the employee management method based on voice recognition when executing the computer program.
Specifically, the memory 401 and the processor 402 can be a general-purpose memory and processor, which are not specifically limited herein. When the processor 402 runs the computer program stored in the memory 401, the above employee management method based on voice recognition can be executed, so as to solve the problem in the prior art of how to improve the work efficiency of employees.
Corresponding to the employee management method based on voice recognition in fig. 1, an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to execute the employee management method based on voice recognition.
Specifically, the storage medium can be a general storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the above employee management method based on voice recognition can be executed, so as to solve the problem in the prior art of how to improve the work efficiency of employees: by obtaining the dialogue voice information of the target employee during work and determining the work strategy of the target employee according to the occurrence of the target keyword in that information, which is equivalent to determining the work strategy according to the employee's actual working condition, arranging the target employee's work with this strategy can mobilize the target employee's maximum work efficiency.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are only specific embodiments of the present application, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions of some technical features within the technical scope disclosed in the present application; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A staff management method based on voice recognition is characterized by comprising the following steps:
acquiring a voice set; the voice set comprises dialogue voice information corresponding to the target employee within a specified time period;
determining the occurrence condition of a target keyword in the dialogue voice information;
and determining the working strategy of the target employee according to the occurrence condition of the target keyword.
2. The employee management method of claim 1, wherein said work policy includes work hours, and said determining a work policy for said target employee based on occurrences of target keywords comprises:
determining the preferred working time of the target employee according to the distribution of the dialogue voice information containing the target keyword over different time periods;
and determining the work shop of the target employee according to the preferred working time and the passenger-flow peak time of each shop.
3. The employee management method of claim 1, wherein said target keywords comprise positive keywords and negative keywords, and said determining a work strategy for said target employee based on occurrences of the target keywords comprises:
and determining the working strategy of the target employee according to the appearance condition of the positive keywords and the appearance condition of the negative keywords in the dialogue voice information.
4. The employee management method of claim 1, wherein said determining a work policy for said target employee based on occurrences of target keywords comprises:
acquiring a monitoring video of the target employee;
determining the number of violations made by the target employee in the surveillance video;
and determining the work strategy of the target employee according to the occurrence of the target keyword and the number of violations.
5. The employee management method according to claim 1, wherein said acquisition of dialogue voice information comprises the steps of:
acquiring voice data corresponding to the target staff in a specified time period;
judging whether the duration of each dialogue blank in the voice data exceeds a dialogue blank threshold value or not;
and if the duration of the dialogue blank exceeds the dialogue blank threshold, dividing the voice data by using the dialogue blank to obtain dialogue voice information.
6. The employee management method of claim 1, wherein said determining the presence of a target keyword in said conversational speech message comprises:
separating first voice information sent by the target employee from the conversation voice information;
and determining the occurrence condition of the target keyword in the first voice message.
7. The employee management method of claim 1, wherein said determining a work policy for said target employee based on occurrences of target keywords comprises:
separating second voice information sent by a client from the conversation voice information;
determining the occurrence condition of an evaluation keyword in the second voice message;
and determining the working strategy of the target employee according to the occurrence condition of the target keyword and the occurrence condition of the evaluation keyword.
8. An employee management device based on speech recognition, comprising:
the acquisition module is used for acquiring a first voice set; the first voice set comprises dialogue voice information corresponding to the target employee within a specified time period;
the processing module is used for determining the occurrence condition of the target keywords in the dialogue voice information;
and the determining module is used for determining the working strategy of the target employee according to the occurrence condition of the target keyword.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201911198422.3A (filed 2019-11-29, priority date 2019-11-29) Employee management method and device based on voice recognition, computer equipment and medium. Status: Pending. Published as CN110880081A (en).

Priority Applications (1)

Application Number: CN201911198422.3A; Priority Date / Filing Date: 2019-11-29; Publication: CN110880081A (en); Title: Employee management method and device based on voice recognition, computer equipment and medium

Publications (1)

Publication Number: CN110880081A (en); Publication Date: 2020-03-13

Family

ID=69729666

Family Applications (1)

Application Number: CN201911198422.3A; Status: Pending; Title: Employee management method and device based on voice recognition, computer equipment and medium

Country Status (1)

Country: CN; Publication: CN110880081A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109003624A (en) * 2018-06-29 2018-12-14 北京百度网讯科技有限公司 Emotion identification method, apparatus, computer equipment and storage medium
CN109639914A (en) * 2019-01-08 2019-04-16 深圳市沃特沃德股份有限公司 Intelligent examining method, system and computer readable storage medium
CN109767787A (en) * 2019-01-28 2019-05-17 腾讯科技(深圳)有限公司 Emotion identification method, equipment and readable storage medium storing program for executing
CN110009273A (en) * 2019-03-06 2019-07-12 秒针信息技术有限公司 Information processing method and device, storage medium, electronic device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149586A (en) * 2020-09-28 2020-12-29 上海翰声信息技术有限公司 Automatic video clip extraction system and method based on neural network

Similar Documents

Publication Publication Date Title
CN110910901B (en) Emotion recognition method and device, electronic equipment and readable storage medium
US9093081B2 (en) Method and apparatus for real time emotion detection in audio interactions
US20170154293A1 (en) Customer service appraisal device, customer service appraisal system, and customer service appraisal method
US11245791B2 (en) Detecting robocalls using biometric voice fingerprints
WO2015007107A1 (en) Device and method for performing quality inspection on service quality of customer service staff
CN110930990A (en) Passenger flow volume statistical method, device, equipment and medium based on voice recognition
KR101795593B1 (en) Device and method for protecting phone counselor
CN110633912A (en) Method and system for monitoring service quality of service personnel
CN108257594A (en) A kind of conference system and its information processing method
CN111754982A (en) Noise elimination method and device for voice call, electronic equipment and storage medium
EP2887627A1 (en) Method and system for extracting out characteristics of a communication between at least one client and at least one support agent and computer program product thereof
US20180247272A1 (en) Dynamic alert system
CN112883932A (en) Method, device and system for detecting abnormal behaviors of staff
CN111052749A (en) Mechanism and tool for metering sessions
CN110581927A (en) Call content processing and prompting method and device
JP2010273130A (en) Device for determining progress of fraud, dictionary generator, method for determining progress of fraud, and method for generating dictionary
CN113506097B (en) On-duty state monitoring method, device, equipment and storage medium
JP2011065304A (en) Server for customer service operation, customer service system using the server and method for calculating prediction end time of customer service operation
CN110880081A (en) Employee management method and device based on voice recognition, computer equipment and medium
CN108090193B (en) Abnormal text recognition method and device
CN116975854B (en) Financial information intelligent storage supervision system and method based on big data
CN111210818B (en) Word acquisition method and device matched with emotion polarity and electronic equipment
CN112036820A (en) Enterprise internal information feedback processing method, system, storage medium and equipment
CN111047362A (en) Statistical management method and system for use activity of intelligent sound box
KR101716748B1 (en) Call classification system and method for managing quality of call center

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200313)