CN112668833A - Staff work arrangement method, device, equipment and medium based on artificial intelligence - Google Patents


Info

Publication number
CN112668833A
CN112668833A (application CN202011347146.5A)
Authority
CN
China
Prior art keywords: health, target employee, employee, state, target
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202011347146.5A
Other languages: Chinese (zh)
Inventor: 吴雨润
Current Assignee: Ping An Puhui Enterprise Management Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Ping An Puhui Enterprise Management Co Ltd
Application filed by Ping An Puhui Enterprise Management Co Ltd
Priority application: CN202011347146.5A (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Publication: CN112668833A

Landscapes

  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application discloses an artificial-intelligence-based employee work scheduling method, apparatus, electronic device, and medium. By applying the technical solution of the application, before an employee starts work, the employee's facial images and voice data are comprehensively recognized, the employee's physical health state for a future time period is obtained on that basis, and the employee's work schedule for that future time period is adjusted accordingly. Employees can thus be cared for promptly when their physical state is poor, which both safeguards their health and improves work efficiency.

Description

Staff work arrangement method, device, equipment and medium based on artificial intelligence
Technical Field
The application relates to data processing technology, and in particular to an artificial-intelligence-based employee work scheduling method, apparatus, electronic device, and medium.
Background
In today's Internet industry, overtime is common, but when people throw themselves into work they often neglect their physical health. Over time this can lead to accidents, and in severe cases even sudden death, causing serious harm to the individual, family, and friends. If an employee's health state could be sensed in real time while working, and care provided promptly when the state is poor, employees' health could be safeguarded and work efficiency improved.
At present, no mature care system exists; most companies only issue benefits at fixed intervals, so the benefits tend to be rigid and cannot provide targeted care. For example, when a project is urgent and overtime increases, the period falls outside the benefit issuance plan and no care plan is available, which dampens employees' enthusiasm and hinders the project's progress. A system that can automatically sense employees' physical and mental states and dynamically adjust the care plan based on those states would solve this problem.
Disclosure of Invention
The embodiments of the application provide an artificial-intelligence-based employee work scheduling method, apparatus, electronic device, and medium. According to one aspect of the embodiments of the application, an artificial-intelligence-based employee work scheduling method is provided, comprising:
performing feature recognition on a facial image of a target employee by using a preset image detection convolutional network to obtain a first health state of the target employee;
performing feature recognition on voice data of the target employee by using a speech detection recurrent network to obtain a first emotional state of the target employee;
determining a health indicator for the target employee for a first future time period based on the first health state, the first emotional state, and historical health parameters of the target employee;
determining a work schedule for the target employee for the first future time period based on the health indicator.
Optionally, in another embodiment based on the foregoing method of the present application, before the performing feature recognition on the facial image of the target employee by using a preset image detection convolutional network, the method further includes:
obtaining a first number of sample images, wherein the sample images include at least one sample facial feature, wherein each sample facial feature is labeled with a corresponding health status;
and training an initial image semantic convolution neural network model by using at least one sample image marked with the health state to obtain the image detection convolution network meeting preset conditions.
Optionally, in another embodiment based on the foregoing method of the present application, the performing feature recognition on the facial image of the target employee by using a preset image detection convolutional network to obtain a first health status of the target employee includes:
extracting at least one of eye region features, cheek region features and eyebrow region features of the facial image of the target employee by using the image detection convolution network;
performing feature recognition on at least one of the eye region features, the cheek region features and the eyebrow region features by using the image detection convolution network to obtain corresponding size parameters and color parameters;
determining a first health status of the target employee based on the size parameter and the color parameter.
Optionally, in another embodiment based on the foregoing method of the present application, the performing feature recognition on the voice data of the target employee by using a speech detection recurrent network to obtain a first emotional state of the target employee includes:
extracting at least one of a timbre feature, a volume feature and a pitch feature of the voice data by using the speech detection recurrent network;
and performing feature recognition on at least one of the timbre feature, the volume feature and the pitch feature by using the speech detection recurrent network to obtain the corresponding first emotional state.
Optionally, in another embodiment based on the method of the present application, the determining the health indicator of the target employee for the first future time period based on the first health status, the first emotional status, and the historical health parameter of the target employee includes:
acquiring the health state and the emotional state of the target employee within a second historical time period;
generating a future health curve of the target employee according to the health state and the emotional state in the second historical time period, the first health state and the first emotional state;
determining a health indicator for the target employee for a first future time period based on the future health curve for the target employee.
Optionally, in another embodiment based on the above method of the present application, the future health curve of the target employee is generated based on the following formula:
Xt = t*cos(dwA) + n*sin(dwB);
Yn = t*cos(dwB) + n*sin(dwA);
wherein X is the horizontal axis of the health curve, Y is the vertical axis of the health curve, t represents time, n represents the health index, dwA is the index corresponding to the user's health state, and dwB is the index corresponding to the user's emotional state.
Optionally, in another embodiment based on the method of the present application, the determining the health indicator of the target employee for the first future time period based on the first health status, the first emotional status, and the historical health parameter of the target employee includes:
acquiring attribute data of the target employee, wherein the attribute data comprises at least one of weight data, heart rate data and height data;
determining a health index of the target employee based on the attribute data of the target employee, the first health status, the first emotional status, and the historical health parameter of the target employee.
According to another aspect of the embodiments of the present application, there is provided an artificial intelligence-based employee work scheduling apparatus, including:
an acquisition module configured to perform feature recognition on a facial image of a target employee by using a preset image detection convolutional network to obtain a first health state of the target employee;
the acquisition module being further configured to perform feature recognition on voice data of the target employee by using a speech detection recurrent network to obtain a first emotional state of the target employee;
a computing module configured to determine a health indicator for the target employee for a first future time period based on the first health state, the first emotional state, and historical health parameters of the target employee;
a determination module configured to determine a work schedule for the target employee at the first future time period based on the health indicator.
According to another aspect of the embodiments of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a processor in communication with the memory, configured to execute the executable instructions so as to perform the operations of any of the artificial-intelligence-based employee work scheduling methods described above.
According to yet another aspect of the embodiments of the present application, there is provided a computer-readable storage medium for storing computer-readable instructions, which when executed perform the operations of any one of the above-mentioned artificial intelligence-based employee work scheduling methods.
In the method, feature recognition may be performed on the facial image of the target employee by using a preset image detection convolutional network to obtain a first health state of the target employee; feature recognition is performed on voice data of the target employee by using a speech detection recurrent network to obtain a first emotional state of the target employee; a health indicator for the target employee for a first future time period is determined based on the first health state, the first emotional state, and historical health parameters of the target employee; and a work schedule for the target employee for the first future time period is determined based on the health indicator. By applying this technical solution, before an employee starts work, the employee can be comprehensively recognized from facial images and voice data, the employee's physical health state for a future time period can be obtained on that basis, and the employee's work schedule for that future time period can be adjusted accordingly. The employee's health state can thus be sensed in real time while working, and care provided promptly when the state is poor, which both safeguards the employee's health and improves work efficiency.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of an artificial-intelligence-based employee work scheduling method set forth in the present application;
FIG. 2 is a schematic structural diagram of an electronic device for staff work arrangement based on artificial intelligence according to the present application;
fig. 3 is a schematic view of an electronic device according to the present disclosure.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In addition, the technical solutions in the embodiments of the present application may be combined with each other, provided that the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, such a combination should be considered absent and outside the protection scope of the present application.
It should be noted that all the directional indicators (such as upper, lower, left, right, front and rear … …) in the embodiment of the present application are only used to explain the relative position relationship between the components, the motion situation, etc. in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indicator is changed accordingly.
A method for performing artificial intelligence based employee work scheduling in accordance with an exemplary embodiment of the present application is described below in conjunction with fig. 1. It should be noted that the following application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
The application also provides a staff work arrangement method and device based on artificial intelligence, a target terminal and a medium.
Fig. 1 schematically shows a flowchart of an artificial intelligence-based employee work scheduling method according to an embodiment of the present application. As shown in fig. 1, the method includes:
s101, carrying out feature identification on the facial image of the target employee by utilizing a preset image detection convolution network to obtain a first health state of the target employee.
First, in order to preliminarily acquire the employee's health state, the embodiment of the application may obtain the employee's facial image for feature recognition, so as to determine the health state from the recognized feature parameters. The facial image may be one generated when the employee clocks in, or one actively uploaded by the user.
Further, in the process of training to obtain the image detection convolutional network, the initial image semantic convolutional neural network model can be specifically trained according to a certain number of sample images containing the corresponding health states, so that the qualified image detection convolutional network is obtained.
Alternatively, the facial features of the target user may include one or more organ features, such as eye features, lip features, forehead features, cheek features, etc. of the target user. Furthermore, the method and the device can detect and analyze the facial features of the target user by using a neural network model, train an initial image semantic convolution neural network model by using at least one sample image marked with the health state, and obtain the image detection convolution network meeting preset conditions.
Still further, for the image semantic convolutional neural network model used, in one embodiment, when the model performs semantic segmentation on a sample image, the more accurate the classification of the pixel points in the sample image, the higher the accuracy of identifying the labeled object in the sample image. It should be noted that the preset condition may be set by the user. For example, the preset condition may be set as: the classification accuracy of the pixel points reaches 70% or more. The sample images are then used to repeatedly train the model, and once its pixel classification accuracy reaches 70% or more, the model can be applied in the embodiments of the application to perform semantic segmentation on facial images.
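The train-until-threshold procedure described above can be sketched in a few lines. The function names, the toy accuracy metric, and the simulated training step are illustrative assumptions, not part of the application:

```python
# Sketch of the "preset condition" loop: repeat training until the
# model's pixel-classification accuracy reaches a user-set threshold
# (70% in the example above). The training step is a placeholder.

def pixel_accuracy(predicted, labeled):
    """Fraction of pixels whose predicted class matches the label."""
    assert len(predicted) == len(labeled)
    correct = sum(p == l for p, l in zip(predicted, labeled))
    return correct / len(labeled)

def train_until_condition(train_step, eval_fn, threshold=0.70, max_rounds=100):
    """Repeat train_step() until eval_fn() meets the preset condition."""
    for round_number in range(1, max_rounds + 1):
        train_step()
        if eval_fn() >= threshold:
            return round_number  # number of training rounds used
    raise RuntimeError("preset condition not met within max_rounds")
```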
And S102, performing feature recognition on voice data of the target employee by using a speech detection recurrent network to obtain a first emotional state of the target employee.
Further, the emotional state of the employee can then be obtained. Specifically, the characteristic parameters of the speech can be recognized by a speech recognition model, so as to determine the user's pitch, volume and timbre during a call.
For example, when the user's physical state and mood are poor, the user's volume may be correspondingly low. Therefore, whether the target user's voice is quieter than usual can be determined from the voice data parameters, and the corresponding emotional state determined.
Further, when the user is unwell, for example suffering from an illness such as a cold, the user's voice may become sharp due to nasal congestion. Therefore, whether the target user's voice is sharper than usual can be determined from the voice data parameters, and the corresponding emotional state determined.
S103, determining the health index of the target employee in the first future time period based on the first health state, the first emotional state and the historical health parameters of the target employee.
One way of determining the employee's health index is to obtain the health state and the emotional state of the target employee within a second historical time period; the second historical time period is not specifically limited and may be, for example, one week or one month. After the health state and emotional state for the historical time period are obtained, a future health curve of the target employee is generated by combining them with the first health state and first emotional state obtained this time.
Specifically, according to the embodiment of the application, the health index of the employee in the future time period can be generated and obtained according to a preset formula. For example, the formula may be:
Xt = t*cos(dwA) + n*sin(dwB); Yn = t*cos(dwB) + n*sin(dwA);
wherein X is the horizontal axis of the health curve, Y is the vertical axis of the health curve, t represents time, n represents the health index, dwA is the index corresponding to the user's health state, and dwB is the index corresponding to the user's emotional state.
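Under the assumption that dwA and dwB are angles in radians (the application does not specify units or scaling), the preset formula can be transcribed directly:

```python
import math

def health_curve_point(t, n, dwA, dwB):
    """One (X, Y) point of the future health curve, per the preset formula.

    t   -- time
    n   -- health index
    dwA -- index corresponding to the user's health state (assumed radians)
    dwB -- index corresponding to the user's emotional state (assumed radians)
    """
    x_t = t * math.cos(dwA) + n * math.sin(dwB)
    y_n = t * math.cos(dwB) + n * math.sin(dwA)
    return x_t, y_n
```

Sweeping t over the future time period with fixed n, dwA and dwB traces out the extended curve described below.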
And S104, determining the work schedule of the target staff in the first future time period based on the health index.
It will be appreciated that when the health indicator shows that the employee has a hidden health risk, the employee may be scheduled to rest for the first future time period; and when the indicator shows no hidden health risk, a normal work rhythm may be arranged for the employee for the first future time period.
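A minimal sketch of this scheduling decision; the numeric risk threshold is an assumption, since the application does not specify how the health indicator maps to a schedule:

```python
# Sketch of S104: below an assumed risk threshold, the employee is
# scheduled to rest; otherwise a normal work rhythm is arranged.

def work_schedule(health_indicator, risk_threshold=0.5):
    """Return the schedule for the first future time period."""
    return "rest" if health_indicator < risk_threshold else "normal"
```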
In the method, feature recognition may be performed on the facial image of the target employee by using a preset image detection convolutional network to obtain a first health state of the target employee; feature recognition is performed on voice data of the target employee by using a speech detection recurrent network to obtain a first emotional state of the target employee; a health indicator for the target employee for a first future time period is determined based on the first health state, the first emotional state, and historical health parameters of the target employee; and a work schedule for the target employee for the first future time period is determined based on the health indicator. By applying this technical solution, before an employee starts work, the employee can be comprehensively recognized from facial images and voice data, the employee's physical health state for a future time period can be obtained on that basis, and the employee's work schedule for that future time period can be adjusted accordingly. The employee's health state can thus be sensed in real time while working, and care provided promptly when the state is poor, which both safeguards the employee's health and improves work efficiency.
Optionally, in a possible embodiment of the present application, before S101 (performing feature recognition on the facial image of the target employee by using a preset image detection convolutional network), the following steps may be performed:
obtaining a first number of sample images, wherein the sample images include at least one sample facial feature, wherein each sample facial feature is labeled with a corresponding health status;
and training the initial image semantic convolution neural network model by using at least one sample image marked with a health state to obtain an image detection convolution network meeting a preset condition.
Further, the method can firstly establish an image detection convolution network model for detecting the face images of the staff, and specifically can train an initial image semantic convolution neural network model according to a certain number of sample images containing corresponding health states, so as to obtain the qualified image detection convolution network.
Specifically, the method can identify at least one object (facial feature) included in the facial image of the employee through a neural network image semantic segmentation model. Furthermore, the neural network image semantic segmentation model may classify each organ feature in the facial features in the target image, and classify the organ features belonging to the same classification into the same type, so that the facial features obtained after semantic segmentation of the target image may be facial features composed of a plurality of different organ features.
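The grouping of same-class organ features described above can be sketched as follows; the pixel/label representation and the class names are illustrative assumptions:

```python
# Sketch of collecting semantically segmented pixels into regions:
# pixels (here, (position, predicted_class) pairs) belonging to the
# same classification are gathered into one organ-feature region.

def group_by_class(pixel_labels):
    """Map each predicted class to the list of pixel positions in it."""
    regions = {}
    for position, cls in pixel_labels:
        regions.setdefault(cls, []).append(position)
    return regions
```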
Optionally, for the neural network image semantic segmentation model used, in an embodiment, the neural network image semantic segmentation model may be trained through the sample image. Specifically, a sample image can be obtained, and a preset neural network image semantic segmentation model is trained by using the sample image to obtain the neural network image semantic segmentation model meeting the preset conditions.
Wherein the sample image includes at least one sample facial feature, which may be the same as the facial features in the embodiments of the present application. For example, sample facial features in the sample image may include eye features, lip features, forehead features, ear features, cheek features, and so forth of the user.
Furthermore, when the neural network image semantic segmentation model performs semantic segmentation processing on the sample image, the more accurate the classification of the pixel points in the sample image is, the higher the accuracy of identifying the labeled object in the sample image is. It should be noted that the preset condition may be set by a user.
For example, the preset condition may be set as: the classification accuracy of the pixel points reaches 70% or more. The sample images are then used to repeatedly train the neural network image semantic segmentation model, and once its pixel classification accuracy reaches 70% or more, the model can be applied in the embodiments of the application to perform semantic segmentation on the target image.
The first number is not particularly limited in the present application, and may be, for example, 10 images or 10,000 images.
Optionally, in a possible implementation manner of the present application, S101 (performing feature recognition on the facial image of the target employee by using a preset image detection convolutional network to obtain a first health state of the target employee) may be implemented through the following steps:
extracting at least one of eye region characteristics, cheek region characteristics and eyebrow region characteristics of the facial image of the target employee by using the image detection convolution network;
performing feature recognition on at least one of eye region features, cheek region features and eyebrow region features by using an image detection convolution network to obtain corresponding size parameters and color parameters;
based on the size parameter and the color parameter, a first health state of the target employee is determined.
Specifically, the health status corresponding to each employee can be determined according to the corresponding size and color condition of the employee by recognizing at least one of the eye region feature, the cheek region feature and the eyebrow region feature of the face of each employee through a pre-trained detection network.
Furthermore, according to the method, the eye region features of the user's facial image can be recognized by the detection model to determine the size state and color state of the eyes (for example, the eyes of someone who has not slept enough become smaller, or dark circles are present), thereby obtaining the employee's health state. Alternatively, the employee's health state may be obtained by recognizing the cheek region features of the user's facial image with the detection model and determining the size state and color state of the cheeks (for example, in a sub-health state the cheeks may be sunken or the face pale).
For example, when the eye region features of the employee Zhang San are extracted by the image detection convolutional network, and it is determined from the features that the color of Zhang San's eye region has a darkness index greater than a certain threshold, the first health state of Zhang San may be determined to be low.
Alternatively, when the cheek region features of the employee Zhang San are extracted by the image detection convolutional network, and it is determined from the features that the size of Zhang San's cheek region is larger than a certain threshold, the first health state of Zhang San may be determined to be low.
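The two threshold examples above can be combined into one illustrative rule; the parameter names and threshold values are assumptions, since the application leaves the concrete thresholds unspecified:

```python
# Sketch of determining the first health state from the recognized
# size and color parameters. Both thresholds are illustrative; the
# application only states that exceeding a threshold yields "low".

def first_health_state(dark_eye_index, cheek_size,
                       dark_threshold=0.6, cheek_threshold=1.2):
    if dark_eye_index > dark_threshold:   # dark circles, as in the example
        return "low"
    if cheek_size > cheek_threshold:      # cheek size above threshold
        return "low"
    return "normal"
```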
Optionally, in a possible implementation manner of the present application, S102 (performing feature recognition on the voice data of the target employee by using the speech detection recurrent network to obtain the first emotional state of the target employee) may be implemented through the following steps:
extracting at least one of a timbre feature, a volume feature and a pitch feature of the voice data by using the speech detection recurrent network;
and performing feature recognition on at least one of the timbre feature, the volume feature and the pitch feature by using the speech detection recurrent network to obtain the corresponding first emotional state.
Specifically, the characteristic parameters of the speech may be recognized by the speech recognition model, so as to determine the user's pitch, volume and timbre during a call. It can be understood that when the user's emotional state is poor, rises in volume and pitch and fluctuations in tone may correspondingly occur; therefore, whether the user is angry or depressed can be determined from the user's voice parameters, and the corresponding emotional state determined.
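A minimal sketch of mapping the recognized speech parameters to an emotional state by comparison with the user's usual (baseline) values; the ratios and the state labels are illustrative assumptions:

```python
# Sketch of the voice-based emotional state decision: compare current
# volume and pitch against the user's usual values, as described above.
# The 0.8/1.2/1.3 ratios are assumed, not specified in the application.

def first_emotional_state(volume, pitch, baseline_volume, baseline_pitch):
    if volume < 0.8 * baseline_volume:
        return "low"        # noticeably quieter than usual
    if pitch > 1.2 * baseline_pitch or volume > 1.3 * baseline_volume:
        return "agitated"   # sharper or louder than usual
    return "neutral"
```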
Optionally, in a possible embodiment of the present application, at S103 (determining a health index of the target employee at a first future time period based on the first health state, the first emotional state, and the historical health parameters of the target employee), the following steps may be implemented:
acquiring the health state and the emotional state of the target employee in a second historical time period;
generating a future health curve of the target employee according to the health state and the emotional state in the second historical time period, the first health state and the first emotional state;
based on the future health curve of the target employee, a health indicator of the target employee for a first future time period is determined.
Optionally, in one possible implementation of the present application, the present application may generate a future health curve of the target employee based on the following formula:
Xt = t*cos(dwA) + n*sin(dwB);
Yn = t*cos(dwB) + n*sin(dwA);
wherein X is the horizontal axis of the health curve, Y is the vertical axis of the health curve, t represents time, n represents the health index, dwA is the index corresponding to the user's health state, and dwB is the index corresponding to the user's emotional state.
A dedicated health curve can be drawn for each employee according to the employee's health state and corresponding emotional state over the historical time period, and extended according to the preset formula to obtain the employee's future health curve.
The historical time period is not particularly limited, and may be, for example, one year, one month, or the like.
Optionally, in a possible embodiment of the present application, S103 (determining a health indicator of the target employee for a first future time period based on the first health state, the first emotional state, and the historical health parameters of the target employee) may also be implemented by the following steps:
acquiring attribute data of a target employee, wherein the attribute data comprises at least one of weight data, heart rate data and height data;
determining a health index of the target employee based on the attribute data, the first health state, the first emotional state, and the historical health parameters of the target employee.
Furthermore, when determining the health indicator of the target employee, the indicator can be determined comprehensively from the employee's weight data, heart rate data and height data. It can be understood that when an employee's weight or heart rate changes sharply within a short time, there may be a problem with that employee's health.
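A hedged sketch of such a comprehensive determination might look like the following; the weights, thresholds, and 0-100 score scales are invented for illustration and are not specified by the patent:

```python
def health_index(weight_change_pct, heart_rate_change_pct,
                 face_score, emotion_score, history_score):
    """Combine facial, emotional and historical scores (each 0-100) into
    one health index, penalizing sharp short-term attribute changes.

    All weights and thresholds below are illustrative assumptions.
    """
    base = 0.4 * face_score + 0.3 * emotion_score + 0.3 * history_score
    penalty = 0.0
    if abs(weight_change_pct) > 5:       # >5% weight change in a short window
        penalty += 10
    if abs(heart_rate_change_pct) > 15:  # >15% resting heart-rate change
        penalty += 10
    return max(0.0, base - penalty)
```

The penalty terms capture the observation above that a large short-term swing in weight or heart rate is itself a warning sign, independent of the other scores.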
Optionally, in another embodiment of the present application, as shown in fig. 2, the present application further provides an artificial intelligence based employee work scheduling apparatus. The apparatus comprises an acquisition module 201, a calculation module 202 and a determination module 203, wherein:
the acquiring module 201 is configured to perform feature recognition on a facial image of a target employee by using a preset image detection convolution network to obtain a first health state of the target employee;
the obtaining module 201 is configured to perform feature recognition on the voice data of the target employee by using a voice detection circulation network to obtain a first emotional state of the target employee;
a calculation module 202 configured to determine a health indicator for the target employee for a first future time period based on the first health state, the first emotional state, and historical health parameters of the target employee;
a determination module 203 configured to determine a work schedule of the target employee during the first future time period based on the health indicator.
In the present application, a preset image detection convolution network may be used to perform feature recognition on the facial image of the target employee to obtain a first health state of the target employee; a voice detection circulation network may be used to perform feature recognition on the voice data of the target employee to obtain a first emotional state of the target employee; a health indicator of the target employee for a first future time period may be determined based on the first health state, the first emotional state, and historical health parameters of the target employee; and a work schedule of the target employee for the first future time period may be determined based on the health indicator. By applying this technical solution, before an employee starts work, the employee can be comprehensively recognized from facial images and voice data, and the employee's physical health state in a future time period can be predicted on that basis, so that the employee's work arrangement for that period can be adjusted accordingly. The employee's health state can thus be perceived in real time during work, and the employee can be cared for promptly when in poor condition, which not only safeguards the employee's health but also improves work efficiency.
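The module flow described above can be sketched end to end as follows. The recognizer and predictor callables are stubbed stand-ins for the convolutional and recurrent networks in the patent, and the 60-point threshold for lightening the workload is an invented assumption:

```python
# Hypothetical sketch of the acquisition -> calculation -> determination flow.
def schedule_employee(face_image, voice_data, history,
                      recognize_face, recognize_voice, predict_indicator):
    health_state = recognize_face(face_image)      # acquiring module 201
    emotional_state = recognize_voice(voice_data)  # acquiring module 201
    # calculation module 202: combine states with historical parameters
    indicator = predict_indicator(health_state, emotional_state, history)
    # determination module 203: lighten the workload when the indicator is low
    return "reduced duties" if indicator < 60 else "normal duties"
```

Passing the recognizers in as arguments keeps the scheduling logic independent of the specific network architectures.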
In another embodiment of the present application, the obtaining module 201 further includes:
an acquisition module 201 configured to acquire a first number of sample images, wherein the sample images include at least one sample facial feature, wherein each sample facial feature is labeled with a corresponding health status;
an obtaining module 201 configured to train an initial image semantic convolution neural network model with at least one sample image labeled with the health status, so as to obtain the image detection convolution network meeting a preset condition.
In another embodiment of the present application, the obtaining module 201 further includes:
an obtaining module 201 configured to extract at least one of an eye region feature, a cheek region feature and an eyebrow region feature of the facial image of the target employee using the image detection convolution network;
an obtaining module 201, configured to perform feature identification on at least one of the eye region features, cheek region features and eyebrow region features by using the image detection convolution network, and obtain corresponding size parameters and color parameters;
an obtaining module 201 configured to determine a first health status of the target employee based on the size parameter and the color parameter.
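As an illustration of mapping the region size and color parameters to a first health state, one might write the following; the parameter names, value ranges, and thresholds are hypothetical:

```python
def face_health_state(eye_openness, cheek_redness, dark_circle_level):
    """Map facial-region size/color parameters (each scaled to [0, 1])
    to a coarse health state. All thresholds are illustrative assumptions.
    """
    if eye_openness < 0.3 or dark_circle_level > 0.7:
        return "fatigued"    # narrowed eyes or dark under-eye circles
    if cheek_redness > 0.8:
        return "feverish"    # unusually flushed cheeks
    return "normal"
```

In the patent these parameters would come from the image detection convolution network's eye, cheek and eyebrow region features rather than from hand-set rules.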
In another embodiment of the present application, the obtaining module 201 further includes:
an obtaining module 201 configured to extract at least one of a tone color feature, a volume feature and a tone feature of the voice data using the voice detection circulation network;
an obtaining module 201 configured to perform feature recognition on at least one of the tone color feature, the volume feature and the tone feature by using the voice detection loop network, and obtain the corresponding first emotional state.
In another embodiment of the present application, the obtaining module 201 further includes:
an obtaining module 201 configured to obtain a health status and an emotional status of the target employee within a second historical time period;
an obtaining module 201 configured to generate a future health curve of the target employee according to the health status and the emotional status in the second historical time period, and the first health status and the first emotional status;
an obtaining module 201 configured to determine a health indicator of the target employee for a first future time period based on a future health curve of the target employee.
In another embodiment of the present application, the method further includes: generating a future health curve for the target employee based on the following formula:
Xt=t*cos(dwA)+n*sin(dwB);
Yn=t*cos(dwB)+n*sin(dwA);
wherein X is the horizontal axis of the health curve, Y is the vertical axis of the health curve, t represents time, n represents the health index, dwA is the index corresponding to the health state of the user, and dwB is the index corresponding to the emotional state of the user.
In another embodiment of the present application, the obtaining module 201 further includes:
an obtaining module 201 configured to obtain attribute data of the target employee, the attribute data including at least one of weight data, heart rate data, and height data;
an obtaining module 201 configured to determine a health indicator of the target employee based on the attribute data of the target employee, the first health state, the first emotional state, and the historical health parameter of the target employee.
Fig. 3 is a block diagram illustrating a logical structure of an electronic device according to an example embodiment. For example, the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
In an exemplary embodiment, there is also provided a non-transitory computer readable storage medium, such as a memory, including instructions executable by a processor of an electronic device to perform the artificial intelligence based employee work scheduling method described above, the method comprising: performing feature recognition on a facial image of a target employee by using a preset image detection convolution network to obtain a first health state of the target employee; performing feature recognition on voice data of the target employee by using a voice detection circulation network to obtain a first emotional state of the target employee; determining a health indicator for the target employee for a first future time period based on the first health state, the first emotional state, and historical health parameters of the target employee; and determining a work schedule for the target employee for the first future time period based on the health indicator. Optionally, the instructions may also be executable by the processor of the electronic device to perform other steps involved in the exemplary embodiments described above. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided an application/computer program product comprising one or more instructions executable by a processor of an electronic device to perform the above artificial intelligence based employee work scheduling method, the method comprising: performing feature recognition on a facial image of a target employee by using a preset image detection convolution network to obtain a first health state of the target employee; performing feature recognition on voice data of the target employee by using a voice detection circulation network to obtain a first emotional state of the target employee; determining a health indicator for the target employee for a first future time period based on the first health state, the first emotional state, and historical health parameters of the target employee; and determining a work schedule for the target employee for the first future time period based on the health indicator. Optionally, the instructions may also be executable by the processor of the electronic device to perform other steps involved in the exemplary embodiments described above.
Fig. 3 is an exemplary diagram of the computer device 30. Those skilled in the art will appreciate that Fig. 3 is merely an example of the computer device 30 and does not constitute a limitation on it; the device may include more or fewer components than those shown, combine certain components, or use different components. For example, the computer device 30 may also include input/output devices, network access devices, buses, and the like.
The Processor 302 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. The general purpose processor may be a microprocessor or the processor 302 may be any conventional processor or the like, and the processor 302 is the control center of the computer device 30 and connects the various parts of the entire computer device 30 using various interfaces and lines.
The memory 301 may be used to store computer readable instructions 303, and the processor 302 may implement various functions of the computer device 30 by running the computer readable instructions or modules stored in the memory 301 and by invoking the data stored in the memory 301. The memory 301 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the computer device 30. In addition, the memory 301 may include a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a Flash memory device, a Read-Only Memory (ROM), a Random Access Memory (RAM), or other non-volatile/volatile storage devices.
The modules integrated by the computer device 30 may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by computer readable instructions that instruct related hardware; the instructions may be stored in a computer readable storage medium, and when executed by a processor, the steps of the method embodiments may be implemented.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings and described above, and that various modifications and changes can be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. An employee work arrangement method based on artificial intelligence, comprising:
performing feature recognition on a facial image of a target employee by using a preset image detection convolution network to obtain a first health state of the target employee;
performing feature recognition on voice data of the target employee by utilizing a voice detection circulation network to obtain a first emotional state of the target employee;
determining a health indicator for the target employee for a first future time period based on the first health state, the first emotional state, and historical health parameters of the target employee;
determining a work schedule for the target employee for the first future time period based on the health indicator.
2. The method of claim 1, wherein, before the performing feature recognition on the facial image of the target employee using the preset image detection convolutional network, the method further comprises:
obtaining a first number of sample images, wherein the sample images include at least one sample facial feature, wherein each sample facial feature is labeled with a corresponding health status;
and training an initial image semantic convolution neural network model by using at least one sample image marked with the health state to obtain the image detection convolution network meeting preset conditions.
3. The method of claim 1 or 2, wherein the performing feature recognition on the facial image of the target employee by using a preset image detection convolutional network to obtain the first health status of the target employee comprises:
extracting at least one of eye region features, cheek region features and eyebrow region features of the facial image of the target employee by using the image detection convolution network;
performing feature recognition on at least one of the eye region features, the cheek region features and the eyebrow region features by using the image detection convolution network to obtain corresponding size parameters and color parameters;
determining a first health status of the target employee based on the size parameter and the color parameter.
4. The method of claim 1, wherein said performing feature recognition on the voice data of the target employee using a voice detection loop network to obtain a first emotional state of the target employee comprises:
extracting at least one of tone color feature, volume feature and tone feature of the voice data by using the voice detection circulation network;
and performing feature recognition on at least one of the tone color feature, the volume feature and the tone feature by using the voice detection circulation network to acquire the corresponding first emotional state.
5. The method of claim 1, wherein determining the health indicator for the target employee for the first future time period based on the first health state, the first emotional state, and historical health parameters of the target employee comprises:
acquiring the health state and the emotional state of the target employee within a second historical time period;
generating a future health curve of the target employee according to the health state and the emotional state in the second historical time period, the first health state and the first emotional state;
determining a health indicator for the target employee for a first future time period based on the future health curve for the target employee.
6. The method of claim 1, wherein the future health curve for the target employee is generated based on the following formula:
Xt=t*cos(dwA)+n*sin(dwB);
Yn=t*cos(dwB)+n*sin(dwA);
wherein X is the horizontal axis of the health curve, Y is the vertical axis of the health curve, t is used for representing time, n is used for representing a health index, dwA is an index corresponding to the health state of the user, and dwB is an index corresponding to the emotional state of the user.
7. The method of claim 1, wherein determining the health indicator for the target employee for the first future time period based on the first health state, the first emotional state, and historical health parameters of the target employee comprises:
acquiring attribute data of the target employee, wherein the attribute data comprises at least one of weight data, heart rate data and height data;
determining a health indicator of the target employee based on the attribute data of the target employee, the first health status, the first emotional status, and the historical health parameter of the target employee.
8. An artificial intelligence based employee work scheduling apparatus comprising:
an acquisition module configured to perform feature recognition on a facial image of a target employee by using a preset image detection convolution network to obtain a first health state of the target employee;
the acquisition module is configured to perform feature recognition on voice data of the target employee by using a voice detection circulation network to obtain a first emotional state of the target employee;
a computing module configured to determine a health indicator for the target employee for a first future time period based on the first health state, the first emotional state, and historical health parameters of the target employee;
a determination module configured to determine a work schedule for the target employee at the first future time period based on the health indicator.
9. An electronic device, comprising:
a memory for storing executable instructions; and
a processor for communicating with the memory to execute the executable instructions so as to perform the operations of the artificial intelligence based employee work scheduling method of any one of claims 1-7.
10. A computer readable storage medium storing computer readable instructions which, when executed, perform the operations of the artificial intelligence based employee work schedule method of any one of claims 1 to 7.
CN202011347146.5A 2020-11-26 2020-11-26 Staff work arrangement method, device, equipment and medium based on artificial intelligence Pending CN112668833A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011347146.5A CN112668833A (en) 2020-11-26 2020-11-26 Staff work arrangement method, device, equipment and medium based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN112668833A true CN112668833A (en) 2021-04-16

Family

ID=75403667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011347146.5A Pending CN112668833A (en) 2020-11-26 2020-11-26 Staff work arrangement method, device, equipment and medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN112668833A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780751A (en) * 2021-08-18 2021-12-10 福建宁德核电有限公司 Nuclear power plant defect management method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766766A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Employee work condition monitoring method, device, computer equipment and storage medium
CN110516593A (en) * 2019-08-27 2019-11-29 京东方科技集团股份有限公司 A kind of emotional prediction device, emotional prediction method and display device
CN111839552A (en) * 2020-07-24 2020-10-30 广州广杰网络科技有限公司 Intelligent physical and mental state recognizer based on 5G + AIoT



Similar Documents

Publication Publication Date Title
CN107895146B (en) Micro-expression recognition method, device and system and computer readable storage medium
CN107392124A (en) Emotion identification method, apparatus, terminal and storage medium
WO2021036664A1 (en) Method and apparatus for identifying customer satisfaction on basis of micro-expressions, terminal and medium
JP6906717B2 (en) Status determination device, status determination method, and status determination program
KR102351008B1 (en) Apparatus and method for recognizing emotions
US20140316216A1 (en) Pet medical checkup device, pet medical checkup method, and non-transitory computer readable recording medium storing program
US11127181B2 (en) Avatar facial expression generating system and method of avatar facial expression generation
US20220230471A1 (en) Artificial Intelligence-Assisted Evaluation Method for Aesthetic Medicine and Evaluation System Using Same
Kamaruddin et al. Human behavior state profile mapping based on recalibrated speech affective space model
Del Líbano et al. Discrimination between smiling faces: Human observers vs. automated face analysis
Dadiz et al. Detecting depression in videos using uniformed local binary pattern on facial features
CN112668833A (en) Staff work arrangement method, device, equipment and medium based on artificial intelligence
CN112149610A (en) Method and system for identifying target object
CN113033387A (en) Intelligent assessment method and system for automatically identifying chronic pain degree of old people
CN113313795A (en) Virtual avatar facial expression generation system and virtual avatar facial expression generation method
WO2020175969A1 (en) Emotion recognition apparatus and emotion recognition method
CN104598866B (en) A kind of social feeling quotrient based on face promotes method and system
KR20210019182A (en) Device and method for generating job image having face to which age transformation is applied
Seanglidet et al. Mood prediction from facial video with music “therapy” on a smartphone
CN115100560A (en) Method, device and equipment for monitoring bad state of user and computer storage medium
TW201839635A (en) Emotion detection system and method
Chin et al. Skin condition detection of smartphone face image using multi-feature decision method
JP2020149361A (en) Expression estimating apparatus, feeling determining apparatus, expression estimating method, and program
WO2023054295A1 (en) Information processing device, information processing method, and program
US11816927B2 (en) Information providing device, information providing method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination