CN112099629A - Method and system for providing work operation guide - Google Patents

Method and system for providing work operation guide

Info

Publication number
CN112099629A
CN112099629A
Authority
CN
China
Prior art keywords
user
mode
action
working mode
deviation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010954387.XA
Other languages
Chinese (zh)
Other versions
CN112099629B (en)
Inventor
吴晓军 (Wu Xiaojun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Jilian Human Resources Service Group Co ltd
Original Assignee
Hebei Jilian Human Resources Service Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Jilian Human Resources Service Group Co ltd filed Critical Hebei Jilian Human Resources Service Group Co ltd
Priority to CN202010954387.XA priority Critical patent/CN112099629B/en
Publication of CN112099629A publication Critical patent/CN112099629A/en
Application granted granted Critical
Publication of CN112099629B publication Critical patent/CN112099629B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/243 - Classification techniques relating to the number of classes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 - Head tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014 - Hand-worn input/output arrangements, e.g. data gloves
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G06N20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/20 - Scenes; Scene-specific elements in augmented reality scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Tourism & Hospitality (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Human Resources & Organizations (AREA)
  • Psychiatry (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Economics (AREA)
  • Social Psychology (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The present disclosure provides a method and system for providing work operation guidance, wherein the method comprises: sensing and identifying the working mode a user is engaged in; analyzing the working mode and acquiring the standard actions and environmental scene for that mode; inputting the user's actual actions and the standard actions obtained by analysis into a machine learning model, and calculating the deviation between the user's actual actions and the standard actions; in combination with the environmental scene, sending a prompt to the user and displaying correct operation guidance if the deviation exceeds a threshold; continuously monitoring whether the user's actions are corrected and, if the user cannot operate normally, sending an early warning to a preset contact; and storing the deviation data.

Description

Method and system for providing work operation guide
Technical Field
The present disclosure relates to the fields of human resource management and sensor technology, and in particular to a method, a system, an electronic device, and a computer-readable storage medium for providing work operation guidance.
Background
Home services encompass a wide range of labor. For example, the work of a domestic helper may include one or more of cooking, mopping, window wiping, childcare, and eldercare, while decoration or repair work may include plumbing, electrical work, bricklaying, plastering, painting, and carpentry. Each working mode requires different skills and earns different compensation per unit of time. Although these jobs differ greatly, they are all short-term or gig jobs, and problems that arise are difficult to trace afterwards.
In the prior art, short-term and gig workers cannot be adequately monitored in real time, and evidence is hard to obtain after an error occurs. There is therefore an urgent need for an identification device that recognizes the user's work, gives professional prompts to avoid danger, and preserves evidence: it manages users, prompts them when their actions are non-standard, raises an alarm when the non-standard actions persist, preserves the evidence, and notifies the employer.
Disclosure of Invention
In view of the above, an object of the embodiments of the present disclosure is to provide a method and a system for providing work operation guidance, which provide work guidance to the user by identifying the working mode in progress, improve the user's skill level, and serve to supervise the work, preserve evidence, and alert the employer.
According to a first aspect of the present disclosure, there is provided a method of providing work operation guidance, comprising:
sensing and identifying the working mode a user is engaged in;
analyzing the working mode, and acquiring the standard actions and environmental scene in the working mode;
inputting the user's actual actions and the standard actions obtained by analysis into a machine learning model, and calculating the deviation between the user's actual actions and the standard actions;
in combination with the environmental scene, if the deviation exceeds a threshold, sending a prompt to the user and displaying correct operation guidance;
continuously monitoring whether the user's actions are corrected, and if the user cannot operate normally, sending an early warning to a preset contact;
storing the deviation data.
In one possible embodiment, the working modes include: a cleaning working mode, a decoration working mode, a caregiver working mode, and a nanny working mode.
In one possible embodiment, in the cleaning working mode, the method for analyzing the working mode further includes: enhancing the analysis of leg-region motion and of the cleaning effect on cleaned objects.
In one possible embodiment, in the decoration working mode, the method for analyzing the working mode further includes: enhancing the analysis of hand-region motion, and proactively reminding the user of the operating specifications.
In one possible embodiment, in the caregiver working mode, the method for analyzing the working mode further includes: enhancing the analysis of medication types and dosages.
In one possible embodiment, in the nanny working mode, the method for analyzing the working mode further includes: enhancing the analysis of the cared-for person's physical form, and analyzing conversations between the user and the cared-for person using a BERT-based machine learning model.
In one possible embodiment, the method further includes: when the conversation involves abuse, notifying the preset contact and storing the data.
According to a second aspect of the present disclosure, there is provided a system for providing work operation guidance, comprising:
a sensing unit for sensing and identifying the working mode a user is engaged in;
an analysis unit for analyzing the working mode and acquiring the standard actions and environmental scene in the working mode;
a deviation calculation unit for inputting the user's actual actions and the standard actions obtained by analysis into a machine learning model and calculating the deviation between the user's actual actions and the standard actions;
a prompt unit for, in combination with the environmental scene, sending a prompt to the user and displaying correct operation guidance if the deviation exceeds a threshold;
an early warning unit for continuously monitoring whether the user's actions are corrected, and if the user cannot operate normally, sending an early warning to a preset contact;
a storage unit for storing the deviation data.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to the first aspect when executing the program.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort. The foregoing and other objects, features, and advantages of the application will be apparent from the accompanying drawings. Like reference numerals refer to like parts throughout the drawings. The drawings are not drawn to scale; emphasis is instead placed on illustrating the subject matter of the present application.
Fig. 1 shows a schematic diagram of a typical identification device for the housekeeping working mode according to an embodiment of the present disclosure.
FIG. 2 illustrates a schematic diagram of a typical work-operation-guidance method according to an embodiment of the present disclosure.
FIG. 3 shows a schematic diagram of acceleration sensor values for a typical hand motion while wiping a window according to an embodiment of the present disclosure.
FIG. 4 illustrates a schematic diagram of an exemplary visual reminder according to an embodiment of the present disclosure.
FIG. 5 illustrates a schematic diagram of a typical work-operation-guidance system according to an embodiment of the present disclosure.
Fig. 6 shows a schematic structural diagram of an electronic device for implementing an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, the terms "comprises", "comprising", and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Home services encompass a wide range of labor. For example, the work of a domestic helper may include one or more of cooking, mopping, window wiping, childcare, and eldercare, while decoration or repair work may include plumbing, electrical work, bricklaying, plastering, painting, and carpentry. Each working mode requires different skills and earns different compensation per unit of time. Although these jobs differ greatly, they are all short-term or gig jobs, and problems that arise are difficult to trace afterwards.
In the prior art, short-term and gig workers cannot be adequately monitored in real time, and evidence is hard to obtain after an error occurs. There is therefore an urgent need for an identification device that recognizes the user's work, gives professional prompts to avoid danger, and preserves evidence: it manages users, prompts them when their actions are non-standard, raises an alarm when the non-standard actions persist, preserves the evidence, and notifies the employer.
In view of the above, an object of the embodiments of the present disclosure is to provide a method and a system for providing work operation guidance, which provide work guidance to the user by identifying the working mode in progress, improve the user's skill level, and serve to supervise the work, preserve evidence, and alert the employer.
The present disclosure is described in detail below with reference to the attached drawings.
Fig. 1 shows a schematic diagram of a typical identification device for the housekeeping working mode according to an embodiment of the present disclosure.
FIG. 2 illustrates a schematic diagram of a typical work-operation-guidance method according to an embodiment of the present disclosure.
Step 201, sensing and recognizing the working mode of the user.
The user wears an identification device, for example the identification device 100 for the housekeeping working mode shown in Fig. 1, which senses the user's movements and identifies the housekeeping working mode. The employer or user can also set the current housekeeping working mode manually. The camera 110 is used to collect video data from the user's first-person perspective. The camera 110 may be worn on the head as part of a wearable device such as a hat, helmet, or glasses, which may have a wireless communication interface, such as WiFi or Bluetooth, to upload the captured video data to a server (not shown) for processing. A computer program deployed on the server extracts spatial features 112 and temporal features 113 from the video data. A plurality of motion sensors, including a bracelet motion sensor 120-1 and a head motion sensor 120-2, are worn on different body parts of the user to detect the corresponding motion-sensing data. Specifically, the bracelet motion sensor 120-1 may be worn on the user's wrist to detect the acceleration, angular acceleration, and geomagnetic data of the wrist while the user acts; the head motion sensor 120-2 may be worn on the user's head, in a wearable device such as a hat, helmet, or glasses, to detect the acceleration, angular acceleration, and geomagnetic data of the head. The acceleration comprises translational acceleration along the X, Y, and Z axes of a three-dimensional coordinate system, and the angular acceleration comprises acceleration about the three coordinate axes, i.e., the angular accelerations of pitch, roll, and yaw. The geomagnetic data comprise the detected direction of the geomagnetic field, i.e., the azimuth orientation of the motion sensor.
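For concreteness, the sketch below shows one possible in-memory layout for a single motion-sensing sample as described above. It is purely illustrative: the field names, units, and sensor identifiers are assumptions, not the actual data format of the device 100.

```python
# Hypothetical layout of one motion-sensor sample (illustrative only).
from dataclasses import dataclass

@dataclass
class ImuSample:
    t: float          # timestamp in seconds
    sensor_id: str    # e.g. "wrist_120_1" or "head_120_2" (hypothetical ids)
    ax: float         # translational acceleration along X
    ay: float         # translational acceleration along Y
    az: float         # translational acceleration along Z
    pitch_acc: float  # angular acceleration about the pitch axis
    roll_acc: float   # angular acceleration about the roll axis
    yaw_acc: float    # angular acceleration about the yaw axis
    azimuth: float    # geomagnetic heading, i.e. azimuth orientation of the sensor
```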
The motion-sensing data of the motion sensors 120-1 and 120-2 may be input to the support vector machine 121, which generates the motion features 122. The support vector machine 121 may be pre-trained and is thus adapted to generate motion features 122-1 and 122-2 with respect to preset motion patterns. The motion features 122 may be motion-pattern vectors, where each component represents the probability and strength with which the motion-sensing data belongs to the corresponding motion class. The motion classes comprise gross translation, gross rotation, gross vibration, fine translation, fine rotation, and fine vibration, and the strength comprises displacement distance, amplitude, and frequency.
The motion strength is calculated from the acceleration and angular acceleration in the motion-sensing data. For example, the displacement distance, amplitude, and frequency of the motion sensor can be computed by inertial navigation. The class components are combined with the displacement distance, amplitude, and frequency to form the motion features 122, as sketched below.
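The following is a minimal sketch of this step, assuming scikit-learn and pre-computed window statistics; the helper names (`train_motion_svm`, `motion_feature`) and the choice of summary statistics are illustrative assumptions, not the patented implementation.

```python
# Illustrative sketch: a pre-trained SVM maps a window of IMU statistics to
# per-class probabilities, which are concatenated with intensity values
# (displacement distance, amplitude, frequency) to form a motion feature.
import numpy as np
from sklearn.svm import SVC

MOTION_CLASSES = [
    "gross_translation", "gross_rotation", "gross_vibration",
    "fine_translation", "fine_rotation", "fine_vibration",
]

def train_motion_svm(windows: np.ndarray, labels: np.ndarray) -> SVC:
    """windows: (n_samples, n_stats) summary statistics of accel/gyro/magnetometer windows."""
    svm = SVC(probability=True)  # probability=True enables per-class probabilities
    svm.fit(windows, labels)
    return svm

def motion_feature(svm: SVC, window_stats: np.ndarray,
                   distance: float, amplitude: float, frequency: float) -> np.ndarray:
    """Concatenate class probabilities with intensity values, as in feature 122."""
    probs = svm.predict_proba(window_stats.reshape(1, -1))[0]
    return np.concatenate([probs, [distance, amplitude, frequency]])
```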
The following data are thus obtained: the spatial features 112 and temporal features 113 based on the image data, the motion pattern features 122, and the spatial relationship features 130.
The spatial features of the image data comprise 8x, 16x, and 32x down-sampled features of the video frames, which are extracted using a convolutional neural network and combined to form multi-scale features. The temporal features of the image data are formed by randomly selecting a portion of the video frames from the frames within a time window before the current frame and combining the spatial features of the selected frames. The motion pattern comprises classes and intensity: the classes comprise gross translation, gross rotation, gross vibration, fine translation, fine rotation, and fine vibration, and the intensity comprises distance, amplitude, and frequency. The spatial relationship features are obtained as follows: based on the sensing data of the head motion sensor and of the bracelet motion sensor, the spatial relationship vector of the bracelet motion sensor relative to the head motion sensor is computed by inertial navigation, and the time series of this vector is taken as the spatial relationship feature. The spatial relationship feature thus characterizes the position of the limbs relative to the head; sensing where the hands are relative to the body helps determine the housekeeping working mode.
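A minimal sketch of the multi-scale spatial feature extraction, assuming PyTorch. The channel widths and the way the three scales are pooled and concatenated are assumptions; the text above only specifies that 8x, 16x, and 32x down-sampled features are extracted by a convolutional network and combined.

```python
# Illustrative multi-scale spatial feature extractor: three stride-2 stages
# reach 8x down-sampling, two more reach 16x and 32x; each scale is pooled
# to a vector and the vectors are concatenated into one multi-scale feature.
import torch
import torch.nn as nn

class MultiScaleSpatial(nn.Module):
    def __init__(self, c: int = 32):
        super().__init__()
        def block(cin, cout):  # each block halves the spatial resolution
            return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.ReLU())
        self.s2 = block(3, c)    # 2x
        self.s4 = block(c, c)    # 4x
        self.s8 = block(c, c)    # 8x
        self.s16 = block(c, c)   # 16x
        self.s32 = block(c, c)   # 32x

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 3, H, W) video frame
        f8 = self.s8(self.s4(self.s2(frame)))   # 8x down-sampled feature map
        f16 = self.s16(f8)                      # 16x
        f32 = self.s32(f16)                     # 32x
        pool = lambda f: f.mean(dim=(2, 3))     # global average pool per scale
        return torch.cat([pool(f8), pool(f16), pool(f32)], dim=1)
```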
The spatial relationship feature 130 is generated from the motion-sensing data of the bracelet motion sensor 120-1 and the head motion sensor 120-2. The spatial feature 112 and temporal feature 113 of the video data, the motion features 122-1 and 122-2 of the motion sensors, and the spatial relationship feature 130 obtained above may be combined by concatenation and input together to the neural network 140. The neural network 140 outputs vectors over the behavior patterns and working strength, and the behavior pattern with the highest probability is taken as the user's working mode 150.
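A hedged sketch of this fusion step, again assuming PyTorch: the four feature groups are concatenated and fed to a small network whose most probable behavior pattern is taken as the working mode 150. The layer sizes and mode labels are illustrative assumptions.

```python
# Illustrative fusion of spatial, temporal, motion, and spatial-relationship
# features into a working-mode prediction (layer sizes are assumptions).
import torch
import torch.nn as nn

WORK_MODES = ["cleaning", "decoration", "caregiver", "nanny"]

class ModeClassifier(nn.Module):
    def __init__(self, feat_dim: int, n_modes: int = len(WORK_MODES)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_modes))

    def forward(self, spatial, temporal, motion, spatial_rel):
        # Concatenation plays the role of the "stitching" described above.
        fused = torch.cat([spatial, temporal, motion, spatial_rel], dim=-1)
        return self.net(fused).softmax(dim=-1)  # probabilities over behavior patterns

# The predicted mode is the most probable pattern:
#   mode = WORK_MODES[probs.argmax(dim=-1)]
```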
Step 202, analyzing the working mode and acquiring the standard actions and environmental scene in the working mode.
For the four domestic working modes, namely the cleaning, decoration, caregiver, and nanny working modes, standard actions (also called template actions) are set for each mode. A standard action is obtained by having a subject wear a motion-capture device, repeatedly perform the action according to the standard procedure and posture, and collecting the resulting motion data. In the recognized or preset domestic working mode, the standard actions associated with that mode are stored.
FIG. 3 shows a schematic diagram of acceleration sensor values for a typical hand motion while wiping a window according to an embodiment of the present disclosure.
The acceleration values of the standard action are expressed in a three-dimensional coordinate system; since the hand moves almost in a plane when wiping a window, the acceleration along one axis is nearly zero. Similarly, the angular acceleration comprises accelerations about the three coordinate axes of the three-dimensional coordinate system, i.e., the angular accelerations of pitch, roll, and yaw.
Besides acceleration and angular acceleration values, a standard action may also be represented by frame-difference vectors of the video, displacement values, and gyroscope values, as well as by other dimensions such as the image-based spatial and temporal features, the motion pattern features, and the spatial relationship features; the present disclosure is not limited in this respect.
The environmental scene for each domestic working mode is also pre-established and stored, in order to reduce the error in calculating the action deviation. For example, for the nanny, cleaning, and decoration working modes, the environmental scenes are divided into indoor and outdoor: indoor refers to the everyday living environment, and outdoor to the everyday outdoor environment. For the caregiver working mode, day and night scenes are added in addition to indoor and outdoor. The identification device 100 can automatically recognize the current environmental scene, and the employer or user can also set it manually.
Step 203, inputting the actual action of the user and the standard action obtained by analysis into the machine learning model, and calculating the deviation between the actual action and the standard action of the user.
The machine learning model used in the present disclosure is a deep neural network model comprising a convolutional layer, a pooling layer, a nonlinear transformation layer, and a weighted nonlinear layer connected in sequence.
When calculating the deviation for each action class, data within a specific sliding-window range of the actual action and the standard action are taken as input; each data point in the input window may be mapped to an N-dimensional vector. The convolutional layer then generates global features on its hidden nodes; these features are fed to the pooling layer and then pass through the nonlinear transformation layer and the weighted nonlinear layer. Finally, the features, including local and global features, are fed into a standard radial (RBF) network, and the hidden-layer output of the last feature-extraction layer is multiplied by a weight w_i and fed back to the linear neural unit, i.e., the pooling layer, so that valuable information is recycled and its weight in the overall information is increased. A back-propagation algorithm is used to train the network until the whole network is suitably stable.
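The sketch below is a heavily simplified stand-in for this deviation model, assuming PyTorch: windowed actual and standard actions go in, a scalar deviation in [0, 1] comes out. The exact layer wiring described above, in particular the weighted feedback into the pooling layer and the radial (RBF) network, is not reproduced; this is only a plausible shape for the computation, not the patented architecture.

```python
# Simplified, illustrative deviation model: embed each time step of the paired
# (actual, standard) windows, convolve, pool, and reduce to a scalar deviation.
import torch
import torch.nn as nn

class DeviationNet(nn.Module):
    def __init__(self, n_channels: int, embed: int = 16):
        super().__init__()
        self.embed = nn.Linear(2 * n_channels, embed)   # actual + standard per step
        self.conv = nn.Conv1d(embed, 32, kernel_size=5, padding=2)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.head = nn.Sequential(nn.Linear(32, 16), nn.Tanh(),
                                  nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, actual: torch.Tensor, standard: torch.Tensor) -> torch.Tensor:
        # actual, standard: (batch, window, n_channels) sensor windows
        x = self.embed(torch.cat([actual, standard], dim=-1))  # (B, W, embed)
        x = self.conv(x.transpose(1, 2))                       # (B, 32, W)
        x = self.pool(x).squeeze(-1)                           # (B, 32)
        return self.head(x).squeeze(-1)                        # deviation in [0, 1]
```

Such a network would be trained with back-propagation against labeled deviations, consistent with the training procedure described above.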
The purpose of the environmental scenes is to set, for each scene, the weights w_i that reinforce the corresponding part of the machine learning model, making it pay more attention to the relevant objects. For example, for the cleaning working mode in an indoor scene, more attention can be paid to the cleaning effect on the objects in the image. Hand-motion recognition helps determine whether the user is wiping windows, sweeping the floor, wiping a table, or performing another action. If the user's current action is determined to be sweeping, the weight of the features of objects on the floor in the image is increased to facilitate judging the cleaning effect.
Further, values for the cleaning task, such as the floor area or window area, may be preset for step 203. Thus, during cleaning work, the cleaned area, the uncleaned area, and the time taken can be identified in step 203.
Step 204, in combination with the environmental scene, if the deviation exceeds the threshold, sending a prompt to the user and displaying correct operation guidance.
Since different actions are recognized with different accuracy, each working mode has a corresponding threshold for each scene. For example, in an indoor environment there is little interference, motion is recognized accurately, and the threshold is small. In an outdoor environment there are many interfering objects and the light varies between strong and weak, so the error of video-based recognition is large, the error of the deviation is correspondingly large, and the threshold is larger as well.
When the threshold is exceeded, the user is prompted and correct operation guidance is displayed, in the hope that the user will complete the work with the correct working method. For example, for mopping in an indoor scene of the cleaning working mode, suppose the recognized action is moving the mop once per second, clearly below a normal person's pace, suggesting the user may be slacking. Assuming the threshold is a 10% deviation and the actual calculated deviation is 50%, the threshold is exceeded and a prompt may be given to the user.
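As a worked example of this rule, a minimal sketch with hypothetical per-(mode, scene) thresholds; the numbers match the mopping example above but are otherwise assumptions.

```python
# Hypothetical threshold table keyed by (working mode, environmental scene).
THRESHOLDS = {("cleaning", "indoor"): 0.10,   # little interference -> small threshold
              ("cleaning", "outdoor"): 0.25}  # noisy recognition -> larger threshold

def check_deviation(mode: str, scene: str, deviation: float) -> bool:
    """Return True when the user should be prompted with correct guidance."""
    return deviation > THRESHOLDS.get((mode, scene), 0.20)  # default is illustrative

assert check_deviation("cleaning", "indoor", 0.50)  # 50% > 10% -> prompt the user
```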
Presenting the correct operation guidance may include voice prompts and visual prompts, where the visual prompt is displayed through the AR glasses 160 of the device 100.
FIG. 4 illustrates a schematic diagram of an exemplary visual reminder according to an embodiment of the present disclosure.
The visualization in Fig. 4 is illustrated with the window-wiping action: the displayed area shows a cleaned region and an uncleaned region. In the cleaned region, the degree of cleaning, the cleaning efficiency, and the cleaning materials are displayed. The degree of cleaning is obtained in step 203, and the cleaning efficiency can be calculated against the preset area to be cleaned, where cleaning efficiency is the ratio of the time actually spent to the average time for the same area. An indicator such as "glass detergent: normal" reflects the consumption of cleaning materials and flags waste or under-use. The cleaned region itself is identified in step 203. The cleaning route 410 is the route taken through the cleaned region, as recorded by the device 100.
In the uncleaned region, the remaining area and the expected time are shown. The remaining area can be identified in step 203, or calculated as the difference between the preset area to be cleaned and the cleaned area. The expected time is estimated from the cleaning efficiency. The recommended cleaning route 411 is a route preset for the uncleaned region, as needed or according to some criterion. If no such route is set, the recommended route is either not shown, or a route generated by the device 100 is shown.
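These Fig. 4 quantities reduce to simple arithmetic. A hedged sketch, with the efficiency convention (actual time over average time, so values above 1 mean slower than average) taken from the text and all numbers illustrative:

```python
# Illustrative computation of cleaning efficiency, remaining area, and
# expected time for the uncleaned region (units and numbers are assumptions).
def cleaning_stats(total_area: float, cleaned_area: float,
                   time_spent_min: float, avg_time_per_m2_min: float):
    efficiency = time_spent_min / (avg_time_per_m2_min * cleaned_area)  # >1 = slower
    remaining = total_area - cleaned_area
    expected_min = remaining * avg_time_per_m2_min * efficiency
    return efficiency, remaining, expected_min

# e.g. 6 of 10 m2 cleaned in 18 min against an average of 2.5 min/m2:
eff, rem, eta = cleaning_stats(10.0, 6.0, 18.0, 2.5)  # eff=1.2, rem=4.0, eta=12.0
```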
The visual presentation may also include the task type and the order and time in which tasks must be completed. Tasks the user needs to clean properly (e.g., floors, furniture, windows) are marked in red. Information such as the cleaning tools needed, work-step guidance, and other helpful instructions can appear in the display area of the user's AR glasses, so that cleaning tasks with special requirements proceed smoothly and cleaning errors are reduced; the cleaner receives guidance while working, the time needed to train cleaners is reduced, and the cleaning result is assured.
For example, to save trouble, some users use the same rag for every area, such as wiping the living-room table with a rag used for cleaning the toilet or the kitchen; such a serious error can be detected immediately. The colors of the different regions can be customized for convenience: for example, marking the uncleaned region red and the cleaned region green is unfriendly to users with red-green color blindness, so other colors can be chosen.
Step 205, continuously monitoring whether the user's actions are corrected, and if the user cannot operate normally, sending an early warning to a preset contact.
The user's actions are continuously detected and the deviation is calculated. If the user still cannot operate normally after a certain time, an early-warning prompt can be sent to the employer or a preset contact through common communication means such as text messages, phone calls, and app notifications.
Step 206, storing the deviation data.
In one possible embodiment, in the cleaning working mode, the method for analyzing the working mode further includes: enhancing the analysis of leg-region motion and of the cleaning effect on cleaned objects.
Cleaning work involves many large body displacements, such as sweeping, mopping, and wiping surfaces. In an indoor environment, poor GPS signals degrade the displacement obtained by integrating the acceleration sensor, producing larger errors. The weight of leg motion in the image data is therefore increased, to reduce the error in calculating the deviation as much as possible.
In one possible embodiment, in the decoration working mode, the method for analyzing the working mode further includes: enhancing the analysis of hand-region motion, and proactively reminding the user of the operating specifications.
Decoration work is fine manual work, and in this mode particular attention is paid to hand movements and to the type of object held in the hand, in order to distinguish the user's specific work category, such as electrician or plumber.
Plumbing and electrical work in decoration are governed by safety and operating specifications. When the user is identified as belonging to such a working mode, the user is proactively reminded of the operating specifications.
In one possible embodiment, in the caregiver working mode, the method for analyzing the working mode further includes: enhancing the analysis of medication types and dosages.
A caregiver may be responsible for administering medication to the cared-for person, which is very important and must be error-free. The analysis of medication types and dosages is enhanced by increasing the weight of small items, such as bottles and tablets, in the image data.
In one possible embodiment, in the nanny working mode, the method for analyzing the working mode further includes: enhancing the analysis of the cared-for person's physical form, and analyzing conversations between the user and the cared-for person using a BERT-based machine learning model.
In nanny work, infants and the elderly differ greatly as cared-for persons: infants require little conversation, while the elderly may converse much more. Abuse is also a concern. It is therefore desirable to enhance the analysis of the cared-for person's physical form to determine age, and the analysis of the user's conversations with the cared-for person to detect abuse.
In one possible embodiment, when the conversation involves abuse, the preset contact is notified and the data is stored. If the user abuses or mistreats the cared-for person, the employer or preset contact is notified, and the data collected by the camera and audio devices is stored.
One way to analyze the conversation and determine whether it involves abuse is to use a BERT-based machine learning model.
The classification models currently acknowledged to perform best in the field of Natural Language Processing (NLP) are BERT, ELECTRA, and the like, so the scheme adopted by the present disclosure is a machine learning model obtained by further training a pre-trained BERT model with abuse-related vocabulary added.
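A sketch of such a classifier using the Hugging Face Transformers API. The checkpoint name, label set, and decision threshold are assumptions; the disclosure only specifies a BERT-based model further trained with abuse-related vocabulary, and the model below is only meaningful after that fine-tuning.

```python
# Illustrative BERT sequence classifier for flagging abusive utterances.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2)  # 0 = normal, 1 = abusive (after fine-tuning)

def is_abusive(utterance: str, threshold: float = 0.5) -> bool:
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)
    return probs[0, 1].item() > threshold  # meaningful only after fine-tuning
```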
FIG. 5 illustrates a schematic diagram of a typical work-operation-guidance system according to an embodiment of the present disclosure. The system 500 comprises:
a sensing unit 501 for sensing and identifying the ongoing working mode of a user;
an analysis unit 502 for analyzing the working mode and acquiring the standard actions and environmental scene in the working mode;
a deviation calculation unit 503 for inputting the user's actual actions and the standard actions obtained by analysis into the machine learning model and calculating the deviation between the user's actual actions and the standard actions;
a prompt unit 504 for, in combination with the environmental scene, sending a prompt to the user and displaying correct operation guidance if the deviation exceeds a threshold;
an early warning unit 505 for continuously monitoring whether the user's actions are corrected, and if the user cannot operate normally, sending an early warning to a preset contact;
a storage unit 506 for storing the deviation data.
Fig. 6 shows a schematic structural diagram of an electronic device for implementing an embodiment of the present disclosure. As shown in Fig. 6, the electronic apparatus 600 includes a central processing unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores the various programs and data necessary for the operation of the electronic apparatus 600. The CPU 601, ROM 602, and RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read from it is installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer-readable medium bearing instructions; in such embodiments, the instructions may be downloaded and installed from a network through the communication section 609 and/or installed from the removable medium 611. When the instructions are executed by the central processing unit (CPU) 601, the various method steps described in this disclosure are performed.
Although example embodiments have been described, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the disclosed concept. Accordingly, it should be understood that the above-described exemplary embodiments are not limiting, but illustrative.

Claims (10)

1. A method of providing work operation guidance, comprising:
sensing and identifying the working mode a user is engaged in;
analyzing the working mode, and acquiring the standard actions and environmental scene in the working mode;
inputting the user's actual actions and the standard actions obtained by analysis into a machine learning model, and calculating the deviation between the user's actual actions and the standard actions;
in combination with the environmental scene, if the deviation exceeds a threshold, sending a prompt to the user and displaying correct operation guidance;
continuously monitoring whether the user's actions are corrected, and if the user cannot operate normally, sending an early warning to a preset contact;
storing the deviation data.
2. The method of claim 1, wherein the working mode comprises: a cleaning working mode, a decoration working mode, a caregiver working mode, and a nanny working mode.
3. The method of claim 2, wherein in the cleaning working mode, analyzing the working mode further comprises: enhancing the analysis of leg-region motion and of the cleaning effect on cleaned objects.
4. The method of claim 2, wherein in the decoration working mode, analyzing the working mode further comprises: enhancing the analysis of hand-region motion, and proactively reminding the user of the operating specifications.
5. The method of claim 2, wherein in the caregiver working mode, analyzing the working mode further comprises: enhancing the analysis of medication types and dosages.
6. The method of claim 2, wherein in the nanny working mode, analyzing the working mode further comprises: enhancing the analysis of the cared-for person's physical form, and analyzing conversations between the user and the cared-for person using a BERT-based machine learning model.
7. The method of claim 6, further comprising: when the conversation involves abuse, notifying the preset contact and storing the data.
8. A system for providing work operation guidance, comprising:
a sensing unit for sensing and identifying the working mode a user is engaged in;
an analysis unit for analyzing the working mode and acquiring the standard actions and environmental scene in the working mode;
a deviation calculation unit for inputting the user's actual actions and the standard actions obtained by analysis into a machine learning model and calculating the deviation between the user's actual actions and the standard actions;
a prompt unit for, in combination with the environmental scene, sending a prompt to the user and displaying correct operation guidance if the deviation exceeds a threshold;
an early warning unit for continuously monitoring whether the user's actions are corrected, and if the user cannot operate normally, sending an early warning to a preset contact;
a storage unit for storing the deviation data.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 7.
CN202010954387.XA 2020-09-11 2020-09-11 Method and system for providing working operation guide Active CN112099629B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010954387.XA CN112099629B (en) 2020-09-11 2020-09-11 Method and system for providing working operation guide

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010954387.XA CN112099629B (en) 2020-09-11 2020-09-11 Method and system for providing working operation guide

Publications (2)

Publication Number Publication Date
CN112099629A true CN112099629A (en) 2020-12-18
CN112099629B CN112099629B (en) 2024-04-16

Family

ID=73751531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010954387.XA Active CN112099629B (en) 2020-09-11 2020-09-11 Method and system for providing working operation guide

Country Status (1)

Country Link
CN (1) CN112099629B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010146223A (en) * 2008-12-18 2010-07-01 Hitachi Ltd Behavior extraction system, behavior extraction method, and server
CN102483618A (en) * 2010-07-28 2012-05-30 费希尔-罗斯蒙德系统公司 Intrinsically-safe handheld field maintenance tool with image and/or sound capture
US20170188895A1 (en) * 2014-03-12 2017-07-06 Smart Monitor Corp System and method of body motion analytics recognition and alerting
US20170220854A1 (en) * 2016-01-29 2017-08-03 Conduent Business Services, Llc Temporal fusion of multimodal data from multiple data acquisition systems to automatically recognize and classify an action
CN110301012A (en) * 2017-02-24 2019-10-01 通用电气公司 The auxiliary information about health care program and system performance is provided using augmented reality
CN107292247A (en) * 2017-06-05 2017-10-24 浙江理工大学 A kind of Human bodys' response method and device based on residual error network
US20180372499A1 (en) * 2017-06-25 2018-12-27 Invensense, Inc. Method and apparatus for characterizing platform motion
CN111274881A (en) * 2020-01-10 2020-06-12 中国平安财产保险股份有限公司 Driving safety monitoring method and device, computer equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI766491B (en) * 2020-12-22 2022-06-01 國立清華大學 Human negligence warning method based on augmented reality
CN113204306A (en) * 2021-05-12 2021-08-03 同济大学 Object interaction information prompting method and system based on augmented reality environment

Also Published As

Publication number Publication date
CN112099629B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
US9974466B2 (en) Method and apparatus for detecting change in health status
US20200349347A1 (en) Systems and methods for monitoring and recognizing human activity
US11298050B2 (en) Posture estimation device, behavior estimation device, storage medium storing posture estimation program, and posture estimation method
US9262068B2 (en) Interactive surface
US20150302310A1 (en) Methods for data collection and analysis for event detection
CN113397520B (en) Information detection method and device for indoor object, storage medium and processor
US20180085045A1 (en) Method and system for determining postural balance of a person
AU2013296153A1 (en) A system, method, software application and data signal for determining movement
US11412957B2 (en) Non-contact identification of gait dynamics, patterns and abnormalities for elderly care
US11747443B2 (en) Non-contact identification of multi-person presence for elderly care
CN112099629B (en) Method and system for providing working operation guide
US20220084657A1 (en) Care recording device, care recording system, care recording program, and care recording method
US20210020295A1 (en) Physical function independence support device of physical function and method therefor
CN108346260A (en) It is a kind of for the monitoring method of unmanned plane, device, unmanned plane and monitoring system
Amir et al. Real-time threshold-based fall detection system using wearable IoT
JP7539779B2 (en) Anomaly Detection System
Garrido et al. Automatic detection of falls and fainting
JP2020187389A (en) Mobile body locus analysis apparatus, mobile body locus analysis program, and mobile body locus analysis method
AU2021106898A4 (en) Network-based smart alert system for hospitals and aged care facilities
JP5971510B2 (en) Watch system, watch device and program
JP2023131905A (en) Behavior estimation system, behavior estimation method, and program
Yoshihara et al. Life Log Visualization System Based on Informationally Structured Space for Supporting Elderly People
Ardiyanto et al. Autonomous monitoring framework with fallen person pose estimation and vital sign detection
Kaluža et al. A multi-agent system for remote eldercare
US12002224B2 (en) Apparatus and method for protecting against environmental hazards

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant