CN113326713B - Action recognition method, device, equipment and medium

Info

Publication number
CN113326713B
Authority
CN
China
Prior art keywords
operation step
working
gesture feature
standard
action
Prior art date
Legal status
Active
Application number
CN202010127230.XA
Other languages
Chinese (zh)
Other versions
CN113326713A
Inventor
崔维存
陈录城
石恒
刘子力
Current Assignee
Foshan Shunde Haier Intelligent Electronic Co ltd
Kaos Digital Technology (Qingdao) Co.,Ltd.
Karos Iot Technology Co ltd
Qingdao Blue Whale Technology Co ltd
Cosmoplat Industrial Intelligent Research Institute Qingdao Co Ltd
Original Assignee
Karos Iot Technology Co ltd
Qingdao Blue Whale Technology Co ltd
Haier Digital Technology Qingdao Co Ltd
Cosmoplat Industrial Intelligent Research Institute Qingdao Co Ltd
Priority date
Filing date
Publication date
Application filed by Karos Iot Technology Co ltd, Qingdao Blue Whale Technology Co ltd, Haier Digital Technology Qingdao Co Ltd, Cosmoplat Industrial Intelligent Research Institute Qingdao Co Ltd filed Critical Karos Iot Technology Co ltd
Priority to CN202010127230.XA
Publication of CN113326713A
Application granted
Publication of CN113326713B

Classifications

    • G06V40/20: Recognition of biometric, human-related or animal-related patterns in image or video data; movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G06V40/107: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands; static hand or arm
    • G06V40/117: Biometrics derived from hands
    • Y02P90/30: Computing systems specially adapted for manufacturing (Y02P: climate change mitigation technologies in the production or processing of goods)

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an action recognition method, device, equipment and medium. The action recognition method comprises the following steps: receiving an operation video, uploaded by an AR device, of an operator during the working process; identifying at least one gesture feature in the operation video, and determining, according to the gesture features, at least one operation step contained in the workflow, wherein each operation step contains at least one gesture feature; and determining the working hours of each operation step, and determining, according to the working hours of each operation step, whether the operation step matches the standard operation step. According to this technical scheme, the workflow is divided into a plurality of operation steps for analysis by identifying the gesture features contained in the operation video, so that the efficiency of identifying and monitoring work actions in the workflow is improved.

Description

Action recognition method, device, equipment and medium
Technical Field
The embodiment of the invention relates to an industrial operation monitoring technology, in particular to a method, a device, equipment and a medium for identifying actions.
Background
In industrial production, in order to ensure product quality and production efficiency, the operation actions of operators need to be monitored, the detailed operation times need to be recorded, and a working-hour database needs to be built, so that process engineers can monitor whether operations are standardized and can standardize the operation times.
In the existing operation monitoring method, a process engineer directly observes the actions of an operator on site and, following the standard method of motion analysis, splits the actions to form working-hour records. On the one hand, because this relies on manual work, the actions analyzed by different engineers differ, the true standard working hours are difficult to determine, and actions are inevitably missed through negligence; on the other hand, the time cost of direct on-site observation is too high, since each operator needs to be assigned an engineer, and the economic cost is therefore excessive.
Disclosure of Invention
The embodiment of the invention provides an action recognition method, device, equipment and medium, which divide a workflow into a plurality of operation steps for analysis by identifying the gesture features contained in an operation video, thereby improving the efficiency of identifying and monitoring work actions in the workflow.
In a first aspect, an embodiment of the present invention provides an action recognition method, where the method is applied to a background system, and the method includes:
receiving an operation video, uploaded by an AR device, of an operator during the working process;
identifying at least one gesture feature in the operation video, and determining, according to the gesture features, at least one operation step contained in the workflow, wherein each operation step contains at least one gesture feature;
and determining the working hours of each operation step, and determining, according to the working hours of each operation step, whether the operation step matches the standard operation step.
In a second aspect, an embodiment of the present invention provides an action recognition method, where the method includes:
the AR equipment acquires an operation video of an operator in the operation process through a camera, and uploads the operation video to a background system;
the background system identifies at least one gesture feature in the operation video, and determines at least one operation step contained in a workflow according to the gesture feature, wherein the operation step contains at least one gesture feature;
the background system determines the man-hour of each operation step and determines whether the operation step is matched with the standard operation step according to the man-hour of each operation step.
In a third aspect, an embodiment of the present invention further provides an action recognition apparatus, where the apparatus includes:
the operation video receiving module is used for receiving operation videos of operators in the working process, which are uploaded by the AR equipment;
an operation step determining module, configured to identify at least one gesture feature in the operation video, and determine at least one operation step included in a workflow according to the gesture feature, where the operation step includes at least one gesture feature;
And the operation step judging module is used for determining the working hours of each operation step and determining whether the operation step is matched with the standard operation step according to the working hours of each operation step.
In a fourth aspect, an embodiment of the present invention further provides an apparatus, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of action recognition provided by any embodiment of the present invention.
In a fifth aspect, embodiments of the present invention further provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for identifying actions provided by any embodiment of the present invention.
According to the technical scheme provided by the embodiment of the invention, the background system first identifies at least one gesture feature in the operation video uploaded by the AR device, then determines, according to the gesture features, at least one operation step contained in the workflow, finally determines the working hours of each operation step, and determines, according to the working hours of each operation step, whether the operation step matches the standard operation step. This solves the problem that an engineer has to observe an operator's workflow on site and split and analyze it, which is costly in both time and money, and improves the efficiency of identifying and monitoring work actions in the workflow.
Drawings
FIG. 1 is a flow chart of a method of motion recognition in accordance with a first embodiment of the present invention;
FIG. 2a is a flow chart of a method for identifying actions according to a second embodiment of the present invention;
FIG. 2b is a schematic diagram illustrating feature extraction of a single-hand gesture according to a second embodiment of the present invention;
fig. 2c is a schematic diagram illustrating the extraction of the positional relationship of two hands in the second embodiment of the present invention;
FIG. 3 is a flow chart of a method of motion recognition in a third embodiment of the present invention;
FIG. 4 is a flow chart of a method of motion recognition in a fourth embodiment of the present invention;
FIG. 5 is a schematic diagram of a motion recognition device according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus according to a sixth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of an action recognition method in a first embodiment of the present invention. The technical solution of this embodiment is applied to a background system and is suitable for the case where the background system splits and analyzes a workflow. The method may be performed by an action recognition device, which may be implemented in software and/or hardware and integrated into various general-purpose computer devices, and specifically includes the following steps:
Step 110, receiving an operation video of an operator in the working process, which is uploaded by the AR equipment.
The AR device used in this embodiment is a headband-style AR device equipped with a camera, a microphone and a speaker. It is suitable for application scenarios in which it is worn and records images for a long time, and it may be connected to the background system in various ways, such as a wireless network, Bluetooth or a USB Type-C cable. Before the AR device is used, its access rights are configured in the background system, so that the AR device can send the collected image information to the background system and receive instruction information processed by the background system.
In this embodiment, the background system first receives the operation video of the operator during the working process, uploaded by the AR device. Specifically, while wearing the AR device during work, the operator records his or her hand actions from a first-person view through the camera of the AR device to obtain the operation video, which is then uploaded to the background system; the background system receives the operation video so that the operation steps in it can be further identified and processed. By way of example, the operator's workflow may be a product component assembly flow.
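By way of illustration, the following Python sketch shows one possible form of the upload path described above. It assumes the background system exposes an HTTP upload endpoint; the URL, form-field names, token handling and the returned video identifier are assumptions made for illustration and are not part of the disclosed scheme.

    # Illustrative sketch only: the endpoint URL, form-field names and token
    # handling are assumptions, not something fixed by this embodiment.
    import requests

    def upload_operation_video(video_path: str,
                               operator_id: str,
                               backend_url: str = "http://backend.example/api/operation-videos",
                               access_token: str = "AR-DEVICE-TOKEN") -> str:
        """Upload a first-person operation video recorded by the AR device."""
        with open(video_path, "rb") as f:
            resp = requests.post(
                backend_url,
                headers={"Authorization": f"Bearer {access_token}"},  # access rights configured in advance
                files={"video": f},
                data={"operator_id": operator_id},
                timeout=60,
            )
        resp.raise_for_status()
        return resp.json()["video_id"]  # identifier used by the later processing steps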
Step 120, at least one gesture feature in the operation video is identified, and at least one operation step included in the operation flow is determined according to the gesture feature, wherein the operation step includes at least one gesture feature.
Here, a gesture feature refers to a hand feature that represents the current hand action of the operator and is used to determine the operation step the operator is currently executing. By way of example, a gesture feature may consist of the operator's hand contour information and the hand motion center.
In this embodiment, the background system identifies the received operation video and acquires at least one gesture feature contained in it, and then divides the whole workflow into at least one operation step according to the gesture features, so that the operator's work can be identified and analyzed. Specifically, the background system processes the received operation video frame by frame, identifies all gesture features contained in it, and then divides the operator's workflow into a plurality of operation steps according to the operation step to which each gesture feature belongs.
For example, the background system identifies at least one gesture feature contained in the operation video, where the geometric features of the left and right hands and the positional relationship between the two hands together form a gesture feature. It then compares each gesture feature with the set gesture feature recognition conditions and determines the operation step to which each gesture feature belongs, finally dividing the workflow into a plurality of operation steps. For instance, the gesture feature recognition condition for the step of taking material 1 may include a gesture feature of the operator reaching out with both hands to take material 1 and a gesture feature of placing material 1 on the workbench; after a gesture feature matching at least one of these recognition conditions is identified in the operation video, that gesture feature is determined to belong to the operation step of taking material 1.
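As a minimal sketch of the frame-by-frame pass described above, the following Python code reads the operation video and collects a time-stamped gesture feature per frame. The per-frame feature extractor is passed in by the caller, since this embodiment does not fix a particular extraction algorithm; the structure of the returned pairs is an assumption reused by the later sketches.

    # Sketch of the frame-by-frame identification pass. The per-frame extractor
    # is supplied by the caller; any frame for which it returns None is skipped.
    import cv2

    def collect_gesture_features(video_path, frame_extractor, fps_hint=25.0):
        """Walk the operation video frame by frame and return (timestamp, feature) pairs."""
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or fps_hint  # fall back if the container has no FPS metadata
        timed_features = []
        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            feature = frame_extractor(frame)
            if feature is not None:
                timed_features.append((frame_idx / fps, feature))  # timestamp in seconds
            frame_idx += 1
        cap.release()
        return timed_features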
Optionally, after identifying at least one gesture feature in the operation video and determining at least one operation step included in the workflow according to the gesture feature, the method further includes:
and according to the at least one operation step, simulating a virtual operation flow, and displaying a simulation result to assist the monitoring personnel in analyzing the operation step.
This optional embodiment provides a specific operation performed after the at least one operation step contained in the workflow has been determined from the gesture features: a simulation of the virtual workflow is carried out according to the determined operation steps, and the simulation result can be displayed on a display screen, so that background monitoring personnel can conveniently and intuitively see the whole operation process. By way of example, when it is identified that the operator is performing a product assembly operation, a virtual assembly simulation can be run according to the product CAD model and the operator's workflow, and the simulated assembly flow of the whole assembly job is finally displayed on a screen, so that background monitoring personnel can intuitively observe and analyze the operator's workflow.
Step 130, determining the man-hour of each operation step, and determining whether the operation step is matched with the standard operation step according to the man-hour of each operation step.
In this embodiment, after dividing the workflow into a plurality of operation steps, the man-hours corresponding to each operation step are determined according to the time spent for each operation step, and whether each operation step matches with the standard operation step is determined according to the man-hours corresponding to each operation step. Specifically, the working hours corresponding to each operation step are determined according to the starting time and the ending time of each operation step, then the working hours of the operation step are compared with the standard working hours corresponding to the operation step, and whether the operation step is matched with the standard operation step is determined.
For example, if the operation step of taking material 1 starts at 3 minutes 5 seconds and ends at 3 minutes 9.5 seconds, the working hours of this operation step are determined to be 4.5 seconds. The standard working hours corresponding to the operation step of taking material 1, for example 4 seconds, are then obtained; the working-hour error is therefore 0.5 seconds. Whether the operation step matches the standard operation step is then judged according to a preset criterion: for example, if an operation step is considered not to match the standard operation step when the working-hour error is greater than 1 second, then the operation step of taking material 1 matches the standard operation step.
According to the technical scheme provided by the embodiment of the invention, the background system first identifies at least one gesture feature in the operation video uploaded by the AR device, then determines, according to the gesture features, at least one operation step contained in the workflow, finally determines the working hours of each operation step, and determines, according to the working hours of each operation step, whether the operation step matches the standard operation step. This solves the problem that an engineer has to observe an operator's workflow on site and split and analyze it, which is costly in both time and money, and improves the efficiency of identifying and monitoring work actions in the workflow.
Example two
Fig. 2a is a flowchart of a motion recognition method in a second embodiment of the present invention, which is further refined on the basis of the foregoing embodiment, and provides specific steps of recognizing at least one gesture feature in the operation video and determining at least one operation step included in a workflow according to the gesture feature. The following describes a motion recognition method according to a second embodiment of the present invention with reference to fig. 2a, including the following steps:
step 210, receiving an operation video of an operator in the working process, which is uploaded by the AR equipment.
Step 220, identifying single-hand geometric features corresponding to the left hand and the right hand of the operator in the operation video, wherein the single-hand geometric features comprise gesture outlines and hand motion centers.
The single-hand geometric feature is a geometric feature characterizing the current single-hand action of the operator and is used for determining an operation step to which the current action of the operator belongs, wherein the single-hand geometric feature is shown in fig. 2b and comprises a gesture outline and a hand motion center.
In this embodiment, a way of identifying the operator's gesture features is provided. Specifically, the operation video is processed frame by frame to obtain the single-hand geometric features corresponding to the operator's left and right hands; by way of example, the operator's left-hand contour feature and left-hand motion center, and the right-hand contour feature and right-hand motion center, are obtained in turn, where the gesture contour may include the palm and the fingertip of each finger.
Step 230, determining a two-hand position relationship according to the one-hand geometric feature, and forming the gesture feature of the operator by the one-hand geometric feature and the two-hand position relationship together.
In this embodiment, after the single-hand geometric features corresponding to the left hand and the right hand are obtained, the two-hand position relationship is determined according to the single-hand geometric features, as shown in fig. 2c, and finally, the gesture features of the operator are formed by the single-hand geometric features and the two-hand position relationship together, where the two-hand position relationship may include distance and angle information of the two hands, for example.
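A minimal sketch of extracting the single-hand geometric features and the two-hand positional relationship is given below. It assumes a binary segmentation mask is already available for each hand (the embodiment does not prescribe a segmentation method), uses the largest contour as the gesture outline and the contour centroid as the hand motion center, and expresses the two-hand relationship as a distance and an angle.

    # Sketch only: assumes a binary mask per hand is already available; contour
    # and centroid are used here merely to illustrate the "gesture outline" and
    # "hand motion center", and distance/angle illustrate the two-hand relation.
    import math
    import cv2
    import numpy as np

    def single_hand_geometry(hand_mask: np.ndarray):
        """Return (contour, motion_center) for one hand from a binary mask."""
        contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None, None
        contour = max(contours, key=cv2.contourArea)         # gesture outline
        m = cv2.moments(contour)
        if m["m00"] == 0:
            return contour, None
        center = (m["m10"] / m["m00"], m["m01"] / m["m00"])  # hand motion center (centroid)
        return contour, center

    def two_hand_relation(left_center, right_center):
        """Distance and angle between the two hand centers."""
        dx = right_center[0] - left_center[0]
        dy = right_center[1] - left_center[1]
        return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))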
Step 240, when the gesture feature meets the set gesture feature recognition condition, determining that the gesture feature belongs to an operation step corresponding to the gesture feature recognition condition, and dividing the operation flow into at least one operation step according to the gesture feature;
the gesture feature recognition condition is at least one standard gesture feature corresponding to each operation step, which is preset.
In this embodiment, it is determined whether an identified gesture feature meets a set gesture feature recognition condition. Specifically, the gesture feature recognition conditions are the preset standard gesture features corresponding to each operation step; if an identified gesture feature is successfully matched with a standard gesture feature, the current gesture feature is determined to belong to the operation step corresponding to that gesture feature recognition condition, and the workflow is finally divided into at least one operation step according to the gesture features.
Illustratively, the gesture feature recognition condition corresponding to the step of assembling part 2 may be defined to include a gesture feature of the operator grasping part 2 with one hand and a gesture feature of fitting part 2 with both hands; after at least one gesture feature matching this recognition condition is identified in the operation video, that gesture feature is determined to belong to the operation step of assembling part 2. Finally, the operation steps corresponding to all the gesture features identified in the operation video are determined, and the whole workflow is divided into at least one operation step.
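The following sketch illustrates one way to carry out this division: each time-stamped gesture feature is compared against the standard gesture features of every operation step, and consecutive features assigned to the same step are grouped together. The similarity test and the contents of the recognition conditions are placeholders, since the embodiment does not fix how two gesture features are compared.

    # Sketch of dividing the recognized feature sequence into operation steps.
    # `matches(feature, standard)` stands in for whatever similarity test is
    # actually used to compare a gesture feature with a standard gesture feature.
    def split_into_steps(timed_features, recognition_conditions, matches):
        """Group (timestamp, feature) pairs into operation steps, in order."""
        steps = []
        for timestamp, feature in timed_features:
            for step_name, standard_features in recognition_conditions.items():
                if any(matches(feature, std) for std in standard_features):
                    if not steps or steps[-1]["step"] != step_name:
                        steps.append({"step": step_name, "features": []})
                    steps[-1]["features"].append((timestamp, feature))
                    break
        return steps

    # recognition_conditions might look like (contents illustrative only):
    # {"take material 1": [reach_for_material, place_on_workbench],
    #  "assemble part 2": [grasp_part_with_one_hand, fit_part_with_both_hands]}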
Step 250, determining the man-hour of each operation step, and determining whether the operation step is matched with the standard operation step according to the man-hour of each operation step.
According to the technical scheme of this embodiment, the operator's gesture features are determined by identifying the single-hand geometric features and the two-hand positional relationship, and the operation step to which each gesture feature belongs is determined according to the gesture feature recognition conditions, so that the workflow is divided into at least one operation step; the working hours of each operation step are finally determined, and whether the operation step matches the standard operation step is determined according to those working hours. This solves the problem that an engineer has to observe an operator's workflow on site and split and analyze it, which is costly in both time and money, and improves the efficiency of identifying and monitoring work actions in the workflow. Moreover, because the steps are divided by identifying gesture features, the method does not rely on the experience of monitoring personnel, which improves the accuracy of action identification and analysis.
Example III
Fig. 3 is a flowchart of a motion recognition method in a third embodiment of the present invention, which is further refined on the basis of the foregoing embodiment, and provides specific steps for determining man-hours of each operation step, and determining whether the operation step matches a standard operation step according to the man-hours of each operation step. The following describes a motion recognition method according to a third embodiment of the present invention with reference to fig. 3, including the following steps:
step 310, receiving an operation video of an operator in the working process, which is uploaded by the AR equipment.
Step 320, identifying at least one gesture feature in the operation video, and determining at least one operation step included in the operation flow according to the gesture feature, wherein the operation step includes at least one gesture feature.
Step 330, determining the man-hour of each operation step according to the appearance time of the start gesture feature and the appearance time of the end gesture feature corresponding to each operation step, wherein the start gesture feature is the first gesture feature corresponding to the operation step, and the end gesture feature is the last gesture feature corresponding to the operation step.
In this embodiment, a manner of determining the working hours of the operation steps is provided, specifically, first, the appearance time of the start gesture feature and the appearance time of the end gesture feature corresponding to the operation steps are obtained, and the difference between the appearance time of the end gesture feature and the appearance time of the start gesture feature is calculated, that is, the working hours of the current operation step, where the start gesture feature is the first gesture feature corresponding to the operation step, and the end gesture feature is the last gesture feature corresponding to the operation step.
For example, after it is determined that the first three gesture features in the operation video belong to operation step 1, the occurrence time of the first gesture feature and the occurrence time of the third gesture feature are obtained, and the time difference between them is calculated and used as the working hours of operation step 1, so that whether each operation step meets the specification can later be analyzed according to its working hours.
Step 340, each standard man-hour corresponding to each operation step is acquired.
In this embodiment, in order to determine whether each operation step meets the specification according to its working hours, the standard working hours corresponding to each operation step need to be acquired; these are obtained by analyzing a standard workflow designed in advance by an engineer. For example, for a product assembly job, before factory production starts, an engineer may draw up a set of standard workflows in advance to provide guidance for the operators' production operations; the standard workflows are analyzed to obtain the standard working hours corresponding to each standard operation step, so that whether the operators' operation steps meet the specification can be determined during the production process.
Step 350, calculating the working hour error of each operation step according to the working hour of each operation step and each standard working hour.
In this embodiment, after standard man-hours corresponding to the operation steps are obtained, an error between the man-hours of each operation step and the standard man-hours is calculated to determine whether each operation step meets the specification.
Step 360, when the man-hour error is greater than the set threshold, determining that the operation step does not match the standard operation step.
In this embodiment, a way of determining whether each operation step meets the specification according to the working-hour error is provided; specifically, when the working-hour error is greater than the set threshold, the operation step is determined not to match the standard operation step. By way of example, if the threshold of the working-hour error is set to 1 second, the standard working hours for assembling part 2 are 5 seconds, and the working hours of the corresponding operation step performed by the current operator are 6.5 seconds, then the working-hour error is 1.5 seconds, which is clearly greater than the set threshold, so the current operation step is determined not to match the standard operation step.
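A minimal sketch of steps 330 to 360 follows: the working hours of a step are taken as the gap between the appearance times of its first and last gesture features, and the step is flagged when the error against the standard working hours exceeds the threshold. The step dictionaries reuse the structure assumed in the earlier segmentation sketch, and the 1-second threshold is just the value used in the example above.

    # Sketch of steps 330-360: working hours from the first/last gesture-feature
    # timestamps of each step, then an error check against the standard values.
    def step_man_hours(step):
        """Working hours of one operation step, in seconds."""
        timestamps = [t for t, _ in step["features"]]
        return max(timestamps) - min(timestamps)

    def find_mismatched_steps(steps, standard_man_hours, threshold=1.0):
        """Return (step name, actual, standard) for every step whose error exceeds the threshold."""
        mismatched = []
        for step in steps:
            actual = step_man_hours(step)
            standard = standard_man_hours[step["step"]]
            if abs(actual - standard) > threshold:
                mismatched.append((step["step"], actual, standard))
        return mismatched

    # With the figures above: a 6.5 s "assemble part 2" step against a 5 s
    # standard gives a 1.5 s error, which exceeds the 1 s threshold.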
Optionally, after determining that the operation step does not match the standard operation step, the method further includes:
dividing the operation step into at least one operation action according to at least one gesture feature contained in the operation step, wherein each operation action corresponds to one gesture feature;
Acquiring the time of the operator holding the gesture feature as the working time of the working action corresponding to the gesture feature;
calculating the working hour error of each working action according to the working hour of each working action and the standard working hour corresponding to each working;
and when the working hour error is larger than a set threshold value, determining that the working action is not matched with the standard working action.
In this optional embodiment, a specific analysis performed after it is determined that an operation step does not match the standard operation step is provided. In order to further locate where the working-hour error arises, the operation step is first divided into at least one working action according to the gesture features it contains, where each gesture feature corresponds to one working action; the working hours of the working action corresponding to a gesture feature are then determined from the time for which the operator holds that gesture feature; finally, the working hours of each working action are compared with the standard working hours corresponding to that action, and when the working-hour error is greater than the set threshold, the current working action is determined not to match the standard working action.
For example, when it is determined that the operation step of assembling part 2 does not match the standard operation step, this operation step is analyzed further. According to the two gesture features contained in the step of assembling part 2, the step is divided into the two working actions corresponding to those gesture features, namely the action of taking part 2 and the action of fitting part 2; the time the operator stays in each of the two actions is then obtained as the working hours of those actions. Finally, by comparing the working hours of the two actions with their corresponding standard working hours, if the working-hour error of the action of fitting part 2 is found to be greater than the set threshold, for example 0.5 seconds, that action is determined not to match the standard working action and is therefore the main source of the working-hour error.
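The per-action breakdown can be sketched as below. Each gesture feature of the mismatched step becomes one working action, and the time the operator holds a feature is approximated here by the gap to the next feature's timestamp (the end of the step for the last one); that approximation, and the 0.5-second threshold, are taken from the example rather than fixed by the embodiment.

    # Sketch of the finer-grained analysis of a mismatched operation step.
    def action_man_hours(step, step_end_time):
        """Working hours of each working action, one per gesture feature of the step."""
        timestamps = [t for t, _ in step["features"]]
        durations = []
        for i, start in enumerate(timestamps):
            end = timestamps[i + 1] if i + 1 < len(timestamps) else step_end_time
            durations.append(end - start)
        return durations

    def find_offending_actions(step, step_end_time, standard_action_hours, threshold=0.5):
        """Compare each action's working hours with its standard value and report mismatches."""
        offending = []
        for idx, (actual, standard) in enumerate(
                zip(action_man_hours(step, step_end_time), standard_action_hours)):
            if abs(actual - standard) > threshold:
                offending.append((idx, actual, standard))
        return offending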
Optionally, after determining that the job action does not match the standard job action, further comprising:
when the working hours of the working action are greater than the standard working hours corresponding to that action, sending operation error information to the AR device so as to instruct the AR device to issue a voice prompt to the operator indicating that the operation is not up to standard;
when the working hours of the working action are smaller than the standard working hours corresponding to that action, displaying the working action and issuing an operation optimization prompt to remind monitoring personnel to analyze whether the working action can be optimized.
This optional embodiment provides a specific operation performed after a working action is determined not to match the standard working action. First, it is judged whether the working hours of the action are greater than the corresponding standard working hours. If so, the operator's action is not up to standard; the reason may be that the operator is not concentrating, or that a certain action simply takes too long. In this case, operation error information is sent to the AR device to instruct it to issue a voice prompt telling the operator that the operation is not up to standard. If not, the operator's action does not match the standard action even though it is faster; this may be because the operator failed to perform some action properly within the shorter time, or because the operator has found a better way of working that improves efficiency. In this case, the operator's action is displayed and an operation optimization prompt, for example a message on the display screen or a voice prompt, is issued to remind monitoring personnel to analyze the action and determine whether it is compliant or whether the preset standard working action can itself be optimized.
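The two feedback branches can be sketched as follows; the notification callbacks are placeholders for whatever messaging channel actually connects the background system, the AR device and the monitoring display.

    # Sketch of the feedback branch once a working action is found not to match
    # its standard. `notify_ar_device` and `prompt_monitor` are placeholders.
    def route_mismatch_feedback(actual, standard, notify_ar_device, prompt_monitor):
        if actual > standard:
            # slower than standard: likely a non-standard operation, warn the operator
            notify_ar_device("Operation not up to standard, please follow the standard procedure.")
        else:
            # faster than standard: the standard action itself may be optimizable
            prompt_monitor("Action faster than standard; review for possible optimization.")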
According to the technical scheme provided by this embodiment, the background system first identifies at least one gesture feature in the operation video uploaded by the AR device and determines, according to the gesture features, the at least one operation step contained in the workflow; it then determines the working hours of each operation step and calculates the error between those working hours and the standard working hours. When the working-hour error is greater than the set threshold, the operation step is identified and analyzed further, so that the working action with the larger error is displayed directly and monitoring personnel do not need to search the whole workflow for the action that caused the error, which improves the efficiency of identifying and monitoring work actions in the workflow.
Example IV
Fig. 4 is a flowchart of an action recognition method in a fourth embodiment of the present invention. The technical solution of this embodiment is applied to an action recognition system and is suitable for the situation where an AR device collects the operation video and the workflow is split and analyzed by a background system. The action recognition method provided in the fourth embodiment is described below with reference to fig. 4, and includes the following steps:
in step 410, the AR device obtains an operation video of an operator in a working process through the camera, and uploads the operation video to the background system.
In this embodiment, during the process of wearing the AR device to perform a job, an operator records the hand movements of the operator through the camera of the AR device at a first viewing angle, obtains an operation video of the operator, and then uploads the operation video to the background system, so as to further identify and process the operation steps in the operation video.
Step 420, the background system identifies at least one gesture feature in the operation video, and determines at least one operation step included in the workflow according to the gesture feature, where the operation step includes at least one gesture feature.
In this embodiment, the background system identifies the received operation video and acquires at least one gesture feature contained in it, and then divides the whole workflow into at least one operation step according to the gesture features, so that the operator's work can be identified and analyzed. Specifically, the background system processes the received operation video frame by frame, identifies all gesture features contained in it, and then divides the operator's workflow into a plurality of operation steps according to the operation step to which each gesture feature belongs.
For example, the background system recognizes 5 gesture features contained in the operation video and then matches each gesture feature with the preset standard gesture features; when the matching succeeds, the gesture feature is determined to belong to the operation step corresponding to that standard gesture feature. It may finally determine that the first 3 gesture features belong to operation step 1 and the last 2 gesture features belong to operation step 2, i.e. the workflow is divided into 2 operation steps.
Step 430, the background system determines the man-hour of each operation step, and determines whether the operation step matches the standard operation step according to the man-hour of each operation step.
In this embodiment, after the workflow is divided into a plurality of operation steps, the working hours corresponding to each operation step are determined according to the duration of that step, and whether each operation step matches the standard operation step is determined according to those working hours. Specifically, the occurrence time of the first gesture feature and the occurrence time of the last gesture feature contained in an operation step are obtained, and the working hours of the step are determined from these two times; the working hours are then compared with the standard working hours corresponding to the step to calculate the working-hour error, and an operation step whose working-hour error is greater than the set threshold is finally determined not to match the standard operation step, i.e. that step may involve a non-standard operation.
According to the technical scheme of this embodiment, an operation video of an operator during the working process is first collected by the AR device and uploaded to the background system; the background system then identifies at least one gesture feature in the uploaded operation video and determines, according to the gesture features, at least one operation step contained in the workflow; finally, after the working hours of each operation step are determined, whether each operation step matches the standard operation step is determined according to those working hours. No engineer is required to observe the operator's workflow on site and split and analyze it, and the efficiency of identifying and monitoring work actions in the workflow is improved.
Example five
Fig. 5 is a schematic structural diagram of an action recognition device according to a fifth embodiment of the present invention, where the action recognition device includes: an operation video receiving module 510, an operation step determining module 520, and an operation step judging module 530.
An operation video receiving module 510, configured to receive an operation video of an operator in a working process, which is uploaded by the AR device;
an operation step determining module 520, configured to identify at least one gesture feature in the operation video, and determine at least one operation step included in a workflow according to the gesture feature, where the operation step includes at least one gesture feature;
the operation step judging module 530 is configured to determine the man-hours of each operation step, and determine whether the operation step matches with the standard operation step according to the man-hours of each operation step.
According to the technical scheme provided by the embodiment of the invention, the background system first identifies at least one gesture feature in the operation video uploaded by the AR device, then determines, according to the gesture features, at least one operation step contained in the workflow, finally determines the working hours of each operation step, and determines, according to the working hours of each operation step, whether the operation step matches the standard operation step. This solves the problem that an engineer has to observe an operator's workflow on site and split and analyze it, which is costly in both time and money, and improves the efficiency of identifying and monitoring work actions in the workflow.
Optionally, the operation step determining module 520 includes:
the single-hand geometric feature recognition unit is used for recognizing single-hand geometric features corresponding to the left hand and the right hand of an operator in the operation video, wherein the single-hand geometric features comprise gesture outlines and hand motion centers;
the gesture feature forming unit is used for determining a two-hand position relation according to the single-hand geometric feature and forming gesture features of an operator by the single-hand geometric feature and the two-hand position relation;
an operation step determining unit, configured to determine that the gesture feature belongs to an operation step corresponding to a set gesture feature recognition condition when the gesture feature meets the set gesture feature recognition condition, and divide the operation flow into at least one operation step according to the gesture feature;
the gesture feature recognition condition is at least one preset standard gesture feature corresponding to each operation step.
Optionally, the operation step determining module 530 includes:
the working hour determining unit is used for determining working hours of the operation steps according to the appearance time of the starting gesture feature and the appearance time of the ending gesture feature corresponding to the operation steps, wherein the starting gesture feature is the first gesture feature corresponding to the operation step, and the ending gesture feature is the last gesture feature corresponding to the operation step;
A standard man-hour obtaining unit for obtaining each standard man-hour corresponding to each operation step;
a man-hour error calculation unit for calculating a man-hour error of each operation step according to the man-hour of each operation step and each standard man-hour;
and the operation step judging module unit is used for determining that the operation step is not matched with the standard operation step when the working hour error is larger than a set threshold value.
Optionally, the action recognition device further includes:
the operation action dividing module is used for dividing the operation step into at least one operation action according to at least one gesture feature contained in the operation step after the operation step is not matched with the standard operation step, wherein each operation action corresponds to one gesture feature;
a man-hour determining module, configured to obtain a time when the gesture feature is maintained by the operator, as a man-hour of a working action corresponding to the gesture feature;
the working hour error calculation module is used for calculating the working hour error of each working action according to the working hour of each working action and the standard working hour corresponding to each working action;
and the operation action judging module is used for determining that the operation action is not matched with the standard operation action when the working hour error is larger than a set threshold value.
Optionally, the action recognition device further includes:
an operation error information sending module, configured to send operation error information to the AR device when the working hours of the working action are greater than the standard working hours corresponding to that action, so as to instruct the AR device to issue a voice prompt to the operator indicating that the operation is not up to standard;
and the operation optimization prompting module is used for displaying the working action when its working hours are smaller than the standard working hours corresponding to that action, and issuing an operation optimization prompt to remind monitoring personnel to analyze whether the working action can be optimized.
Optionally, the action recognition device further includes:
and the simulation result display module is used for carrying out simulation of the virtual work flow according to at least one operation step after identifying at least one gesture feature in the operation video and determining at least one operation step included in the work flow according to the gesture feature, and displaying a simulation result so as to assist the monitoring personnel to analyze the operation step.
The action recognition device provided by the embodiment of the invention can execute the action recognition method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example six
Fig. 6 is a schematic structural diagram of a device according to a sixth embodiment of the present invention. As shown in fig. 6, the device includes a processor 60 and a memory 61; the number of processors 60 in the device may be one or more, and one processor 60 is taken as an example in fig. 6; the processor 60 and the memory 61 in the device may be connected by a bus or in other ways, with a bus connection taken as the example in fig. 6.
The memory 61 is a computer-readable storage medium that can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the action recognition method in the embodiment of the present invention (e.g., the operation video receiving module 510, the operation step determining module 520 and the operation step judging module 530 in the action recognition apparatus). The processor 60 executes the various functional applications and data processing of the device, i.e. implements the above-described action recognition method, by running the software programs, instructions and modules stored in the memory 61.
The method comprises the following steps:
receiving an operation video of an operator in the working process, which is uploaded by the AR equipment;
identifying at least one gesture feature in the operation video, and determining at least one operation step contained in a workflow according to the gesture feature, wherein the operation step contains at least one gesture feature;
And determining the working hours of each operation step, and determining whether the operation step is matched with the standard operation step according to the working hours of each operation step.
The memory 61 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the terminal, etc. In addition, the memory 61 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 61 may further comprise memory remotely located relative to processor 60, which may be connected to the device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Example seven
A seventh embodiment of the present invention also provides a computer-readable storage medium having stored thereon a computer program for performing an action recognition method when executed by a computer processor, the method comprising:
Receiving an operation video of an operator in the working process, which is uploaded by the AR equipment;
identifying at least one gesture feature in the operation video, and determining at least one operation step contained in a workflow according to the gesture feature, wherein the operation step contains at least one gesture feature;
and determining the working hours of each operation step, and determining whether the operation step is matched with the standard operation step according to the working hours of each operation step.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, etc., and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present invention.
It should be noted that, in the embodiment of the motion recognition apparatus, each unit and module included are only divided according to the functional logic, but not limited to the above-mentioned division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (8)

1. A method of motion recognition, the method being applied to a background system, the method comprising:
receiving an operation video of an operator in the working process, which is uploaded by the AR equipment;
Identifying at least one gesture feature in the operation video, and determining at least one operation step contained in a workflow according to the gesture feature, wherein the operation step contains at least one gesture feature;
determining the working hours of each operation step, and determining whether the operation step is matched with a standard operation step according to the working hours of each operation step;
the step of determining the man-hour of each operation step and determining whether the operation step is matched with the standard operation step according to the man-hour of each operation step comprises the following steps:
determining the working hours of each operation step according to the appearance time of the starting gesture feature and the appearance time of the ending gesture feature corresponding to each operation step, wherein the starting gesture feature is the first gesture feature corresponding to the operation step, and the ending gesture feature is the last gesture feature corresponding to the operation step;
acquiring standard working hours corresponding to the operation steps;
calculating the working hour error of each operation step according to the working hour of each operation step and each standard working hour;
when the man-hour error is greater than a set threshold, determining that the operation step is not matched with the standard operation step;
After determining that the operation step does not match the standard operation step, further comprising:
dividing the operation step into at least one operation action according to at least one gesture feature contained in the operation step, wherein each operation action corresponds to one gesture feature;
acquiring the time of the operator holding the gesture feature as the working time of the working action corresponding to the gesture feature;
calculating the working hour error of each working action according to the working hour of each working action and the standard working hour corresponding to each working action;
and when the working hour error is larger than a set threshold value, determining that the working action is not matched with the standard working action.
2. The method of claim 1, wherein identifying at least one gesture feature in the operation video and determining at least one operation step involved in a workflow from the gesture feature comprises:
identifying single-hand geometric features corresponding to left and right hands of an operator in the operation video, wherein the single-hand geometric features comprise gesture outlines and hand motion centers;
determining a two-hand position relation according to the single-hand geometric feature, and forming gesture features of an operator by the single-hand geometric feature and the two-hand position relation together;
When the gesture features meet the set gesture feature recognition conditions, determining that the gesture features belong to operation steps corresponding to the gesture feature recognition conditions, and dividing the operation flow into at least one operation step according to the gesture features;
the gesture feature recognition condition is at least one preset standard gesture feature corresponding to each operation step.
3. The method of claim 1, further comprising, after determining that the job action does not match a standard job action:
when the working hours of the working action are greater than the standard working hours corresponding to that working action, sending operation error information to the AR device so as to instruct the AR device to issue a voice prompt to the operator indicating that the operation is not up to standard;
when the working hours of the working action are smaller than the standard working hours corresponding to that working action, displaying the working action and issuing an operation optimization prompt to remind monitoring personnel to analyze whether the working action can be optimized.
4. The method of claim 1, further comprising, after identifying at least one gesture feature in the operation video and determining at least one operation step included in a workflow based on the gesture feature:
And according to the at least one operation step, simulating a virtual operation flow, and displaying a simulation result to assist monitoring staff in analyzing the operation step.
5. A method of motion recognition, comprising:
the AR equipment acquires an operation video of an operator in the operation process through a camera, and uploads the operation video to a background system;
the background system identifies at least one gesture feature in the operation video, and determines at least one operation step contained in a workflow according to the gesture feature, wherein the operation step contains at least one gesture feature;
the background system determines the working hours of each operation step and determines whether the operation step is matched with a standard operation step according to the working hours of each operation step;
wherein the step of the background system determining the working hours of each operation step and determining, according to the working hours of each operation step, whether the operation step matches a standard operation step comprises:
determining working hours of each operation step according to the appearance time of the starting gesture feature and the appearance time of the ending gesture feature corresponding to each operation step, wherein the starting gesture feature is the first gesture feature corresponding to the operation step, and the ending gesture feature is the last gesture feature corresponding to the operation step;
Acquiring standard working hours corresponding to each operation step;
calculating the working hour error of each operation step according to the working hour of each operation step and each standard working hour;
when the working hour error is larger than the set threshold value, determining that the operation step is not matched with the standard operation step;
after determining that the operation step does not match the standard operation step, further comprising:
dividing the operation step into at least one operation action according to at least one gesture feature contained in the operation step, wherein each operation action corresponds to one gesture feature;
acquiring the time of the operator holding the gesture feature as the working time of the working action corresponding to the gesture feature;
calculating the working hour error of each working action according to the working hours of each working action and the standard working hours corresponding to each working action;
and when the working hour error is larger than a set threshold value, determining that the working action is not matched with the standard working action.
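Claim 5 restates the method end to end: the AR equipment only captures and uploads the operation video, while the background system segments the gesture stream into operation steps and times each one. The toy walk-through below is self-contained under that reading, with the "video" reduced to timestamped gesture observations; the observation format, step names and threshold are assumed for illustration only.

```python
# Toy end-to-end walk-through of the claim-5 pipeline; observation format,
# step labels and thresholds are assumptions, not the patent's data model.
from itertools import groupby

def analyze(observations, standard_hours, threshold):
    """observations: time-ordered list of (timestamp_seconds, step_id) pairs,
    i.e. gesture features already labelled with the step they belong to."""
    report = []
    for step_id, group in groupby(observations, key=lambda o: o[1]):
        times = [t for t, _ in group]
        hours = times[-1] - times[0]          # start gesture to end gesture
        error = abs(hours - standard_hours[step_id])
        report.append({"step": step_id, "hours": hours,
                       "matched": error <= threshold})
    return report

if __name__ == "__main__":
    obs = [(0.0, "fasten"), (4.5, "fasten"), (9.0, "fasten"),
           (9.5, "inspect"), (12.0, "inspect")]
    print(analyze(obs, {"fasten": 8.0, "inspect": 3.0}, threshold=2.0))
```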
6. An action recognition device, comprising:
the operation video receiving module is used for receiving operation videos of operators in the working process, which are uploaded by the AR equipment;
an operation step determining module, configured to identify at least one gesture feature in the operation video, and determine at least one operation step included in a workflow according to the gesture feature, where the operation step includes at least one gesture feature;
The operation step judging module is used for determining the working hours of each operation step and determining whether the operation step is matched with the standard operation step according to the working hours of each operation step;
the operation step judging module comprises:
the working hour determining unit is used for determining working hours of the operation steps according to the appearance time of the starting gesture feature and the appearance time of the ending gesture feature corresponding to the operation steps, wherein the starting gesture feature is the first gesture feature corresponding to the operation step, and the ending gesture feature is the last gesture feature corresponding to the operation step;
a standard man-hour obtaining unit for obtaining each standard man-hour corresponding to each operation step;
a man-hour error calculation unit for calculating a man-hour error of each operation step according to the man-hour of each operation step and each standard man-hour;
an operation step judging unit, configured to determine that the operation step does not match the standard operation step when the working hour error is greater than a set threshold;
the motion recognition apparatus further includes:
the operation action dividing module is used for dividing the operation step into at least one operation action according to at least one gesture feature contained in the operation step after the operation step is not matched with the standard operation step, wherein each operation action corresponds to one gesture feature;
a working hour determining module, configured to obtain the time for which the operator holds the gesture feature, as the working hours of the working action corresponding to the gesture feature;
the working hour error calculation module is used for calculating the working hour error of each working action according to the working hour of each working action and the standard working hour corresponding to each working action;
and the operation action judging module is used for determining that the operation action is not matched with the standard operation action when the working hour error is larger than a set threshold value.
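The device of claim 6 decomposes the same logic into modules. The class layout below is one hypothetical way to mirror that decomposition; the class names, method names and data shapes are illustrative only and not taken from the patent.

```python
# Hypothetical module layout mirroring claim 6; names are illustrative only.
class OperationVideoReceiver:
    """Operation video receiving module: accepts the upload from the AR equipment."""
    def receive(self, upload: dict) -> list:
        return upload.get("frames", [])

class OperationStepDeterminer:
    """Operation step determining module: groups gesture features into operation steps."""
    def determine(self, gesture_features: list) -> list:
        steps, current = [], []
        for f in gesture_features:
            # Start a new step whenever the step label changes (placeholder logic).
            if current and f["step_id"] != current[-1]["step_id"]:
                steps.append(current)
                current = []
            current.append(f)
        if current:
            steps.append(current)
        return steps

class OperationStepJudge:
    """Operation step judging module: compares working hours with standard working hours."""
    def __init__(self, standard_hours: dict, threshold: float):
        self.standard_hours = standard_hours
        self.threshold = threshold

    def matches(self, step: list) -> bool:
        hours = step[-1]["appear_time"] - step[0]["appear_time"]
        return abs(hours - self.standard_hours[step[0]["step_id"]]) <= self.threshold
```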
7. An apparatus, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-4.
8. A computer readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-4.
CN202010127230.XA 2020-02-28 2020-02-28 Action recognition method, device, equipment and medium Active CN113326713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010127230.XA CN113326713B (en) 2020-02-28 2020-02-28 Action recognition method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN113326713A CN113326713A (en) 2021-08-31
CN113326713B true CN113326713B (en) 2023-06-09

Family

ID=77412531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010127230.XA Active CN113326713B (en) 2020-02-28 2020-02-28 Action recognition method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113326713B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115379300A (en) * 2022-07-27 2022-11-22 国能龙源环保有限公司 Auxiliary method and auxiliary device for normatively installing explosive package based on AI (Artificial Intelligence) recognition algorithm
CN115661726B (en) * 2022-12-26 2023-05-05 江苏中车数字科技有限公司 Autonomous video acquisition and analysis method for rail train workpiece assembly

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205080500U (en) * 2015-09-07 2016-03-09 哈尔滨市一舍科技有限公司 Electronic equipment with 3D subassembly of making a video recording
CN107784481A (en) * 2017-08-30 2018-03-09 平安科技(深圳)有限公司 Task timeliness method for early warning and device
CN110111016A (en) * 2019-05-14 2019-08-09 深圳供电局有限公司 Precarious position monitoring method, device and the computer equipment of operating personnel
JP2020013341A (en) * 2018-07-18 2020-01-23 コニカミノルタ株式会社 Work process management system, work process management method, and work process management program
CN110738135A (en) * 2019-09-25 2020-01-31 艾普工华科技(武汉)有限公司 worker work step specification visual identification judgment and guidance method and system


Also Published As

Publication number Publication date
CN113326713A (en) 2021-08-31

Similar Documents

Publication Publication Date Title
CN113326713B (en) Action recognition method, device, equipment and medium
WO2019196205A1 (en) Foreign language teaching evaluation information generating method and apparatus
CN110738135A (en) worker work step specification visual identification judgment and guidance method and system
CN110196580B (en) Assembly guidance method, system, server and storage medium
CN108596148B (en) System and method for analyzing labor state of construction worker based on computer vision
CN112364722A (en) Nuclear power operator monitoring processing method and device and computer equipment
CN111563480A (en) Conflict behavior detection method and device, computer equipment and storage medium
CN113380088A (en) Interactive simulation training support system
JP2018504960A (en) Method and apparatus for processing human body feature data
CN110378273B (en) Method and device for monitoring operation flow
US20210295053A1 (en) Observed-object recognition system and method
CN113627897A (en) Method and device for managing and controlling safety of field operating personnel and storage medium
CN114495057A (en) Data acquisition method, electronic device and storage medium
US20180165622A1 (en) Action analysis device, acton analysis method, and analysis program
CN111857470A (en) Unattended control method and device for production equipment and controller
CN112329560A (en) Illegal behavior recognition method and device for nuclear power operating personnel and computer equipment
CN113888024A (en) Operation monitoring method and device, electronic equipment and storage medium
CN112383734B (en) Video processing method, device, computer equipment and storage medium
CN113752266A (en) Human-computer cooperation method, system and medium based on cooperative driving and controlling integrated robot
Madrid et al. Recognition of dynamic Filipino Sign language using MediaPipe and long short-term memory
CN116758493A (en) Tunnel construction monitoring method and device based on image processing and readable storage medium
CN113536842A (en) Electric power operator safety dressing identification method and device
EP4156115A1 (en) Method and apparatus for identifying product that has missed inspection, electronic device, and storage medium
JP2007048232A (en) Information processing device, information processing method, and computer program
CN111242029A (en) Device control method, device, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266000 No. 1, Minshan Road, Qingdao area, China (Shandong) pilot Free Trade Zone, Qingdao, Shandong

Applicant after: CAOS industrial Intelligence Research Institute (Qingdao) Co.,Ltd.

Applicant after: Qingdao blue whale Technology Co.,Ltd.

Applicant after: Haier digital technology (Qingdao) Co.,Ltd.

Applicant after: Karos IoT Technology Co.,Ltd.

Address before: Room 257, management committee of Sino German ecological park, 2877 Tuanjie Road, Huangdao District, Qingdao City, Shandong Province, 266510

Applicant before: QINGDAO HAIER INDUSTRIAL INTELLIGENCE RESEARCH INSTITUTE Co.,Ltd.

Applicant before: Qingdao blue whale Technology Co.,Ltd.

Applicant before: Haier digital technology (Qingdao) Co.,Ltd.

Applicant before: Haier CAOS IOT Ecological Technology Co.,Ltd.

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 266000 No. 1, Minshan Road, Qingdao area, China (Shandong) pilot Free Trade Zone, Qingdao, Shandong

Patentee after: CAOS industrial Intelligence Research Institute (Qingdao) Co.,Ltd.

Patentee after: Qingdao blue whale Technology Co.,Ltd.

Patentee after: Kaos Digital Technology (Qingdao) Co.,Ltd.

Patentee after: Karos IoT Technology Co.,Ltd.

Address before: 266000 No. 1, Minshan Road, Qingdao area, China (Shandong) pilot Free Trade Zone, Qingdao, Shandong

Patentee before: CAOS industrial Intelligence Research Institute (Qingdao) Co.,Ltd.

Patentee before: Qingdao blue whale Technology Co.,Ltd.

Patentee before: Haier digital technology (Qingdao) Co.,Ltd.

Patentee before: Karos IoT Technology Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20230810

Address after: 266000 No. 1, Minshan Road, Qingdao area, China (Shandong) pilot Free Trade Zone, Qingdao, Shandong

Patentee after: CAOS industrial Intelligence Research Institute (Qingdao) Co.,Ltd.

Patentee after: Foshan Shunde Haier Intelligent Electronic Co.,Ltd.

Patentee after: Kaos Digital Technology (Qingdao) Co.,Ltd.

Patentee after: Karos IoT Technology Co.,Ltd.

Address before: 266000 No. 1, Minshan Road, Qingdao area, China (Shandong) pilot Free Trade Zone, Qingdao, Shandong

Patentee before: CAOS industrial Intelligence Research Institute (Qingdao) Co.,Ltd.

Patentee before: Qingdao blue whale Technology Co.,Ltd.

Patentee before: Kaos Digital Technology (Qingdao) Co.,Ltd.

Patentee before: Karos IoT Technology Co.,Ltd.